id | title | track | status | keywords | primary_area | author | authorids | aff | aff_domain | position | rating | confidence | soundness | contribution | presentation | rating_avg | confidence_avg | soundness_avg | contribution_avg | presentation_avg | corr_rating_confidence | project | github | Review |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1TJSnL3ywS | LLM Distillation for Efficient Few-Shot Multiple Choice Question Answering | main | Active | Few-shot learning;Multiple Choice Question Answering (MCQA);Data generation;Knowledge distillation;Multiple Choice Question Answering (MCQA) | transfer learning, meta learning, and lifelong learning | 3;3;3;5;6 | 4;4;4;4;4 | 3;2;2;3;3 | 2;1;2;2;3 | 3;1;3;3;3 | 4 | 4 | 2.6 | 2 | 2.6 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1. Do you have any idea, if the huge improvement of the performance of DeBerta after distillation is related to improving of the model's question answering ability, or just due to learning of Multiple Choice QA format?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is clearly written, and presents a practical method of LLM distillation to a smaller encoder-only model\n- A nice ablation study is provided. There are several interesting observations, e.g. increasing the performance with the distill loss + temperature adjustment, or the usage of the format correctness as an implicit sign of the model's confidence."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes the method of distillation of the large language model to the smaller one for efficient solving of Multiple Choice Question Answering task, via data generation and distillation loss. Two methods of data generation are considered: generate the whole question-answer structure with answer options in json format (via 5-shot prompting); or generate question-answer pairs obtaining each option separately. Then, the smaller model is trained on the generated data by distillation loss, learning to predict the larger model's probability of the generated options. The evaluation is done on MMLU benchmark. For the ablation study, ARC dataset is used"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The method is rather straightforward and does not contain a significant novelty, although the presented analysis is good\n- The practical usefullness of the considered task is not so clear. Indeed, Multiple Choice Question Answering is the specific QA format convenient for LLM's evaluation, but the MCQA results are not necessarily directly connected to the general QA abilities of the model. For encoder-only LLMs, classification-based approach looks more appropriate (i.e. scoring the correctness of the QA pair)\n- The model does not outperform Tasksource model which is obtained by the multi-task training of the same backbone: the improvement on MMLU is marginal (+0.5), and on ARC data the proposed approach works significantly worse."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The experiments lack a more detailed analysis of the two data generation methods (e.g., example-based analysis): Why do the two methods (JSON and Decompose) lead to different outcomes in performance? Why does JSON outperform Decompose?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The method is relatively simple and clearly explained.\n2. The paper explores the effectiveness of using LLMs to construct data for MCQA tasks, and the proposed distillation loss training method shows notable performance improvements.\n3. The paper conducted a relatively comprehensive ablation experiment."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to enhance the performance of low-computation-cost, only-encode models for few-shot multiple-choice question answering (MCQA) tasks. It leverages large language models (LLMs) to generate a high-quality, task-specific MCQA dataset for training and introduces a training approach that applies distillation loss based on LLM-assigned scores. Experimental results demonstrate the effectiveness of the proposed method: LLM-driven data generation and knowledge distillation for few-shot MCQA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method's performance improvement is limited and depends on the strength of the base model. While the gains are more pronounced with the weaker DeBERTa-base model, they are minimal with the stronger Tasksource model, and even slightly decreases in the case of Decompose. \n2. Additionally, when using DeBERTa-base, the best performance (JSON distill) achieved by using only the constructed dataset does not surpass that of a multi-task fine-tuned model (Tasksource)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "* Has there been any comparison with recently released lightweight models, such as those with a 1B parameter size?\n* Is there a specific reason why only the DeBERTa encoder model was tested?\n* Was there a particular reason for employing few-shot learning in an encoder model instead of using a masked language model (MLM)?\n* Does this paper really aligns to the ICLR conference is questionable. Any other natural language processing conference seems more suitable."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* This paper sheds light again on the encoder-only model, which had been receiving less attention recently.\n* The methodology's adaptability to existing domain-specific benchmarks suggests its potential for broad application across diverse fields."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the possibility of encoder model with LLM-generated dataset and knowledge distillation. To address current effortful MCQA benchmark making, this paper utilizes LLM’s ability in few shot prompting with two formatting strategies. Then, by distilling the loss of bigger LLM into small, encoder-only model, the paper shows the efficient way to achieve performance nearing that of bigger LLM."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**[Novelty is limited]**\n\nThe paper’s novelty appears limited, as it does not introduce a new dataset and relies primarily on formatting prompts either in full JSON format or as segmented parts, raising questions on whether these methods constitute a genuinely novel approach. Furthermore, the distillation technique applied here does not seem particularly innovative, as it essentially reduces to a form of fine-tuning.\n\n**[Using Encoder-only Models - limited Experimental setups]**\n\nAdditionally, while the paper suggests the encoder-only model’s powerful capabilities, this claim is primarily based on improvements from distillation and model size reduction. These factors alone may not suffice to substantiate the model’s claimed \"power\" without more substantial baseline comparisons, particularly in tasks beyond fine-tuning.\n\n**[Inadequate analysis of suggested method]**\n\nThere is inadequate validation of the quality of the LLM-generated dataset, which raises further concerns about the reliability and applicability of the findings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weakness"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The approach addresses a relevant problem in natural language processing, providing a practical solution for scenarios where computational resources are limited.\n2. The framework is straightforward, and the two methods of data generation (JSON and decomposed) are described in detail, with thoughtful consideration of their benefits and limitations.\n3. The paper presents extensive experiments, including performance comparisons, ablation studies, and evaluations on the MMLU benchmark."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach to address few-shot multiple choice question answering (MCQA) by leveraging large language models (LLMs) for data generation and knowledge distillation into a smaller, efficient encoder-only model, DeBERTa-v3-base. The study addresses the computational challenges associated with using LLMs directly in real-world applications and provides a three-step framework involving synthetic data generation, LLM-based scoring, and distillation training. Experimental results demonstrate significant improvements in accuracy over baseline models on the Massive Multitask Language Understanding (MMLU) benchmark, as well as competitive performance compared to larger models like LLaMA-7B and Flan-T5-250M. The paper also includes ablation studies on various generation methods, scoring techniques, and hyperparameters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method relies heavily on the availability of robust LLMs, which may not be readily accessible in languages other than English or for certain domain-specific tasks.\n2. The decomposed generation method, while reducing parsing errors, often results in noisy data due to longer and less structured answers."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* On line 245, why is it “approximately 4000 MCQA examples”? Shouldn’t this be exact?\n* Why was number of negative examples set to 5 when MMLU and ARC only have 3?\n* What temperature was used for the MMLU experiments?\n* In Section 3.2, how is a sequence is being transformed into a scalar — [CLS] token? Pooling?\n* From Section 3.2 it is my understanding that handling a MCQ with n options requires n forward passes. Is that correct?\n* How was inference done for the baseline models?\n* When JSON samples are not properly formatted, are they resampled, or are less than 1024 samples used?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(Originality) While synthetic data generation with LLMs and knowledge distillation into transformer based models are both widely used and studied, the authors consider the specific setting of MCQA and distilling a decoder-only model into an encoder-only model, which is a new setting.\n\n(Quality) The authors report results across several random seeds. They also do some nice ablation studies. The limitations section was also of high quality.\n\n(Clarity) The paper was generally clear and easy to follow.\n\n(Significance) As mentioned in originality, this paper explores a setting that is slightly different from past work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors use a few task-specific multiple choice questions as seed examples to get a LLM to generate task-specific, synthetic multiple choice data. They explore two ways of prompting the LLM to generate this data. They train a small, encoder-only model via knowledge distillation using soft labels assigned by the LLM. They show that training on synthetic data via distillation is better than just training on a few non-synthetic task-specific data points directly, and also compare to some other models. The authors also conduct ablation studies regarding the amount of synthetic data, synthetic data generation temperature, and choice of LLM."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* A couple typos (these didn’t affect my review at all, but just mentioning them)\n * Line 69/70 missing a space\n * Line 179 “a scalar values” -> “scalar values”\n* When generating synthetic data, how can you be sure you’re not generating questions that are in the MMLU/ARC test sets (or that are quite close?). It would be nice to see something like nearest neighbors of generated questions, or something like overlap of answer options with answer option sets from the test sets.\n* A note on the tasksource+decompose/JSON is I don’t think it can necessarily be concluded that tasksource+JSON is better than tasksource as 0.5 is quite a narrow margin.\n* In my mind the main weakness of this paper would be lack of significance.\n * In practice, in the resource constrained setting there are already compelling alternatives to the approach described in this paper. For example, tasksource has the same amount of parameters, comparable performance, and faster inference as it only needs one forward pass. It also doesn't require synthetic data generation for each task. Furthermore, performance is less good than that of e.g., Gemma-2-2b-it and similar models which can be run quite cheaply on even a laptop (especially after quantization). I don’t see when “distillation into DeBERTa” would be used in practice because there are already very compelling alternatives. I'd be happy to hear the authors' take on this, though.\n * A paper definitely doesn't need to be the best \"in practice\" option to be useful, as it might provide surprising/intuitive insights compared to previous work. However, I don’t find the results of this paper particularly surprising in light of past work. Knowledge distillation is already widely used with LMs, and distillation from larger LLMs to smaller LLMs is done all the time with good results. Synthetic data generation with LLMs is also frequently done, and has been shown to work well. 
That more synthetic data works better is as expected.\n\n*Strengths & Weaknesses tl;dr*: I think the authors’ study is well thought out and put together, and mostly easy to follow. However, I don’t think it provides substantial insight/methods beyond what already seems to be common knowledge in the research community. I’ve assigned a rating of 3, but I’d choose 4 if it were an option because I think the paper is overall well made but just doesn’t have the level of impact I typically associate with ICLR papers."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "uses a large language model for few-shot multiple-choice question answering by generating synthetic training data and distilling knowledge into a smaller model, significantly boosting its performance."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024llm,\ntitle={{LLM} Distillation for Efficient Few-Shot Multiple Choice Question Answering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1TJSnL3ywS},\nnote={under review}\n}"
},
"abstract": {
"value": "Multiple Choice Question Answering (MCQA) is an important problem with numerous real-world applications, such as medicine, law, and education. The high cost of building MCQA datasets makes few-shot learning pivotal in this domain. While Large Language Models (LLMs) can enable few-shot learning, their direct application in real-world scenarios is often hindered by their high computational cost. To address this challenge, we propose a simple yet effective approach that uses LLMs for data generation and scoring. Our approach utilizes LLMs to create MCQA data which contains questions and choices, and to assign probability scores to the generated choices. We then use the generated data and LLM-assigned scores to finetune a smaller and more efficient encoder-only model, DeBERTa-v3-base by leveraging distillation loss. Extensive experiments on the Massive Multitask Language Understanding (MMLU) benchmark demonstrate that our method improves accuracy from 28.9\\% to 39.3\\%, representing a gain of over 10\\% compared to a baseline finetuned directly on 5-shot examples. This shows the effectiveness of LLM-driven data generation and knowledge distillation for few-shot MCQA."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Few-shot learning",
"Multiple Choice Question Answering (MCQA)",
"Data generation",
"Knowledge distillation",
"Multiple Choice Question Answering (MCQA)"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1806f3accd9e08892dfab43cf7fe9dc18ecc46fa.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "LLM Distillation for Efficient Few-Shot Multiple Choice Question Answering"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1TXDtnDIsV | Learning Mamba as a Continual Learner | main | Active | Continual Learning;Sequence Modelling | transfer learning, meta learning, and lifelong learning | 3;5;6 | 4;4;4 | 2;2;3 | 2;2;2 | 3;2;3 | 4.666667 | 4 | 2.333333 | 2 | 2.666667 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "n/a"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I am open to discussion and willing to reconsider my score if my major concerns can be adequately addressed.\n\n\n**Claims on the Effectiveness of the Proposed Regularization Technique**\n\n- For example, lines 326-329 state:\n > We apply this regularization to MambaCL and other sequence prediction models (weighted by a scalar λ) together with the MCL objective in Eq. (7), which improves the meta-training stability and convergence for all models.\n\n- The authors do not fully support their claims about \"improving the meta-training stability and convergence for all models.\" Specifically, there are no experiments showing learning curves (or similar alternatives) for all models during meta-training to compare results with and without this technique. \n\n- A seemingly related empirical evidence is presented in Figure 4. However, the results appear to pertain to a *single* model, and it is unclear, based on the figure caption and the text in lines 481-485, which specific model (i.e., Mamba, transformers) was used in this ablation study. Although the experiment demonstrates the sensitivity of meta-testing performance to the regularization strength, it lacks comprehensive evidence across multiple models to support the authors claim.\n\n\n**Experiment Implementation Details**\n\n- In the paper, it is mentioned: \n > Following Lee et al., 2024, we set the initial learning rate to 1 × 10⁻⁴...\n\n- Cloud the authors please provide some motivations for using the same hyperparameters as in Lee et al., 2024, given that the meta-training setups differ? Specifically, the authors used a pre-trained CLIP backbone as a visual encoder and included the proposed regularization loss across all models. \n\n- Moreover, were these hyperparameters adjusted for different model architectures based on some meta-validation sets, e.g., for linear transformers and Mamba? If not, wouldn't using fixed hyperparameters for all experiments and models potentially lead to sub-optimal results? 
If these hyperparameters are not optimal for every models, this could produce misleading results and potentially invalidate the observations.\n\n**Meta-Overfitting in Figures 3a and 3b**\n\n- The authors observed that transformers and their variants seem to suffer from severe meta-overfitting based on the results in Figures 3a and 3b. However, the potential underlying causes for this overfitting are quite unclear. Specifically:\n\n - As previously mentioned, based on the current description of the implementation details, it's unclear whether this overfitting is due to the use of improper hyperparameters, such as learning rates.\n\n - Additionally, it is undetermined whether this overfitting is influenced by the use of regularization terms for all models during meta-training. Would removing this regularization loss for transformers significantly reduce meta-overfitting?\n\n- Could the authors please provide some insights into why Mamba did not suffer from the same degree of overfitting?\n\n- While the occurrence of meta-overfitting is expected, the degree of overfitting—particularly in relation to the number of training tasks and training shots used in meta-training—exhibited by transformers and their variants in Figures 3a and 3b is somewhat surprising. Specifically, in Figure 3b, adding more training shots per class even, and almost monotonically, decreased the classification accuracy on the queries.\n\n\n**Robustness in Figure 3c**\n\n- It is somewhat unclear how the authors performed the input noise perturbation. Specifically, what does $ x_i$ in line 473 refer to? Is it the original input image to the CLIP encoder, or the extracted image embeddings that serve as inputs to the sequential learning models?\n\n- I find it very interesting that Mamba exhibits excellent robustness to input noise, even with a standard deviation as large as 10. 
Could the authors potentially discuss some potential reasons behind Mamba's extreme robustness to large input noise?\n\n**General Comments on MCL**\n\n- Some important challenges in the MCL setup for continual learning include: 1) its application to long continual learning sequences, 2) the requirement for offline training datasets (meta-training), and 3) generalization to unseen long OOD meta-testing tasks. These challenges cannot be resolved simply by switching from transformers or their variants to Mamba.\n\n- Are there any differences on the problem formulation and the meta-training setups between the ones in the paper and the one in MetaICL: Learning to Learn In Context, Min et al., NAACL 2022?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is well-structured and easy to follow.\n- The authors clearly explained the issue of increased compute complexity with using transformers for MCL."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper follows the meta continual learning (MCL) framework as outlined by Lee et al., 2024. The authors meta-train sequential models on offline meta-training sequences to enhance their sequence modelling capability. The authors propose using Mamba as the sequential model instead of transformers or attention-free variants to alleviate high computational costs while still achieving satisfactory performance. Additionally, the authors introduce a selective regularization technique for meta-training, which enhances the association between query tokens and previously correlated input tokens. Experimental results demonstrate that Mamba achieves improved generalization and robustness compared to transformer variants in MCL, while using less memory for inference."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In general:\n\n- The paper shows limited novelty. The problem formulation, specifically the recasting of the continual learning problem as a sequential modelling problem in recurrent models, mirrors the previous work by Lee et al., 2024. From the technical side, the authors propose a new selective regularization technique for meta-training and claim it improves training stability and convergence. While the technique itself is novel, there are several questionable aspects regarding this technique and the authors' claims. I cannot fully credit the novelty of this technique until these issues are addressed.\n\n- Although the authors claim better generalization and robustness when using Mamba instead of transformers based on empirical results, these results appear somewhat questionable. Furthermore, there is a lack of new insights and detailed analysis; for instance, the authors did not delve deeper into the underlying mechanisms that led to these results. This deeper analysis is crucial, especially if the primary motivation of the paper is to use Mamba (or any different model architecture) instead of transformers for the same problem settings.\n\nPlease kindly refer to the questions for more details."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. It proposes MambaCL as a strong sequential approach to meta-continual learning. \n\n2. It performs thorough experiments and discover multiple interesting observations.\n- The use of Mamba may be more helpful for generalization over Transformers as discussed in Fig.3.\n- MambaCL is particularly effective in on fine-grained recognition tasks as shown in Table 3.\n- Integration of Mamba with MoE improves the MCL performance as reported in Table 6."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work addresses meta-continual learning using a state space model Mamba. It performs comprehensive experiments across various CL benchmarks and reports several interesting results, including comparison with Transformers and extension to Mamba mixture-of-experts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The technical novelty is limited.\n- This work is largely based on the work of (Lee et al., 2024), which first formulates the MCL problem as a sequent modeling.\n- This work simply replaces Transformers of (Lee et al., 2024) with a state space model Mamba. \n- Except this replacement, there is little novelty as its application is rather straightforward, following (Lee et al., 2024). \n\n2. The use of Mamba instead of Transformers leads to little performance improvement as reported in Table 1-5. \n- The main benefit of Mamba over Transformer lies in fewer parameters and increased processing speed as shown in Table 7.\n\n3. Implementation details are missing. \n- Appendix is too sketchy to fully understand how the MambaCL is implemented. \n- The code is not provided."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* It is interesting to explore how Mamba performs in a meta-continual learning setting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors explore a key research question: Can the attention-free Mamba model effectively handle meta-continual learning (MCL) tasks? They reframe State Space Models (SSM) and Mamba as sequence-prediction-based continual learners, training them via meta-learning across continual learning episodes. To enhance this training, they introduce a selectivity regularization technique. Extensive experiments reveal that Mamba consistently performs well in various MCL settings, significantly surpassing other attention-free approaches and often equaling or surpassing Transformer models in performance—all while using fewer parameters and computational resources. Notably, Mamba demonstrates strong reliability, generalization, and robustness in complex scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The conclusion of this paper is unsurprising, as Mamba's MCL performance aligns closely with its results on standard benchmarks.\n\n* There is insufficient analysis explaining how and why Mamba outperforms other attention-free architectures and achieves comparable results to Transformer-based models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Mamba as a Continual Learner},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1TXDtnDIsV},\nnote={under review}\n}"
},
"abstract": {
"value": "Continual learning (CL) aims to efficiently learn and accumulate knowledge from a data stream with different distributions. By formulating CL as a sequence prediction task, meta-continual learning (MCL) enables to meta-learn an efficient continual learner based on the recent advanced sequence models, e.g., Transformers. Although attention-free models (e.g., Linear Transformers) can ideally match CL's essential objective and efficiency requirements, they usually perform not well in MCL. Considering that the attention-free Mamba achieves excellent performances matching Transformers' on general sequence modeling tasks, in this paper, we aim to answer a question -- Can attention-free Mamba perform well on MCL? By formulating Mamba with a selective state space model (SSM) for MCL tasks, we propose to meta-learn Mamba as a continual learner, referred to as MambaCL. By incorporating a selectivity regularization, we can effectively train MambaCL. Through comprehensive experiments across various CL tasks, we also explore how Mamba and other models perform in different MCL scenarios. Our experiments and analyses highlight the promising performance and generalization capabilities of Mamba in MCL."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Continual Learning",
"Sequence Modelling"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f426ca68936779a2ca68468d9dc7f4ec832bf0fa.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Learning Mamba as a Continual Learner"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ThYY28HXg | GenXD: Generating Any 3D and 4D Scenes | main | Active | 3D Generation; 4D Generation; Diffusion Models | generative models | 3;5;6;8 | 5;5;4;4 | 1;2;3;3 | 2;3;4;3 | 2;2;3;3 | 5.5 | 4.5 | 2.25 | 3 | 2.5 | -0.83205 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Q1: Following W1, what are the assumption and failure cases of the proposed camera estimation?\n\nQ2: Following W3, please describe how the metric is calculated in detail for fair comparison against prior methods."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "S1: Sensible model design\nAlthough the masked latent conditioning is not new, the architectural modification upon SVD is sensible and allows joint training on 3D and 4D data. \n\nS2: General model for 3D and 4D generation\nThe proposed model is capable of 3D and 4D generation of both object-centric and scene-level videos, which is more general than most prior methods. On a side note, the authors should also include MotionCtrl in Table 1.\n\nS3: Good writing\nThe paper is well-written and easy to follow overall."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes GENXD, a latent diffusion model for 3D/4D generation of objects or scenes. Specifically, it adopts masked latent conditions to support various number of input views, and the alpha-fusing mechanism allows joint training on 3D and 4D data. Considering the lack of 4D scene dataset, the authors further curated a new dataset, CAMVID-30K, by estimating camera with a SfM-based method and filtering out videos without object motion. Qualitative and quantitative results show that the proposed method generates comparable or slightly more satisfactory outputs than corresponding prior arts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1: Limitation of camera pose estimation\nThe proposed camera pose estimation relies on segmentation of all moving pixels. However, in scenarios where camera moves independently of object motion, especially when camera motion is large or objects take up a large portion of the scene, it would be challenging to estimate accurate camera pose. Does the method assume that these cases do not exist in the dataset?\n\nW2: Quality of 3D object generation\nThe results of 3D object generation seem to be of comparable or worse quality than the prior state-of-the-arts both qualitatively (Figure 10) and quantitatively (Table 6). Moreover, the quantitative evaluation is incomplete since some more recent methods (Zero123XL, Magic123, SV3D, EscherNet, etc) are missing and the metric is limited to CLIP-I only, while prior works usually report metrics like LPIPS, SSIM, Chamfer Distance, 3D IoU (on 3D object datasets like Google Scanned Objects).\n\nW3: Evaluation of 4D object generation\nAgain, the quantitative evaluation for 4D object generation is limited to the CLIP-I metric and more recent methods like STAG4D and DreamGaussian4D are missing. Also, it is unclear if the metrics in Table 3 are calculated on the training (synthesized) video frames only or on densely sampled views and timestamps. Since the proposed method optimizes 4D-GS only on one camera orbit without SDS loss, I suspect that the outputs look good on these frames/views but worse than other methods in novel views.\n\nW4: Small camera motion in 4D scene generation\nAll the presented results on 4D scene generation seem to have smaller camera motion compared to results shown in prior work like MotionCtrl. 
Although the results in Figure 5 and supplemental video show decent temporal consistency and motion, I’m wondering if it is limited to camera trajectories without much deviation from the input view.\n\nW5: Lack of results on motion strength control\nWhile the paper emphasizes the contribution of motion strength control, there is only one example of a simple driving scene. It would be more insightful to show more diverse motion cases to understand the effeteness and limitations of it."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The author should provide rigorous analysis of the accuracy of the camera-controll capability, and how changing alpha values affects the generated motions."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper shares technical details on how to annotate magnitude motion and camera poses from in the wild videos. The alpha-fusion layers for motion disentangle seems an interesting design."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper trained a video generation model that can control camera trajectory and magnitude of motion and supports multiple frame conditioning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "First, I feel the claim of being able to perform 4D generation is an over-claim to me. 4D generation requires the capability of either directly generating 4D representations such as dynamic 3D GS, or at least generating synchronized multi-view videos like in SV4D. Neither of these capabilities were presented in the main paper. In table 1, the capability of generating synchronized videos were not discussed, and to me, this is a severe misrepresentation. It would be more appropriate for the author to rebrand their method as a motion-controllable and 3D-aware video model. \n\n2nd, although the idea of using alpha-fusion seems interesting, it is currently not properly evaluated. It did not show how changing alpha values affects the magnitude of generated motions, and it did not evaluate the camera control accuracy as other related papers did. Reporting CLIP-score and FID is not enough to reflect the accuracy of the proposed capability of the method.\n\n3rd, a minor point, I am not sure promoting the capability of taking multiple image input can be regarded as a major technical contribution, given it is already supported in prior works including CAT3D, and it is conceptually trivial to be implemented in most video generation models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In the top case of Figure 10, the results from the proposed method appear off-center, possibly due to an inappropriate object-to-image occupancy ratio in the input images. Adjusting this ratio might improve the alignment of the results.\n2. If the learnable fusion weight, alpha, is set to 1, would it enable video generation based on the first frame? With alpha at 1, only the outputs from the temporal modules would contribute."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method is the first to generate any 3D and 4D scenes with camera control and an arbitrary number of condition frames.\n2. The proposed multiview-temporal modules with alpha-fusing enable separate multi-view and temporal information and effectively conduct both 3D and 4D generation.\n3. The paper constructs a new dataset for 4D scene generation. The dataset and the data curation pipeline potentially benefit the following video generation with camera control and 4D generation.\n4. The paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "1. This paper aims to jointly generate 3D and 4D objects and scenes with camera control. \n2. This paper proposed multiview-temporal modules that disentangle camera and object movements and thus can learn from both 3D and 4D data. The proposed approach employs masked latent conditions to support a variety of conditioning views.\n3. They construct a dataset CamVid-30K that consists of high-quality 4D data with camera poses for model training\n4. Extensive experiments show that the proposed method can achieve comparable or better results than baselines in 3D/4D object/scene generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Experiments**:\n\n1. In the experiment of 4D object generation, some relevant references and comparisons are missing, such as Consistent4D [1] and STAG4D [2]. Since these works are open-source, it would strengthen the paper to include these baselines or clarify why they are not suitable for comparison. They take single-view video as input, which should be applicable for this work.\n2. In Table 3, it would also be beneficial to report temporal consistency metrics (e.g., FVD), as temporal consistency is critical for 4D object generation.\n\n**Minor Points:**\n\n1. Clarifying the selection process for the 44K dynamic data in Objaverse-XL would be helpful. According to Diffusion4D [Liang et al. (2024)], ~323K dynamic objects were collected. For instance, what filters were applied in this work? Will the selected dynamic objects be publicly available? Adding these details in the Appendix would enhance transparency.\n2. Some technical details are missing: What is the maximum number of frames the model supports? Additionally, in Table 3, Zero-1-to-3 and RealFusion were originally designed for 3D reconstruction—how were they adapted for 4D generation in this work?\n\n[1] Jiang, Yanqin, et al. \"Consistent4d: Consistent 360 {\\deg} dynamic object generation from monocular video.\" ICLR 2024.\n\n[2] Zeng, Yifei, et al. \"Stag4d: Spatial-temporal anchored generative 4d gaussians.\" ECCV 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The statement in section 3 that \"when the camera is moving while the object remains static, the motion strength is significantly smaller compared to videos with object motion\" seems not so easy to understand. I assume the authors mean that this is the common case for natural captured video where cameras are still or moving in a slow motion?\n\n2. Does the $\\alpha$ needs to be explicitly set during training / inference? For example let network itself output the weight when dealing with 4D content and explicitly set it as 0 when dealing with 3D content. If so then it would be interesting to see given same conditions (more complicated than Figure 7) what would model outputs for different $\\alpha$. Like given multi-frames of a static scene but telling model do 4D generation and given multi-step frames with dynamic objects and force model to do static 3D generation.\n\n3. It's kind of confusing that the 5.4 ablation study is for model training or just inference after training? If it's after training, than the results in table 5 is somehow not so useful as it's trained with $\\alpha$ but in inference time not allowed to use it, which would certainly lead to performance drop."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The data curation pipeline for transforming existing videos into trainable 4D dataset is quite useful, and the proposed curation pipeline and the CamVid-30K should be beneficial to the field.\n\n2. Combining all source and sub-tasks' data (object/scene, 3D/4D) is fundamentally useful and a model trained on mixture of data should have better generalization ability. The proposed $\\alpha$ parameter seems can be understood as an explicit control to switch between 3D and 4D generation given same conditions.\n\n3. The results are promising and generally good. And extensive evaluations on multiple benchmarks show the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In the paper the authors propose two main contributions: a curated dataset for 4D generation model learning, named CamVid-30K; and a model trained to generate in 3D or 4D given arbitrary condition images, named GenXD.\n\nAuthors proposed a detailed pipeline on how to combine existing techniques to curate a 4D scene datasets for model training. Including instance segmentation modules for static / dynamic decomposition, static structure-from-motion to recover camera parameters and sparse depth map, relative depth align with sparse depth map for spotting the dynamic object and introducing a motion strength factor as additional condition.\n\nAuthors proposed a new model GenXD to train on this dataset combining with other object-centric 3D/4D datasets and 3D scene datasets. They further design a $\\alpha$-fusing strategy to better disentangle the spatial and temporal information in the data source. Experiments across various benchmark show impressive performance of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are some minor errors or confusing points in the paper. I'll list some here and some in the following questions section.\n\n1. In L253, \"The keypoint $(u_i, v_i)^T$ in the $i$-th frame is first back-projected into world space to obtain the 3D keypoint $kp_i$\". I agree here the $kp_i$ should be in world space, but according to Eq.(3) seems it's in the camera space? From my perspective the Eq.(3) is transforming image-space coordinates to camera-space coordinates, missing the step of transforming to world coordinates.\n\n2. In all the figures with camera trajectory visualization, the legends and axis notations are very small and impossible to tell the actual information, also the trajectory only lies in a small region in the plot. I suggest authors remove the axis notations if they are too small, and zoom in to show the trajectory in a more detailed way.\n\n3. In section 5.2 4D object Generation, it seems unfair to say \"results in our method being $100\\times$ faster\", as the efficiency comes from using a different underlying 3D representation comparing to other methods, which are orthogonal to the proposed method. I think here using CLIP similarities for comparison is reasonable. Showing speed is fine but shouldn't be used as comparison.\n\n4. I think in general the paper is with good results. But according to the task of the proposed method, I expect to see more scene level 3D or 4D generation results, including larger camera trajectories and failure examples."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024genxd,\ntitle={Gen{XD}: Generating Any 3D and 4D Scenes},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1ThYY28HXg},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent developments in 2D visual generation have been remarkably successful. However, 3D and 4D generation remain challenging in real-world applications due to the lack of large-scale 4D data and effective model design. In this paper, we propose to jointly investigate general 3D and 4D generation by leveraging camera and object movements commonly observed in daily life. Due to the lack of real-world 4D data in the community, we first propose a data curation pipeline to obtain camera poses and object motion strength from videos. Based on this pipeline, we introduce a large-scale real-world 4D scene dataset: CamVid-30K. By leveraging all the 3D and 4D data, we develop our framework, GenXD, which allows us to produce any 3D or 4D scene. We propose multiview-temporal modules, which disentangle camera and object movements, to seamlessly learn from both 3D and 4D data. Additionally, GenXD employs masked latent conditions to support a variety of conditioning views. GenXD can generate videos that follow the camera trajectory as well as consistent 3D views that can be lifted into 3D representations. We perform extensive evaluations across various real-world and synthetic datasets, demonstrating GenXD's effectiveness and versatility compared to previous methods in 3D and 4D generation. The dataset and code will be made publicly available."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"3D Generation; 4D Generation; Diffusion Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bdce10e24a52c41ddaf103f9f899cbad21b41f20.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/2931e9d0738280f9869f3cc20f8888959a6e6be0.zip"
},
"title": {
"value": "GenXD: Generating Any 3D and 4D Scenes"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1UMxtR9Eb9 | Unifying Disentangled Representation Learning with Compositional Bias | main | Active | Unsupervised Representation Learning;Disentangled Representation Learning;Compositionality | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;3;6;6;8 | 4;4;2;3;3 | 1;2;3;3;4 | 2;2;3;2;3 | 2;3;3;3;4 | 5.2 | 3.2 | 2.6 | 2.4 | 3 | -0.716713 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- can authors elaborate on why the maximum likelihood is needed despite already enforcing low reconstruction error ?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- presentation: the paper is polished, clear, and well-written\n- relevance of the topics: learning models that disentangle sources of information whether attributes or objects without any prior knowledge about the type of sources but rather that rely on general prior information about the data structure like compositionality to enforce disentanglement is of great nterest to the community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the learning of disentangled representations in particular the adaptation of existing frameworks to the learning of representations that can disentangle both attributes (e.g., color, texture, ...) and objects in a scene which authors claim prior work only tacked one of the other. The authors propose to leverage compositionality to learn disentangled representations. The setup includes pre-trained VAEs which provide representations that are then combined. The new representations serve as input to a diffusion-based decoder which is trained to reconstruct the composition of the original images. A pre-trained diffusion model is also used to enforce consistency between the input composite representations and the representation of the generated image. The method is tested for feature and object disentanglement on multiple synthetic datasets where is shows either superior or comparable performance to attribute or object disentanglement methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- complexity of the proposed approach leads to limited applicability and impact: the proposed approach requires the use of pretrained diffusion models to operate (i.e., to maximize the likelihood of composite images) and requires access to composite images to train the model. \n- limited performance increase: while results show more consistent improvements for the **multi-seed** attribute disentanglement experiments, the gains are less consistent across metrics for the **single-seed** object disentanglement experiment.\n\n\nMinor:\n- theta should be a subscript in line 187\n- typo line 212, 281, 310\n- error in figure 1: z3 should be blue instead of orange\n- line 227: figure 1 above\n\n- Not sure I am getting lines 210-213"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Are there any further insights on the failure cases? Is it harder to compose attributes or objects?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. Addresses both attribute and object disentanglement by developing appropriate mixing strategy for latents. This is helpful to steer the field towards disentangling different types of factors of variation - eg properties of object and object themselves.\n2. The paper gives an in depth analysis of the intricacies involved in optimizing for compositionality.\n3. The paper is well written for the most part. There are appropriate visualizations in method and experiments that complement the text."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper attempts to tackle attribute and object disentanglement through the same mechanism as opposed to separate treatment by prior methods. Building on diffusion based decoding approaches that maximize compositionality, this paper lays emphasis on composing/mixing strategy of latents for object/attributes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The impact of the paper could be greater if results were shown on real-world data."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How would this generalize to more complex datasets where the exact factors of disentanglement might not be known. Does this scale to lots of disentangled factors (dozens or hundreds) or would that make the mixing strategies too complicated?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper addresses the learning of disentangled representations for both objects and attributes and makes use of a standard generative model for learning them. By introducing specific mixing strategies to combine latent representations of different images under given constraints the model is able to learn disentangled representations under a fairly simple framework.\n\nThe evaluation shows that the model learns better disentangled representations than the given baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Note: I am not an expert on disentangled representation learning and know little/none of the related work.\n\nThe paper proposes an approach to learning a generative model for disentangled representations by maximizing the compositionality of representations. By mixing the representations of two images (given some constraints to make sure the resulting latent representations are valid) and maximizing the likelihood of the resulting composite images, the model learns representations that can be disentangled on the object and attribute level. Experiments on synthetic datasets show that the model performs well in disentangling factors across several datasets, both on the object and attribute level."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It seems like the approach is only usable if the practitioner already knows the underlying factors they want to disentangle, as the latent mixing strategies take this knowledge into account. \nIt's also not clear to me if this would translate to real-world datasets with more complicated distributions.\nThe experiments show results for either object disentanglement or attribute disentanglement but no experiments for joint object and attribute disentanglement.\nAll experiments are done on rather simple synthetic datasets."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- What challenges do the authors anticipate in applying this model to real-world, complex datasets, and how might they address these?\n- Could dynamic/learned mixing strategies replace fixed ones to improve flexibility in complex scenes? \n- Have the authors thought about under which conditions their method can provide identifiability guarantees?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "**Strengths:**\n\n- The paper is relatively clear and easy to understand;\n\n- The general idea of enforcing compositional consistency across mixed latent representations is fairly neat, and could possibly be extended to more challenging scenarios;\n\n- The results seem to match or exceed some of the previous works on disentanglement benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a framework for disentangled representation learning that targets both attribute—and object-based disentanglement within a single model. The authors formulate disentangled representation learning as maximizing the compositionality of randomly mixed latent representations of distinct images. The method uses a pre-trained diffusion model as an image generator and introduces an additional compositional consistency loss to encourage the composite images to remain faithful to the composite latent. The authors claim that their method can obtain superior performance in standard disentanglement benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Weaknesses:**\n\n- The approach relies on a pre-trained diffusion model to ensure composite image realism, but this doesn’t guarantee alignment with the intended attribute or object combinations. As such, it is my understanding that this can compromise the interpretability and control of compositions in the general case, especially in more complex scenarios with subtle and/or hierarchical attribute/object relationships. \n- There are no guarantees that the latent representations are identifiable under the current model, and by implication, neither are the compositions;\n- The fixed mixing strategies, although appropriate for the simple cases studied, are quite rigid and likely would not adapt well to more complex scenarios in real data;\n- The scope of the evaluation is limited to toy settings, which is somewhat outdated given the recent progress in generative modelling.\n- The writing is a little careless at times; there are numerous typos and/or grammatical issues, some of which are mentioned below.\n\nIn my opinion, in its current state, this work largely sidesteps the key challenges in the area today, particularly the theoretical analysis of identifiability for latent representations and the development of scalable techniques that allow object-centric methods to be applied effectively in real-world settings. Therefore, I would encourage the authors to bolster their current contribution by tackling one of the two aforementioned challenges in the future.\n\n**Typo corrections:**\n\nline 34 \"theoretically prove\" \\\nline 46 \"a unique object\" \\\nline 70 \"and verify\" \\\nsection 2 heading change to \"Background\" \\\nline 77 \"incompatible with\" \\\nline 97 \"that render\" \\\nline 107 \"tailored specifically\" \\\nline 122 \"maximizing the likelihood\" \\\nline 122 \"disentangle attributes and objects\" \\\nline 147 \"to the type of\" \\\nline 163 \"While (Jung et al., 2024) rely\" \\\nline 165 sentence needs rewriting for clarity \\\nline 167 \"derive a specific\" \\\nline 177 \"of each factor\" \\\nline 177 \"derive a corresponding\" \\\nline 188 \"independent sampling of\" \\\nline 190 \"is equivalent\" \\\nline 197 \"always contains\" \\\nparagraph starting at line 206 could do with rewriting for clarity \\\nline 216 \"belong to the same\" \\\nline 259 \"While Jung et al. (2024) also maximize...\" \\\nline 295 \"to each factor of\" \\\nline 307 \"ensure reliable image generation\" \\\nline 310 \"from scratch\" \\\npage 6 footnote \"significantly\" \\\n\netc"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please address the main weaknesses listed above. These are the most critical ones; I find the paper interesting but these weaknesses do need to be tackled, specifically:\nA. Could you explain or correct the mismatch between your results and those previously reported?\nB. Could you provide results on unsupervised segmentation tasks using the three typical metrics: Adjusted Rand index for foreground objects (FG-ARI), mean intersection over union (mIoU), and mean best overlap (mBO) (see Jung et al. 2024 as an example).\nC. Could you provide results on at least a couple of the more complex datasets listed above (and for the tasks used in the state-of-the-art work mentioned).\n\nAdditionally, these are more questions that are interesting to discuss.\n\nD. The authors state at various points in the manuscript that previous methods use inductive biases specific to either attributes or objects, making them unsuitable for both simultaneously. For instance, in the statements, “Existing disentangled representation learning methods rely on inductive biases tailored for specific factors of variation (e.g., attributes or objects). However, these biases are incompatible with other classes of factors” and “Unlike previous methods, which introduce inductive biases tailored specifically to either attribute or object.”\nHowever, the proposed method also requires a choice of mixing strategy tailored to either attributes or objects, which seems like an inductive bias itself, specific to one type of disentanglement. Could this choice, made in advance, also be considered a form of inductive bias that is specific to objects or attributes? Likewise, could state-of-the-art methods (e.g., Jung et al., 2024) also be modified to handle both attributes and objects? It's unclear to me to what extent prior methods are fundamentally \"unable\" to address both types of disentanglement, as opposed to their experiments being focused on one of the two tasks but potentially adaptable to the other, in a way similar to how this proposed method can be adapted via choosing an appropriate mixing strategy.\n\nE. In Section 2 the authors make the following comment: “in object-centric scenes, the same objects can appear in different spatial locations, complicating the definition of independence metrics for object representations”. It would be great to show qualitatively, in examples like Figure 2, what happens when the image contains 2 identical objects and one of them is added or removed from the image. Would the proposed framework work, or would there be confusion among those objects? I say this in part out of curiosity and in part because in Figure 3 (right, 3rd column for inserting) it seems the model is confusing two similar objects and is adding the one in the back rather than the one in the front. Could you provide those qualitative examples (if not possible in the rebuttal, then in a potential future version of the paper)?\n\nF. I could not find any detail (even in the appendix) about w(t). Could you please provide details about this function for both attribute and object tasks?\n\nG. The authors mention that Jung et al. use a similar prior term, but since they use the same diffusion model (as opposed to a pre-trained and frozen one) they are measuring $p(x^c|z^c)$ rather than $p(x^c)$. I have two comments and questions about this:\n1. Even when using a frozen diffusion model, wouldn’t the final decoded image be conditioned on $z^c$? \n2. Regardless, I think this would be a good choice to compare. How does the current framework compare quantitatively to a similar framework that uses the term from Jung et al.? Using Jung et al.'s solution would simplify the framework and reduce the need for training an extra model. Could you provide a comparison between these two options?\n\nH. For the DCI metric the authors say “we perform PCA as post-processing on the representation before evaluation, following (Du et al., 2021; Yang et al., 2023)”. While I appreciate that this has been done before, I wonder if it is a fair evaluation of disentanglement when it is applied only to some methods. Shouldn’t each vector $z_i$ be considered one of the “dimensions”? With PCA one is not measuring the disentanglement of each dimension but rather the disentanglement of a rotated version of the linear combination of the dimensions. This does not seem the same. Please help me understand why this makes sense and is a fair evaluation, or if you agree with me that this is not a fair evaluation, please compute and report the DCI score without PCA."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper is easy to read. The proposed framework leverages and combines many techniques (such as diffusion models, SSL, optimal transport) in interesting ways. The final framework is simple and, from the reported results, effective."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a framework to learn disentangled representations of either attributes (e.g., an object's color or orientation) or distinct objects within a scene. The framework begins by encoding a pair of images using a VAE encoder. The embeddings generated are $k$ vectors that eventually will be the disentangled representations. At this stage a mixer samples some vectors from image 1 and some vectors from image 2, generating the representation of a ``new’’ composed image. These new representations are then noised and denoised by a diffusion model before going through the decoding stage of the VAE. The mixing component can be adjusted according to the desired inductive bias. For attribute disentanglement, the model enforces mutual exclusivity by ensuring each latent vector is sampled from only one of the two images. In contrast, for object disentanglement, this exclusivity constraint is removed, allowing, for instance, the first latent vector to be sampled from both images.\n\nThe objective function is composed of three terms: (1) a latent denoising objective using a diffusion decoder (as in Jung et al., 2024); (2) a term to maximize the likelihood of the composed image, implemented as a diffusion loss, where the diffusion model is pre-trained for each task and then frozen; and (3) a consistency objective, which ensures that the latent representation $z$ of a given image and the latent representation re-encoded after decoding the reconstructed image from $z$ remain close. For this last term, the authors found that using an NCE-like objective, where each representation should be close to its counterpart and distant from other batch representations, outperformed simply minimizing cosine similarity.\n\nThe proposed method is evaluated against various baselines, datasets, and metrics for both attribute and object disentanglement, showing improved performance across the board."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main weaknesses of this paper are in the empirical evaluations. Specifically, some of the results reported do not match those previously published, a very common task used to assess object disentanglement (unsupervised segmentation) is missing, and none of the experiments are done on realistic or complex datasets (although recent state-of-the-art works do employ those kinds of datasets). These are the main points to be discussed during the rebuttal. Fixing these could increase the soundness and the contribution scores, hence, also the final recommendation score. See below for more details on all these weaknesses.\n- Results reported in this work about other baselines do not seem to match the original results reported by the respective original papers on the same tasks and datasets. For example, for LSD property prediction the original paper shows much better accuracy (80.23% on Shape, compared to the one reported in this work for LSD, which is only 68.25%; for comparison, the proposed method's accuracy is 70.90%). For the properties “material” and “shape” the difference is even higher. \n- State-of-the-art works on object disentanglement consistently use unsupervised segmentation to assess the usefulness of the generated representations; however, these tests are missing from the current work. This is an important task because it shows a concrete application of these types of representations (and eases comparison, given that all recent works use both unsupervised segmentation and property prediction).\n- Both sets of experiments (attributes and objects) lack realistic or more complex datasets which state-of-the-art works have been using (in addition to some of the datasets used in this work). While it is not needed to have results on all of the following datasets, showing that the proposed method scales to the complexity of some of those datasets comparably to the state of the art would make the contribution stronger. For example:\n - For attribute disentanglement, FactorVAE uses CelebA.\n - For object-centric disentanglement, Jung et al. 2024 use Super-CLEVR (multi-colored parts and textures) and MultiShapeNet (for realistic images), while other work such as Object-Centric Slot Diffusion uses the MOVi-C dataset (which contains complex objects and natural backgrounds), the MOVi-E dataset (which contains up to 23 objects per scene), and FFHQ (high-quality images of faces).\n\nOther minor evaluation weaknesses:\n- Attribute disentanglement results are reported with standard deviation (great!) but it is unclear over how many runs. Results for object disentanglement are provided without any standard deviation (but they should be). \n\nMinor Writing Comments. These writing suggestions are not critical but they would improve the clarity and readability of the paper. No need to discuss them in rebuttal, but they do need to be fixed and could increase the presentation score.\n- I find the first part of the paper (until Section 3) lacking important details that could easily be provided. For example:\n - The abstract is very dry: there is no mention of which are the “strong baselines”, nor which tasks this work was tested on, nor quantitative evaluation to show that the proposed method “matches or exceeds” baselines. Consider adding more information.\n - From the abstract (and even the introduction and the beginning of Section 3.1) it is not clear what “mix”, “compose”, “composition operator” mean. It could be concatenation, averaging, summing… it will only become clear much later, but it would be great to provide more details if not in the abstract (ideal) at least in the introduction.\n - Still by the end of Section 2 there is no formal definition of “attribute” and “object”. The first example of attributes is at page 4. Having these definitions would help the reader understand the work much better from the beginning of the paper. From the examples at page 4 it seems that nose is an attribute and face an object, but it could easily be argued that actually nose is an object in itself, or that face is an attribute of a bigger object (human body). Again, this highlights the need for a formal definition of attributes and objects.\n- In Figure 1 there is a concrete image example but it is not clear if it belongs to Attribute mixing or Object mixing. The “thing” being mixed is a cylinder and a ball, so why is it linked both to attributes and objects? It would be clearer to provide an example for both. Note that everything becomes clearer once the whole paper has been read, but the first time the reader reaches Figure 1 this could be a source of confusion.\n- At page 6 the authors say “This occurs because the encoder can collapse the posterior pθ(z|x) into a single mode”. I don't know if this is an issue with posterior collapse. If the encoder collapses the posterior, then the first loss ($L_{diff}$) should become high, hence preventing the collapse. The problem seems to be related to the fact that the learnt encoding is sufficiently different (hence not collapsed) to keep $L_{diff}$ low, while what the authors want is not just $\\hat{z} = z$ but also as different as possible with respect to other $z$s.\n- Typo (?): “we can without modifying the objective function, which will be introduced in next paragraph.” It is not clear what is that “we can”.\n- Typo: line 241 “an noised”.\n- The following sentence is incomplete: “we adjust our image encoder to take VAE features as input”. Please clarify which kind of adjustments?\n- “When back-propagate the gradient through xc, we truncate the gradient at the last iteration of decoding”. Why? It would be great to explain and motivate this choice.\n- Typo in Line 310: “model on each training dataset from the scratch”. Should be “from scratch”.\n- It would be great to explain how you understand which latent controls which factor. I believe there is a brief explanation in the appendix but it would be great if it could be explained in the main paper.\n- In Table 3 and some parts of the appendix the loss term $L_{con}$ is called $L_{cycle}$. Please update it so that it is consistent throughout the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024unifying,\ntitle={Unifying Disentangled Representation Learning with Compositional Bias},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1UMxtR9Eb9},\nnote={under review}\n}"
},
"abstract": {
"value": "Existing disentangled representation learning methods rely on inductive biases tailored for the specific factors of variation (e.g., attributes or objects).\nHowever, these biases are incompatible with other classes of factors, limiting their applicability for disentangling general factors of variation.\nIn this paper, we propose a unified framework for disentangled representation learning, accommodating both attribute and object disentanglement.\nTo this end, we reformulate disentangled representation learning as maximizing the compositionality of the latents.\nSpecifically, we randomly \\textit{mix} two latent representations from distinct images and maximize the likelihood of the resulting composite image.\nUnder this general framework, we demonstrate that adjusting the strategy for mixing between two latent representations allows us to capture either attributes or objects within a single framework.\nTo derive appropriate mixing strategies, we analyze the compositional structures of both attributes and objects, then incorporate these structures into their respective mixing strategies.\nOur evaluations show that our method achieves performance that matches or exceeds strong baselines in both attribute and object disentanglement."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Unsupervised Representation Learning",
"Disentangled Representation Learning",
"Compositionality"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0e2b98f438c7e0cc2b5de2ea9804f3edbd62ef36.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e60a0665754a17055fb44994d1a73776711a582d.pdf"
},
"title": {
"value": "Unifying Disentangled Representation Learning with Compositional Bias"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Uem0nAWK0 | Inference time LLM alignment in single and multidomain preference spectrum | main | Active | LLM;Alignment;inference | foundation or frontier models, including LLMs | 3;3;5;6 | 3;3;3;3 | 3;2;3;2 | 3;2;3;3 | 2;2;3;2 | 4.25 | 3 | 2.5 | 2.75 | 2.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. A novel inference-time model editing technique using Alignment Vectors that allows dynamic adjustment of LLM outputs along preference dimensions without retraining or complex prompt engineering\n2. A substantial synthetic dataset (38k examples) spanning three domains and three proficiency levels, with human-evaluated quality checks showing strong inter-annotator agreement\n3. Demonstration that AVs can be effectively transferred across different fine-tuning stages of the same model while maintaining performance\n4. A resource-efficient approach to achieving multidomain diverse behaviors that is 12x faster than traditional retraining methods"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel approach for adjusting Large Language Model (LLM) behaviors during inference time using Alignment Vectors (AV). The key innovation is treating alignment as a model editing problem where preference dimensions are encoded as vectors that can be dynamically combined with the base model through simple linear operations. The authors focus on three proficiency levels (expert, generic, and avoidance) across three specialized domains (medical, legal, and financial), demonstrating how their method enables flexible control over model outputs without requiring retraining. The work includes creation of a synthetic dataset with 38k query-response pairs and shows that their approach reduces resource usage by 12x compared to traditional retraining methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The evaluation based on GPT-4-judged metrics might need further validation with a human study.\n2. Validation is limited to only one model (Mistral-7b); broader testing across different open-source LLMs would strengthen the findings.\n3. Besides prompting, should any test-time adaptation methods be compared in the main experiments?\n4. Any further illustrations of the \"over-generalization effect\"?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper presents a simple and effective idea to align the preferences of LLMs at inference time. The transferability of this approach across different domains is good.\n\n2. The authors have also built a large dataset that contains avoidance responses, generic responses, and expert opinions.\n\n3. The AVs offer flexibility to adjust the proficiency level of LLM generations by adjusting their weights."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a preference alignment approach that only aligns during inference, using encoded representations called Alignment Vectors (AVs). The AVs are learned and tuned for the same model in different tuning stages, which shows good transferability across different domains. The authors also build a diverse domain-specific dataset with responses categorized into three levels. Extensive experiments demonstrate that AVs can help LLMs align to different domains and show promising performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The work aims to align LLMs during inference, and I agree that \"it requires full re-training when a change is needed.\" However, AVs are the subtraction of an aligned model and an unaligned model. Alignment during inference is to the unaligned one, making it return to the aligned model. If I understand correctly, this process still requires training and not fully inference-time alignment.\n\n2. Although this inference-time alignment method reduces the training cost, it requires two times inference, i.e., unaligned models and AVs.\n\n3. The dataset is built upon prompting Claude to generate different responses at different levels. Although the languages are appropriate to these levels (e.g., experts) and express relevant concepts, such as medical terms, are their content appropriate as well? For example, is a medical case resolved by LLMs, or do these LLMs only create or even hallucinate something to meet the prompts' requirements? The practicality of this alignment method is still awaiting to examine in this regard."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* Can you please review the concerns regarding novelty and clarify the contribution of the work in that context? \n* As a suggestion, the paper structure could be improved for readability. I would recommend moving the “Methodology” section to be before the “Synthesizing Specialized Preference Data”. The “Methodology” section is the core contribution and it makes sense to center it. The “Synthesizing” section could also be combined more directly with the Experiments section, so that all relevant details concerning the experiments are presented together.\n* As a suggestion, I think it would be better to not refer to the “preference accuracy” and “GPT-4 judged generation accuracy” as accuracy metrics. This is because there is no comparison to a ground truth and thus it is not accurate to refer to these metrics as accuracy metrics. “Likelihood preference rate” and “GPT-4 judged rate” may be more appropriate names. In my opinion, calling the rates that are reported “accuracy” also lends itself to misleading claims regarding the performance of the approach (e.g., reading the reported 100% accuracy numbers as perfect performance, when it is more appropriate to think of them at the rate that a particular class of text was preferred)."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The simplicity of the approach is a major strength, in that inference-time alignment significantly reduces computational costs in cases where it is of interest to align to many potential reward mixtures over single or multiple preference dimensions.\n* The work also includes a dataset contribution of the generated personas, which has potential for reuse in future work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes an approach to inference-time control over the alignment of a large language model to multiple, potentially competing preference dimensions. The approach defines an “alignment vector” which is the difference between the weights of a model aligned to a particular dimension (e.g., using DPO or RLHF). The approach allows for smooth interpolation between the base model and the aligned model, on for any given dimension, as well as for choosing an operating point in a trade-off space between multiple dimensions. In this work, they investigate dimensions along the axes of specialized domains (Medical, Financial, and Legal) and subject matter proficiency. This is implemented by constructing 12,000-13,000 personas related to each of the specialized domains, generating LLM outputs with a prompt that emphasizes each proficiency level (avoidance, generic response, and expert response). They observe that the likelihood of the expert responses tend to increase as the mixture weights are tuned away from the base model towards that of the aligned model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Unfortunately, this work may not be sufficiently novel nor sufficiently well-grounded in the related literature. I believe that the approach proposed in the present work is essentially a special case of the “Rewarded Soups” and “Personalized Soups” approaches proposed by Rame et al [1] and Jang et al [2]. In those prior works, they similarly propose inference-time weighted mixtures over models aligned to different reward functions. They also conduct much more extensive experiments and provide more rigorous theoretical motivation for the approach. \n* The theoretical motivation is relatively superficial compared to related prior work (i.e., works that connect weight interpolation to linear mode connectivity).\n* Few details are provided regarding the methodology for creating the persona dataset. For example, no details are provided about the “thorough clean-up, involving truncation, and reformatting” (Line 159).\n\n\n1. Rame, Alexandre, et al. \"Rewarded soups: towards pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards.\" Advances in Neural Information Processing Systems 36 (2023).\n2. Jang, Joel, et al. \"Personalized soups: Personalized large language model alignment via post-hoc parameter merging.\" arXiv preprint arXiv:2310.11564 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "It appears this research has some level of human involvement participating in annotating data."
},
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How do you check what a valid persona-query pair is? How were 13k, 12.3k, 12.8k selected? Is it based on non-repetitive persona-query samples alone or was there for Quality control involved? (section 3.1)\n- Were the annotators human or was it machine annotations? (section 3.3)\n- How can you be certain the LLM generation can serve as a ground truth? \n- Is it better to have an LLM that is aligned to one domain instead of all three domains (equation 3)? I imagine an expert in the field would feel indifferent if the specialized LLM for healthcare was also aligned with law, etc.?\n- Are there other metrics to measure outside of preference accuracy? I think the benchmark otherwise is not robust enough given preference accuracy is a hand crafted metric from the authors.\n- How are metrics like safety and helpfulness quanitfied. It was not written clearly?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well motivated. It is true that there has been limited study on aligning LLMs at inference time.\n- The paper presents two clear research questions that they will address.\n- results show nearly maximal performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an LLM alignment method at inference time which has not been well studied. On top of preference tuning at inference time, they propose a model editing technique called alignment vector arithmetic (subtracting base model with aligned model at inference time) strengthening the methods sections of this paper. It appears there method on inference time alignment performs quite strongly in the three domains under three different instruction types (avoid, generic, and expert). From these three expert instruction type appears to do the best overall. Performance metrics were measured but were observed with some level of hesitancy and there were not many inference time alignment approaches making it difficult to assess. Authors can potentially show the benefits of inference time alignment versus that during training to further motivate the problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The selection of LLM is not well motivated? Why did you use Claude-3-Sonnet over GPT4 or even open source models like Llama-2/3?\n- minor attention to detail but keep writing conistent. I see instances were \\citep was used and where \\cite was used.\n- Not sure I gree with the multidomain preference approach. Seems that instead of building a generalist AI, experts in the field would prefer a specialized version of the LLM. However I will listen to the authors justification in the rebuttal period.\n- please formalize a mathematical definition of the preference accuracy.\n- the task is not super clear. Figure 2 looks amazing but I'm not sure what was done to achieve this.\n- Writing clarity can be improved. They talk about using Claude then in the section 5.3 they say they use mistral 7b. LLM selection is also not properly motivated.\n- Paper can motivate the need for inference time alignment over conventional approaches."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper presents Alignment Vectors (AV), a new method for dynamically aligning LLMs during inference, enabling customizable outputs while reducing cost."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024inference,\ntitle={Inference time {LLM} alignment in single and multidomain preference spectrum},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Uem0nAWK0},\nnote={under review}\n}"
},
"abstract": {
"value": "Aligning Large Language Models (LLM) to address subjectivity and nuanced preference levels requires adequate flexibility and control, which can be a resource-intensive and time-consuming procedure. Existing training-time alignment methods require full re-training when a change is needed and inference-time ones typically require access to the reward model at each inference step. To address these limitations, we introduce an inference-time model alignment method that learns encoded representations of preference dimensions, called Alignment Vectors (AV). These representations are computed by subtracting the base model from the aligned model as in model editing enabling dynamically adjusting the model behavior during inference through simple linear operations. Even though the preference dimensions can span various granularity levels, here we focus on three gradual response levels across three specialized domains: medical, legal, and financial, exemplifying its practical potential. This new alignment paradigm introduces adjustable preference knobs during inference, allowing users to tailor their LLM outputs while reducing the inference cost by half compared to the prompt engineering approach. Additionally, we find that AVs are transferable across different fine-tuning stages of the same model, demonstrating their flexibility. AVs also facilitate multidomain, diverse preference alignment, making the process 12x faster than the retraining approach."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"Alignment",
"inference"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/945b44aadde6e88b4cafbcf18b6c6730072dc554.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Inference time LLM alignment in single and multidomain preference spectrum"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1V28zvLJMg | Debiased Deep Evidential Regression for Video Temporal Grounding | main | Active | Video Temporal Grounding;Uncertainty Quantification;Multi-Modal Fusion;Deep evidential regression;Evidential deep learning | applications to computer vision, audio, language, and other modalities | 5;5;5;6 | 4;4;3;3 | 2;3;3;2 | 3;3;3;3 | 3;3;3;2 | 5.25 | 3.5 | 2.5 | 3 | 2.75 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Since the datasets have annotations like a matched moment and text, how to evaluate the model's ability to learn uncertainty when processing an unreasonable text query? Like the example in Figure 1\n2. In Geom-regularization, how to define accurate predictions? how to define less accurate predictions?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. It introduces the first extension of Deep Evidential Regression (DER) to Video Temporal Grounding (VTG) tasks, aiming to address uncertainties in open-world scenarios. \n2. It proposes a Debiased DER Model (DDM-VTG) that tackles modality imbalance and counterintuitive uncertainty through a Reflective Flipped Fusion block and a Geom-regularizer, enhancing the model's sensitivity to text queries and calibrating uncertainty estimation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "It presents DDM-VTG, a new model that integrates Deep Evidential Regression into Video Temporal Grounding to handle uncertainties in open-world scenarios. It addresses modality imbalance and counterintuitive uncertainty with a Reflective Flipped Fusion block and a Geom-regularizer, enhancing model robustness and effectiveness across benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The datasets used are not open-world.\n2. The performance on the video summarization task is not advantageous enough.\n3. Figure 2 shows 4 cases of the uncertainty. It is not clear how the method addresses (a)(b)(d) and how to evaluate if the methods can handle these scenarios."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe proposed baseline model is innovative for its integration of Deep Evidential Regression (DER) with VTG tasks to address both aleatoric and epistemic uncertainties. \n2.\tThe paper not only identifies the existence of modal imbalance and structural flaws in regularization within the baseline model but also offers solutions to these issues.\n3.\tThe authors have conducted extensive experiments across various benchmarks, which effectively demonstrate the efficacy of their approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel approach to Video Temporal Grounding (VTG) by integrating Deep Evidential Regression (DER) to address uncertainties in open-world scenarios, such as out-of-distribution (OOD) data and open-vocabulary queries. The authors propose a Debiased DER Model for Video Temporal Grounding (DDM-VTG) that tackles modality imbalance and counterintuitive uncertainty through a Reflective Flipped Fusion (RFF) block, a query reconstruction task, and a Geom-regularizer. The model demonstrates effectiveness and robustness across multiple benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tWhile the paper presents a novel approach to addressing uncertainties in VTG, it could benefit from a deeper analysis of the limitations of the proposed model, especially in handling highly ambiguous queries or extremely OOD data.\n2.\tThe paper could provide more insights into how the DDM-VTG model generalizes to other video-related tasks beyond the tested benchmarks.\n3.\tWhen designing the baseline, whether DER provides positive assistance for the correct prediction of the model, the author needs to provide corresponding proof experiments.\n4.\tWhen introducing the baseline, the author believes that it has a modal imbalance problem, and DDM-VTG effectively alleviates this imbalance, which requires corresponding experimental evidence.\n5.\tThe method proposed by the author showed out of distribution predictions on the qv height dataset, which to some extent indicates the generalization of DDM-VTG, but it is not clear and specific enough. The author needs to provide results on charades-CD."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In Figure 2, several challenges within VTG tasks are highlighted, but it appears that targeted comparative experiments were not conducted in the study. When compared with other works, can DDM-VTG perform better in addressing these challenges? Some discussions are expected.\n- In the Query Reconstruction task, how can DDM-VTG ensure that the tokens predicted by the QR head are accurate when dealing with complex videos? What happens if the predictions are incorrect? Does it affect the accuracy of temporal localization of the whole video?\n- In the case study, the average length of the videos is 150 seconds. How would the model perform with longer videos, and would the cost increase significantly?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The basic idea is easy to follow and the main motivation is clear.\n- The innovative integration of DER into VTG tasks is a novel approach that effectively addresses key issues like OOD videos.\n- The proposed method achieves strong experiment results, both compared to its baseline and other SOTA methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Debiased DER Model for VTG, tackling open-vocabulary queries and out-of-distribution videos in video temporal grounding tasks. It extends the vanilla DER to VTG and establishes a baseline. To address two critical biases in the baseline—modality imbalance and counterintuitive uncertainty—the method incorporates a RFF block for progressively enhancing modal alignment, a query reconstruction task to ensure robust cross-modal alignment capabilities and a Geom-regularizer to calibrate uncertainty estimation. The proposed method has been evaluated on 4 datasets, demonstrating its effectiveness in Moment Retrieval, Highlight Detection and Video Summarization. The ablation studies also support the analysis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In figure3, I can’t see the difference between the two distributions except for the color, which might be confusing as to why one is unreliable and the other is trustworthy.\n- About the presentation. In 4.3, there is a significant disparity in the level of detail explained for different modules, perhaps the arrangement of content in the main text and appendix could be adjusted to make it clearer for readers.\n- The experimental section only shows the comparison with SOTA methods on various metrics. In the appendix, only some cases of the QVHighlights dataset are shown, without visual results for the other datasets mentioned in the paper, and it also lacks displays of comparative results for the three sub-tasks. \n- It would be more complete to have a discussion of this increased cost if there are any, as well as techniques used to overcome it.\n- (Minor) Minor typos/grammatical mistakes (e.g. 4.2 “VALLINA”)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why does the model only mask and reconstruct one noun? Would masking more words help enhance text sensitivity?\n2. In the conclusion, the authors claim that the model’s capabilities are limited by data quality and scale. ActivityNet-Captions and Ego4D-NLQ are large-scale datasets. Would the model perform well on these two datasets?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper extends the deep evidential regression to video temporal grounding for uncertainty estimation.\n2. The authors propose a Geom-regularizer to solve the counterintuitive uncertainty and calibrate the estimation of uncertainty. \n3. The proposed method achieves comparable performance in the majority of benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the issue of open-world challenges caused by open-vocabulary queries and out-of-distribution videos in video temporal grounding. The authors adopt the Deep Evidential Regression as baseline, and propose a Reflective Flipped Fusion block to realize modality alignment and query reconstruction. Meanwhile, a Geom-regularizer is proposed to debias and calibrate uncertainty estimation. Extensive experiments are conducted on the public dataset to validate the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The evaluation of location bias is insufficient. There are no transfer experiments on the Charades-CD and ActivityNet-CD datasets to validate the model in OOD scenarios, as done by MomentDETR and MomentDiff. \n2. The study of query reconstruction (QR) is not thorough. The authors only present performance across different QR epochs and learning rates.\n3. Insufficient performance evaluation. Ego4D-NLQ is widely used in previous works, yet this study does not report results on this dataset. Additionally, the paper fails to compare with recent works, such as \"R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding\" from ECCV 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper presents SRAM, a novel approach that leverages Evidential Deep Learning to enhance model's robustness and interpretability in Video Temporal Grounding tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024debiased,\ntitle={Debiased Deep Evidential Regression for Video Temporal Grounding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1V28zvLJMg},\nnote={under review}\n}"
},
"abstract": {
"value": "Existing Video Temporal Grounding (VTG) models perform well in accuracy but often fail to address open-world challenges posed by open-vocabulary queries and out-of-distribution (OOD) videos, which can lead to unreliable predictions. To address uncertainty, particularly with OOD data, we build a VTG baseline using Deep Evidential Regression (DER), which excels in capturing both aleatoric and epistemic uncertainty. Despite promising results, our baseline faces two key biases in multimodal tasks: (1) Modality imbalance, where uncertainty estimation is more sensitive to the visual modality than the text modality; (2) Counterintuitive uncertainty, resulting from excessive evidence suppression in regularization and uneven sample error distribution in conventional DER. To address these, we propose an RFF block for progressive modality alignment and a query reconstruction task to enhance sensitivity to text queries. Additionally, we introduce a Geom-regularizer to debias and calibrate uncertainty estimation. This marks the first extension of DER in VTG tasks. Extensive experiments demonstrate the effectiveness and robustness of our approach. Our code will be released soon."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Video Temporal Grounding",
"Uncertainty Quantification",
"Multi-Modal Fusion",
"Deep evidential regression",
"Evidential deep learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d8c1ec2d0da362eb2f91b0eea496e4d2063cb0c6.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Debiased Deep Evidential Regression for Video Temporal Grounding"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1VwWi6zbxs | Mastering Task Arithmetic: $\tau$Jp as a Key Indicator for Weight Disentanglement | main | Active | task arithmetic;model editing;task vector | transfer learning, meta learning, and lifelong learning | 3;5;5;6 | 4;2;2;3 | 2;2;3;3 | 3;2;3;3 | 3;3;2;3 | 4.75 | 2.75 | 2.5 | 2.75 | 2.75 | -0.622543 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* I find the notion of \"One-shot\" and \"fine-tuned\" experimental setting could be improved; First because the notion of fine-tuning can become confusing between the coefficients $\\alpha$ vs the model parameters $\\theta$ fine-tuning. Second, because it is not clear if it is referring to a specific method/objective for fine-tuning the task coefficients (e.g. AdaMerging or others) or simply hyperparameter search."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The paper is well motivated and grounded in previous related work\n* The proposed method is simple and could adapt to different task arithmetic variants\n* Interesting insights on the link between the proposed regularisation and weight disentanglement\n* A more efficient implementation of the method is proposed for handling larger number of tasks (Equation 11)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper tackles the task of task arithmetics, i.e. how to combine *task vectors/parameters* to form a multi-task model. A key issue is to determine the best combination weights as to minimise interference between tasks, and maximise sharing of information / positive transfer.\nMore specifically, the authors make use of two previously introduced notions: **(i)** the notion of **weight disentanglement** which was proposed as a measure of task interference in task arithmetic. And **(ii)** the Neural Tangent Kernel (NTK) which designates a training regime where parameter updates can be expressed with a linearised approximation.\nPrevious works have suggested that performing task arithmetics under the NTK regime can lead to better MTL performance. the authors investigate this behaviour in more depth. Based on this analysis, they also propose a regularisation technique to further reduce task interference when performing task arithmetic, which involves slightly fine-tuning the task vectors themselves."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Missing baselines on more recent task arithmetic work: The main tables should include some recent task arithmetic results (e.g. TIES-Merging and AdaMerging) as well as standard single task and MTL baselines (although it is in the appendix), if only to better understand the existing gap in performance.\n* Missing discussion about the extra cost: The paper briefly mentions efficiency of the method (e.g. equation 11 or line 364), however I think this could be discussed in more depth: On the one hand, the proposed method seems more robust to task coefficients $\\alpha$, which could save on hyper parameter tuning; On the other hand, it involves a fine-tuning procedure which requires knowledge/access to all tasks simultaneously (Equation 10) as opposed to directly combining task vectors obtained independently from one another."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How does the computational cost of the proposed method compare to existing approaches?\n- Can the method be adapted to work with limited or no access to data from other tasks?\n- How well does the approach generalize to other domains beyond image classification?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper provides a comprehensive theoretical and empirical study of the relationship between the proposed $\\tau$Jp metric and task interference in neural networks.\n\n2. The introduction of $\\tau$Jp as a new metric for weight disentanglement is novel and well-motivated.\n\n3. The proposed regularization method eliminates the need for tuning inference-time hyperparameters ($\\alpha$), which is a practical advantage."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new metric called $\\tau$Jp ($\\tau$-Jacobian product) for measuring weight disentanglement in task arithmetic operations on neural networks. The authors theoretically analyze the relationship between $\\tau$Jp and interference between tasks, and introduce a regularization method based on minimizing $\\tau$Jp during fine-tuning. Experiments on image classification tasks demonstrate improved performance and reduced need for hyperparameter tuning compared to existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method requires access to data from all other tasks during training, which is often unavailable in realistic task arithmetic scenarios. This limits the practical applicability of the approach.\n\n2. The computational cost of calculating τJp is likely very high, as it involves multiple Jacobian-vector products. The paper does not report runtime or resource requirements, making it difficult to assess scalability.\n\n3. Experiments are limited to image classification tasks. Evaluation on other domains like language tasks would strengthen the claims of generality.\n\n4. The derivation of Equation 7 from the weight disentanglement definition is non-trivial and should be explained more clearly."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Suggestions:\n\n1.The related work section could be improved by explicitly connecting prior studies to this paper's contributions, emphasizing how the proposed method addresses existing limitations. \n2.Consider moving the related work section after the methods section, especially since the current structure delays the introduction of the proposed method until page 5. This change would allow readers to quickly understand the proposed approach before diving into comparisons, enhancing readability and engagement."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The paper addresses an important and timely topic: in an era where foundation models are prevalent, better understanding weight disentanglement is particularly valuable for enhancing the practical applicability of these models.\n\n2.The proposed metric offers a deeper understanding of weight disentanglement, and the regularization method effectively reduces task interference, minimizing the need for coefficient adjustments.\n\n3.The success of the proposed method in incremental learning scenarios aligns well with real-world applications, demonstrating its scalability and practical relevance when future tasks are unknown."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel metric, $\\tau \\text{Jp}$ ($\\tau$-Jacobian product), to improve understanding of weight disentanglement in task arithmetic. It demonstrates that τJp inversely correlates with normalized accuracy, suggesting it as an indicator for weight disentanglement. A regularization technique is proposed to minimize τJp during fine-tuning, effectively reducing the need for coefficient adjustments in task addition and negation. It also proves valuable in incremental learning scenarios where future tasks are unknown."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.While the paper introduces the $\\tau \\text{Jp}$ metric and explains its relationship with weight disentanglement, the theoretical justification for why $\\tau \\text{Jp}$ regularization effectively reduces task interference could be further elaborated.\n\n2.The proposed regularization method lacks a comparison with other existing regularization techniques, which makes it difficult to fully assess its relative strengths and weaknesses. \n\n3.The paper mentions task addition, task negation, and task analogies in the introduction and background sections as key operations in task arithmetic, but there are no experiments evaluating task analogies. This inconsistency weakens the completeness of the experimental validation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the proposed regularization affect the model's plasticity? Specifically, how might the addition of this regularization impact the fine-tuning performance, potentially influenced by the strength of the regularization?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "+ This paper is well-written and easy to follow.\n+ The experiments are extensive, and the results sound good.\n+ The design of metric τJp is reasonable and interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel approach to task arithmetic in neural networks, which leverages a novel metric that quantifies the relationship between task vectors and the Jacobian of pre-trained models. The authors claim that by minimizing this metric through regularization, they can significantly reduce interference between task predictions and enhance the accuracy of task arithmetic operations. The experimental results demonstrate substantial improvements in performance for both task addition and task negation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The novelty of this paper may be limited. My consideration is that this paper seems to fundamentally align with the approach proposed by Ortiz-Jimenez et al. (2023) [1], which also emphasizes fine-tuning models in the tangent space. Although using the specific regularization term, this paper does not sufficiently differentiate itself from this existing work.\n- While the empirical results are compelling, the paper lacks a thorough theoretical explanation for why the proposed regularization leads to better performance compared to other methods, such as those discussed in Ortiz-Jimenez et al. (2023). I am confused about why a simple and soft regularization results in such improvement compared to [1]. A deeper theoretical analysis could strengthen the paper's contributions.\n- The authors briefly mention tuning the regularization strength but do not provide sufficient details on how this hyperparameter was selected. The sensitive analysis of this hyperparameter is also necessary for the paper.\n\n[1] Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Our proposed $\\tau$Jp regularizer improve the performance of task arithmetic and lead to its practical applications."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mastering,\ntitle={Mastering Task Arithmetic: \\${\\textbackslash}tau\\$Jp as a Key Indicator for Weight Disentanglement},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1VwWi6zbxs},\nnote={under review}\n}"
},
"abstract": {
"value": "Model-editing techniques using task arithmetic have rapidly gained attention. Through task arithmetic, simply through arithmetic operations on the weights of pre-trained and fine-tuned models create desired models, such as multi-task models, models with specific tasks unsolvable, or domain-transferred models. However, task arithmetic faces challenges, such as low reproducibility and the high cost associated with adjusting coefficients in the arithmetic operations on model parameters, which have limited its practical success. In this paper, we present three key contributions in the context of task addition and task negation within task arithmetic. First, we propose a new metric called $\\tau$Jp which is based on the product of the task vector ($\\tau$) and the Jacobian of the pre-trained model with respect to its weights. We show that $\\tau$Jp has a causal relationship with the interference that occurs from arithmetic operations. Second, we show that introducing regularization to minimize $\\tau$Jp significantly mitigates interference between task inferences, which leads to eliminating coefficient tuning and better accuracy on each task. Third, in the context of incremental learning, we confirmed that our $\\tau$Jp regularization demonstrates more robust performance in environments where future tasks to be learned are not accessible, validating the scalability of the approach. Finally, we demonstrate that the $\\tau$Jp regularizer further reinforces the performance of task arithmetic by leveraging publicly available fine-tuned models, offering practical benefits for real-world applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"task arithmetic",
"model editing",
"task vector"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/22d9dea5dae592047e0035c4c20f89b0f99b9681.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Mastering Task Arithmetic: $\\tau$Jp as a Key Indicator for Weight Disentanglement"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1W6oINj8ne | BRSSD10k : A SEGMENTATION DATASET \\OF BANGLADESHI ROAD SCENARIO | main | Active | Instance Segmentation;Computer Vision;Dataset;Autonomous Driving;Bangadeshi Road | datasets and benchmarks | 1;3;3;5 | 4;5;5;4 | 1;3;1;3 | 2;2;1;2 | 1;3;1;3 | 3 | 4.5 | 2 | 1.75 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "Human faces appear on the road. They are not removed and blurred."
},
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- The authors need to consider including more baselines for evaluation."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The authors have compiled a comprehensive dataset with over 10,000 high-resolution images and detailed instance segmentation annotations, covering a diverse range of geographic regions within Bangladesh.\n\n- A rigorous two-stage validation process for annotations ensures high-quality data, which is essential for developing robust and accurate computer vision models.\n\n- Comparative evaluation with multiple state-of-the-art models (e.g., YOLOv5, YOLOv8, YOLOv9) showcases the benchmark's effectiveness and sets a baseline for future research on BRSSD10k.\n\n- The inclusion of region-specific object classes (e.g., rickshaws, CNGs, informal stalls) provides a unique contribution, enabling autonomous systems to better understand and navigate environments outside of structured Western road layouts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces BRSSD10k, a segmentation dataset specifically tailored to the unique and diverse road scenarios in Bangladesh. This dataset consists of 10,082 high-resolution images from nine cities across the country, with detailed annotations covering 34 classes that reflect the region's distinct transportation environment. Classes include locally prevalent elements such as rickshaws, CNGs (auto-rickshaws), and informal roadside stalls, which are critical for developing robust autonomous driving systems for Bangladeshi roads."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The dataset only covers limited regions in one country, which is not enough to evaluate the generalization ability of segmentation.\n\n- The quality of the segmentation masks is not satisfactory.\n\n- Certain critical classes, such as traffic lights, construction vehicles, and road blockers, are underrepresented in the dataset.\n\n- The dataset currently lacks nighttime and adverse weather imagery (e.g., rain or fog), which are essential for real-world segmentation.\n\n- The paper only evaluates three versions of the YOLO model, which may limit insights into how BRSSD10k performs across different model architectures. \n\n- There is no analysis on how models trained on BRSSD10k generalize to other datasets or vice versa."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "It would make sense to extend the dataset with full panoptic labels.\n\nIt would make sense to cite and discuss related road driving datasets: ACDC, WildDash, FishyScapes, SegmentMeIfYouCan."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The dataset will likely prove as a valuable contribution to the field.\n\n- Many stuff classes are annotated (sky, road, wall, fence).\n\n- Little effort is required to extend the dataset for panoptic segmentation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The manuscripts presents a novel road-driving dataset for instance segmentation. The dataset includes more than 10000 high resolution images acquired along 9 cities in Bangladesh. The dataset taxonomy includes 34 classes that reflect typical needs of autonomous driving and regional characteristics. The taxonomy is mostly well-balanced (Figure 2). There are around 6000 training, 2000 validation and 2000 test images. The presented experiments involve object detection with stock models and report mAP50 performance on validation and test datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- it is hard to recommend n+1-th road-driving dataset for publication at a major conference\n\n- dataset focuses on typical images, for which our models are known to work well \n\n- the baseline models address only object detection (some universal segmentation model such as MaskFormer would be a better choice)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- [Q1] How is the dataset split into train/val/test? Do you perform geographic\n splitting, or is the splitting purely at random?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- [S1] Diverse object classes from multiple cities in Bangladesh, reflecting a\n unique label distribution that is materially different from other established\n datasets such as Waymo Open and nuScenes.\n- [S2] The authors also present the results of a few detection baselines based\n on YOLO, trained and evaluated on this dataset's corresponding splits."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new dataset focused on object detection and segmentation,\ntailored to the specific driving conditions in Bangladesh in terms of its\nappearance and taxonomy.\n\nThe dataset encompasses ~10k camera images collected in Bangladesh using a cell\nphone. The images are sourced from video chunks originating from diverse\nregions, and contiguous footage is sampled to 1 Hz. The frames are annotated\nwith object bounding boxes and segmentation masks.\n\nThe paper motivates the dataset as helpful in developing computer vision\nalgorithms specific to Bangladeshi driving scenes and performs a brief\ncomparative analysis of different YOLO-based models trained on this dataset. The\npaper helpfuly provides metadata like class and geographic distribution\nhistograms as well as many qualitative examples in order to help the reader get\na sense of the dataset.\n\nWhile it is definitely important to promote datasets which cover a diverse range\nof environments, I think the quantitative argument made in this paper to\nmotivate the dataset could be strengthened. For example, the argument could be\nimproved by showing experimental results which demonstrate the limitations of\nother dataset on data collected in Bangladesh."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- [W1] The related section could be made a bit more comprehensive. For example,\n it would be interesting to also discuss other datasets focusing on non-Western\n streets, such as the dataset introduced in [@traphic]. Even though it's\n mentioned later in the paper, the BadODD dataset should also be covered in the\n related work section and in the relevant tables.\n- [W2] While it is helpful to benchmark a few existing models on the proposed\n dataset, it would be beneficial to also compare these numbers with those from\n models trained on a mainstream dataset such as CityScapes or Mapillary Vistas.\n If models trained on a dataset like Cityscapes or Mapillary Vistas fail to\n perform well on this dataset, that would make for a good quantitative argument\n for why this dataset will help the community.\n - As a side-note, even if the taxonomy another dataset won't match the one in\n BRSSD10k perfectly, this gap could be alleviated by the use of an\n off-the-shelf VLM, which have been shown to be very good at tasks like open\n set object detection---see, for example, Grounding DINO [@liu2024grounding].\n- Minor Suggestions\n - Sections 7.3, 7.4, and 7.5 can be shortened and replaced with more\n comparisons, or additional details about the dataset or its software\n development kit. Readers can refer to the corresponding references if they\n are curious about the specific loss functions used to train these models.\n - The citation markers seem to be missing parentheses around them. For\n example, a sentence like \"... complex environments He et al. (2017)\" should\n be formatted like \"... complex environments (He et al., 2017).\"\n- References:\n - [@traphic]: Chandra, Rohan, et al. \"Traphic: Trajectory prediction in dense\n and heterogeneous traffic using weighted interactions.\" CVPR. 2019.\n - [@liu2024grounding]: Liu, Shilong, et al. 
\"Grounding dino: Marrying dino\n with grounded pre-training for open-set object detection.\" arXiv preprint\n arXiv:2303.05499 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "The authors strongly emphasize that the main motivation of this work is that there lack segmentation datasets in Bangladeshi. It should be clarified that the contribution of a dataset does not lay in its location, but the data quality, diversity, and scale."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "It clearly analyzes the practical scenario characteristics in Bangladeshi. The class definition and labeling process fully fits the scenarios. This dataset acts as a valuable resource for developing autonomous driving models in this country. It may also contribute to general vision perception tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a road segmentation dataset for autonomous driving purpose. It focuses on the scenarios in Bangladeshi and make specific adaptions in class definition and labeling. Validation experiments are conducted."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.The authors just use one paragraph to summarize the related datasets without any detailed comparison. I do not think the authors really understand the development of this field as there are only eight references. \n2. The scenarios in the dataset are more likely to be corner cases comparing with the mainstream segmentation datasets. Its universality cannot be verified.\n3.The structure of the manuscript is poorly organized. The logic between sections 3-6 are chaotic.\n4.It is really confusing that the authors validate the segmentation dataset with YOLO.\n5. It is really funny that the GT maps in Figure 3 are wrong."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "BRSSD10k is Bangladesh's first instance segmentation dataset for autonomous driving, with 10,082 high-res images from 9 cities. It offers detailed road element annotations, providing a key benchmark for AI models in diverse South Asian road scenarios"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024brssdk,\ntitle={{BRSSD}10k : A {SEGMENTATION} {DATASET} {\\textbackslash}{\\textbackslash}{OF} {BANGLADESHI} {ROAD} {SCENARIO}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1W6oINj8ne},\nnote={under review}\n}"
},
"abstract": {
"value": "In this paper, we present a novel Bangladeshi Road Scenario Segmentation Dataset designed to advance autonomous driving technologies under the challenging and diverse road conditions of Bangladesh. This comprehensive instance segmentation dataset comprised 10,082 high-resolution images captured across nine major cities, including Dhaka, Sylhet, Chittagong, and Rajshahi, addressing the critical need for region-specific computer vision data in developing countries. Unlike existing autonomous driving datasets that primarily focus on western road conditions, BRSSD10k encompasses a wide range of environments unique to Bangladesh, including unstructured urban areas, hilly terrains, village roads, and densely populated city centers. The dataset features instance segmentation annotations with classes specifically tailored to reflect the distinctive elements of Bangladeshi roads, such as rickshaws, CNGs (auto-rickshaws), informal roadside stalls, and various nonstandard vehicles. To demonstrate its utility as a benchmarking tool for autonomous driving systems, we present comparative results from several state-of-the-art instance segmentation models tested on this dataset, achieving an mAP of 0.441. This evaluation not only showcases the dataset's effectiveness in assessing model performance but also underscores the need for adaptive algorithms capable of handling diverse and unpredictable urban environments in the context of autonomous navigation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Instance Segmentation",
"Computer Vision",
"Dataset",
"Autonomous Driving",
"Bangadeshi Road"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/579a826d28a09538bc6be840565dc960b193450e.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/988c17d58f6bc59372415bbca67af3f085fe26cd.zip"
},
"title": {
"value": "BRSSD10k : A SEGMENTATION DATASET \\\\OF BANGLADESHI ROAD SCENARIO"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1X1R7P6yzt | Discrete GCBF Proximal Policy Optimization for Multi-agent Safe Optimal Control | main | Active | control barrier functions;multi-agent systems;black-box systems;partial observability;reinforcement learning | reinforcement learning | 5;5;6 | 3;3;4 | 2;2;3 | 2;2;3 | 2;3;3 | 5.333333 | 3.333333 | 2.333333 | 2.333333 | 2.666667 | 1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why is the dependency of the algorithm on a nominal policy a bad idea in the given settings? Since it appears easy enough to construct one (say a PID controller like in [1]) for the environments given, is this the right direction?\n2. What is the difference between the training and inference situations in terms of the number of agents? Does the algorithm need to be retrained for every new number of agents unlike in [1] where the algorithm was trained on 8 agents and deployed on up to 1024 agents (albeit being purely concerned with single goal reaching while avoiding collisions)?\n3. With regards to the sample efficiency and computation requirements, how is DGPPO w.r.t. the baselines (I noticed the training time was listed as 12 hours on the reference specifications)? On a related note, how is the benefit of a constant set of hyperparameters demonstrated? Can we confidently say the hyperparameter search for the baselines takes significantly longer (in wall clock time on a comparable machine)?\n4.What are the restrictions on the definition of the avoid set $\\mathcal{A}_i$ and the assumptions on the function $h_i^{(m)}$? Do the avoid sets primarily represent distance to $y^k$ greater than some safe radius?\n5. The LiDAR part of the observation appears less clear. From the appendix (Sec B.2.1) is it right to say that only the LiDAR environments use the 32 equally spaced ray capturing relative positions? How are the obstacles in the VMAS environments represented to the agent?\n6. The experiments with scalability to multiple agents (Fig. 5) appear quite close to the baselines. Is there a better comparison available?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The proposed method (DGPPO) is an elegant way to solve the discrete-time distributed MASOCP (multi-agent safe optimal control problem) with unknown environment dynamics. This assumption was not present in previous work which had to differentiate through the given transition functions.\n- The theorems introduced provide a solid foundation for the applicability of DGPPO in the discrete-time setting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The proposed DGPPO framework addresses challenges in multi-agent systems (MAS) by learning both a discrete graph control barrier function (DGCBF) and a high-performance safe policy under unknown discrete-time dynamics, changing neighborhoods, and input constraints. DGPPO combines reinforcement learning and DGCBF, achieving high task performance and safety across varied environments without needing a pre-existing nominal policy or multiple hyperparameter sets, consistently outperforming other methods in both metrics of safety rate vs cost for various simulations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Scalability appears limited (only up to 7 agents) compared to the continuous time setting of GCBF+ [1] (most likely due to the present of unknown environment dynamics and the noise introduced through the sample score function gradient).\n- The stability of DGPPO compared to the baselines does not seem appropriately explained. Is it a purely empirical observation or is there some theoretical justification available?\n- In the given setting, why is the assumption of unknown dynamics interesting? To me, the environments considered are purely the settings of [1] without using the environment dynamics directly (even though they are available). Would it not be a better idea to consider an environment where the dynamics are not as simple as the ones in [1] or some complex unknown function (for e.g., common Mujoco robots)?\n\nReferences:\n\n[1] GCBF+: A Neural Graph Control Barrier Function Framework for Distributed Safe Multi-Agent Control, Zhang et al, T-RO, 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1: How does DGPPO ensure that agents achieve the global objective, rather than just meeting safety constraints? \n\nQ2: Could DGPPO’s rollout requirements be reduced to improve sample efficiency without compromising safety? \n\nQ3: What are the practical scalability limits of DGPPO when applied to larger MAS, particularly with the use of GNNs?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors present an innovative combination of reinforcement learning and discrete-time CBFs to address the challenges of safety and task performance in MAS with unknown dynamics. The extension to the DCBF framework in discrete-time and the introduction of DGCBFs allow for neighborhood adaptability, overcoming limitations associated with continuous-time CBFs. The approach is well-motivated, tackling safety in unknown environments without requiring a predefined nominal policy—a substantial improvement for multi-agent reinforcement learning (MARL). I particularly appreciate the rigorous theoretical presentation to support the proposed approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel framework, Discrete Graph Control Barrier Functions Proximal Policy Optimization (DGPPO), for ensuring safe control in multi-agent systems (MAS) operating under unknown discrete-time dynamics and input constraints. Unlike prior approaches that rely on continuous-time models and known nominal policies, DGPPO incorporates discrete CBFs (DCBFs) and reinforcement learning to dynamically learn both a high-performance safe policy and a discrete graph CBF (DGCBF). Through extensive empirical validation across various simulated MAS environments, DGPPO claims to achieve both high task performance and safety without the need for hyperparameter adjustments specific to each environment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While DGPPO introduces a novel safety mechanism for MAS, nonetheless I believe there are few critical concerns that could limit the effectiveness and general applicability of the approach.\n\n**Lack of clear performance metrics**: \n\nwhile DGPPO is shown to achieve high safety, the empirical results focus primarily on safety metrics and the minimization of constraints. It remains unclear if the agents are successfully accomplishing global objectives. Without metrics like mean global success rate or reward, it is difficult to assess if the agents are merely achieving safety (e.g., by staying stationary) rather than making meaningful progress toward the task goals while satisfying the safety constraints. This is especially relevant as the DGPPO framework does not incorporate a nominal policy, meaning that without these metrics, the experiments risks overlooking cases where the agents avoid unsafe states at the expense of task completion. Does the proposed framework present mechanisms to ensure that safety constraints do not excessively dominate the objective?\n\n**Limited scalability experiments**: \n\nthe authors state that DGPPO scale well w.r.t to the baseline approach tested, however testing 5 agents instead of 3 as in the original experiment, I believe it is too limited to claim scalability of the proposed approach. Crucially, as stated from the authors themselves the proposed approach requires both stochastic and deterministic rollouts to enforce DGCBFs. While this approach ensures safety in discrete-time settings, it also introduces significant sample inefficiency, which may limit the framework’s scalability to larger or more complex MAS. Hence, an extensive test with for instance 10 or 15 agents would strength the results of the paper.\nMoreover while the authors employ GNNs with attention mechanisms to handle changing agent neighborhoods, the computational complexity of GNNs in larger MAS could become important. 
In high-density environments with frequent neighborhood changes, maintaining an updated and accurate DGCBF through GNNs could pose significant computational challenges, possibly impacting real-time applicability. A detailed discussion on the scalability of GNN-based policies for larger agent systems would add valuable context to the method’s limitations.\n\n**Dependence on hyperparameter $\\nu$ for constraint enforcement**: \n\nthe authors claim on the fact that DGPPO is less sensitive to hyperparameters does not seem to be properly backed up. From the plot in Fig. 6b the value of $\\nu$—responsible for controlling the weight on constraint minimization steps—significantly impacts performance. Misalignment in $\\nu$ could lead to either overly conservative or unsafe policies, showing that DGPPO still requires careful tuning, contrary to its stated hyperparameter robustness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(1) The policy in this work only takes local observations as input, such that it is decentralized. Why do you refer to it as a distributed policy? \n\n(2) The modified constraint (11b) is too strict which cannot be satisfied by a lot of policy classes, such as the Gaussian policy.\n\n(3) In (12), please provide the explicit gradient formula when the safety condition is violated. Note that the authors provide a gradient in (41). Nevertheless, this gradient is not associated to the policy loss function (under safety violation) in (12).\n\n(4) Theorem 3 is very difficult to understand. The orthogonality assumption is impractical. The reviewers also find that the authors try to replace the stationary state distribution (which is a function associated to the policy parameter $\\theta$) with a constant state distribution in this theorem to obtain their gradient $g$ in (13). What is the purpose of giving Theorem 3? What is the relationship between (13) and (14)? \n\n(5) The reason of using discrete graph CBFs should be explained clearly. Note that we can regard the multi-agent system as a large but single agent. Then, you can directly use the discrete CBF given in Theorem 2 to learn safe policies. In this case, the distributed control nature can still be preserved as the learned observation-based policy is end-to-end.\n\n(6) Theorem 4 is hard to understand. What is the relationship between the discrete graph CBF and the discrete CBF? Similar to Theorem 1, it is important for the authors to show that the safe set is forward invariant based on the discrete graph CBF.\n\n(7) In (11b), the safety constraint is calculated using a stochastic policy $\\pi$. However, in Fig. 1, deterministic policies are used for estimating the discrete graph CBF.\n\n(8) Why do the agents have different GAEs in your algorithm? Are you suggesting that the agents are heterogeneous and that their local policies differ?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "(1) The multi-agent safe optimal control problem considered in this paper is both general and challenging, as neither the system model nor a nominal policy is available in advance.\n\n(2) The learned policy is safe, which does not require additional safety filters in implementation.\n\n(3) Extensive simulations are conducted, and state-of-the-art baselines are compared."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a safe multi-agent reinforcement learning method based on distributed control barrier functions (CBFs) for multi-agent systems with limited perception capabilities. Simulation results on several multi-agent safe coordination tasks demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There exist several theoretical issues in the paper. The motivation of employing the discrete graph CBF is unclear. Some implementation details should be incorporated. See Questions part for more details."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024discrete,\ntitle={Discrete {GCBF} Proximal Policy Optimization for Multi-agent Safe Optimal Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1X1R7P6yzt},\nnote={under review}\n}"
},
"abstract": {
"value": "Control policies that can achieve high task performance and satisfy safety constraints are desirable for any system, including multi-agent systems (MAS). One promising technique for ensuring the safety of MAS is distributed control barrier functions (CBF). However, it is difficult to design distributed CBF-based policies for MAS that can tackle unknown discrete-time dynamics, partial observability, changing neighborhoods, and input constraints, especially when a distributed high-performance nominal policy that can achieve the task is unavailable. To tackle these challenges, we propose **DGPPO**, a new framework that *simultaneously* learns both a *discrete* graph CBF which handles neighborhood changes and input constraints, and a distributed high-performance safe policy for MAS with unknown discrete-time dynamics.\nWe empirically validate our claims on a suite of multi-agent tasks spanning three different simulation engines. The results suggest that, compared with existing methods, our DGPPO framework obtains policies that achieve high task performance (matching baselines that ignore the safety constraints), and high safety rates (matching the most conservative baselines), with a *constant* set of hyperparameters across all environments."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"control barrier functions",
"multi-agent systems",
"black-box systems",
"partial observability",
"reinforcement learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d19a658d0ce2d53bb4213598817fd2d41831ee94.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a419ca7c7867d4551af7467c34c3bf123dd67fd0.zip"
},
"title": {
"value": "Discrete GCBF Proximal Policy Optimization for Multi-agent Safe Optimal Control"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1X85iw7tqY | CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning | main | Active | clip;synthetic data;multimodal learning;longtail | applications to computer vision, audio, language, and other modalities | 3;5;6;6 | 5;4;4;4 | 3;3;3;3 | 2;3;3;3 | 3;3;3;3 | 5 | 4.25 | 3 | 2.75 | 3 | -0.942809 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The pipeline can be thought of as a way to distill knowledge from the language models and stable diffusion models to augment the dataset of CLIP. This is an interesting way to inject new information in synthetic data. \n\n- The results are good, demonstrating improvements over CLIP while maintaining the amount of data it sees since the fix the number of iterations and just change the proportions of real vs their synthetic data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a controllable image-text generation pipeline that can augment data to improve CLIPs image retrieval, classification, and compositional performance. Specifically, they leverage strong vision models to tag images with objects and attributes, use the knowledge in language models to create new variations of the captions, and use diffusion models to generate images based on the new captions as prompts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- They say that the language model takes an instruction on how to generate a caption given the visual tags. They show some examples in Appendix A1. The instructions don't mention any editing, it mostly just says to describe the image better. In that case, do the gains come from some hallucination in the LLM caption that makes varied images?\n\n- Have the authors tried any other variation of editing instructions? Is there any analysis on the kinds of image editing prompted by the text that improve performance more? Are there specific prompts that serve as better negatives when tuning the CLIP contrastive loss?\n\n- There are other works that edit images based on text instructions like instruct pic to pic, magic brush etc. It might have been nice to see to see if editing certain things in images based on the LLM prompts is better than just using SD to generate since SD can often lack accuracy in generating the correct attribute object relation compositions. \n\n- Nit: There are several works that either generate synthetic images based on the dataset they want to target (https://arxiv.org/pdf/2406.05184), or for cross domain retrieval (https://arxiv.org/pdf/2401.00420). A discussion for comparison could be nice."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Refer to the Weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper focuses on noise and misalignment in the large-scale image-text datasets, which is a critical challenge in multimodal learning.\n- The paper introduces an innovative approach that emphasizes fine-grained control, utilizing generative models to decompose and refine images and texts at a detailed level. Notably, it is training-free and suited for integration with different pre-trained generative models.\n- The experiments presented in the paper show that the proposed method improves downstream performances of multimodal models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes CtrlSynth, a controllable image-text synthesis pipeline for data-efficient multimodal training. Addressing limitations in existing large-scale datasets that are often noisy and misaligned, CtrlSynth enables fine-grained control by decomposing images into basic elements and applying user-specified modifications to synthesize new data. This training-free and flexible pipeline can work with different models and supports closed-loop synthesis (image to text and vice versa). The proposed method also boosts the performance of multimodal model training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper, while contributing valuable ideas, has several notable weaknesses that are significant and need to be addressed. \n\n### Methodological Weaknesses\n- The proposed pipeline shares significant similarities with GenArtist[1] on image editing. The paper does not clearly demonstrate the differences between this work and GenArtist. It is important for the authors to specify these distinctions and highlight the novelty of their approach. Additionally, a thorough comparison should be incorporated into the experimental section to strengthen the evaluation.\n- While fine-grained control is presented as the main contribution, represented by the text and image controllers in the pipeline, the design is inadequate and lacks clarity. The design of the pipeline does not effectively demonstrate how the editing condition is provided to the generative model in a fine-grained manner. The text controller relies solely on prompt concatenation, making the mapping between visual tags and policies unclear and limiting precise control. Additionally, the paper does not address how to maintain image consistency after editing, which is essential for practical use. These shortcomings contribute to potential inconsistencies and an insufficient explanation of how fine-grained control is maintained. The image controller exists the same problem.\n\n### Experimental Limitations\n- The datasets used (CC3M and CC12M) are relatively small, with no experiments conducted on larger datasets such as LAION-400M or LAION-5B.\n- The paper only tests a limited range of multimodal model structures, lacking experiments on models like BLIP and CLIP of different ViT models.\n- The study does not address data-efficiency validation. Existing data-efficiency-focused works, such as SemDeDup[2], Filter-&-Align[3], and Sieve[4], refine or filter datasets for better performance. 
The paper should include comparisons with these approaches in terms of model performance and the amount of training data.\n\n---\nReference\n\n[1] Zhenyu Wang, Aoxue Li, Zhenguo Li, and Xihui Liu. GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing. arXiv.2407.05600.\n\n[2] Amro Abbas, Kushal Tirumala, Daniel Simig, Surya Ganguli and Ari S. Morcos. SemDeDup: Data-efficient learning at web-scale through semantic deduplication. arXiv.2303.09540.\n\n[3] Lei Zhang, Fangxun Shu, Tianyang Liu, Sucheng Ren, Hao Jiang, and Cihang Xie. Filter & Align: Leveraging Human Knowledge to Curate Image-Text Data. arXiv.2312.06726.\n\n[4] Anas Mahmoud, Mostafa Elhoushi, Amro Abbas, Yu Yang, Newsha Ardalani, Hugh Leather, and Ari Morcos. Sieve: Multimodal Dataset Pruning Using Image Captioning Models. arXiv.2310.02110."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Can the authors provide details on the overall efficiency of the proposed pipeline? For example, how long does it take to generate 1 million images along with their captions? It would also be good to know the time cost at each component, e.g. vision tagging, caption generation, image generation. A more complete picture of the efficiency in the pipeline would better help to assess the value of this work."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The idea is clear and effective: by combining multiple expert models, we can obtain fine-grained image tags, captions, and synthetic images, which together help to create a high-quality synthetic dataset.\n\n2. The modularized pipeline is flexible, as each model can be replaced without affecting the performance of the other components.\n\n3. Experiments are comprehensive. Compared to the baseline CLIP, the improvements from CtrlSynth are evident."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a multimodal data synthesis pipeline called CtrlSynth. Specifically, CtrlSynth includes a vision tagging model to extract key objects, attributes, and relations from an image, which can then optionally be combined with the original text for a language model to generate new image descriptions. Finally, the newly generated image caption is input into a text-to-image model to generate an image. The authors have demonstrated the effectiveness of their pipeline by comparing it with CLIP pretraining data. Overall, the enhanced dataset appears to be superior to the original one."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Practical concerns: By using several models, such as the vision tagging model, LLM, and diffusion model, the proposed method might not be efficient for scaling up to larger datasets, particularly considering the time cost associated with image synthesis.\n\n2. The assumption behind CtrlSynth is based on a fixed number of data samples, where the method helps a model achieve better performance than training on the original dataset. However, given the recent trends in LLM and multimodal LLM research, where pretraining data continues to scale up, the proposed method may not be scalable for very large datasets. While this is a challenge, under the current setting in the paper, CtrlSynth is indeed effective."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Introduces a new controllable synthesis pipeline (CtrlSynth) that allows fine-grained data manipulation, enabling user-defined policies for image-text synthesis.\n- Achieving significant performance improvements across diverse tasks such as zero-shot classification, retrieval, and long-tail recognition is inspiring.\n- Clearly explains the methodology with diagrams and examples, easy to understand the synthesis process and its components."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces CtrlSynth, a controllable pipeline for generating synthetic image-text data to improve multimodal models. By allowing fine-grained control over data synthesis, CtrlSynth decomposes and recomposes visual semantics using pretrained models, enhancing diversity and alignment of generated samples."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In the ablation experiments, it was observed that the performance improvement brought by CtrlSynth-img alone is minimal. Would it be possible to completely remove the generation of synthetic images and focus resources on improving the diversity and quality of synthetic text? Would this lead to consistent performance improvements across all tasks?\n\n2. The paper mentions that CtrlSynth uses a self-filtering mechanism to improve the quality of synthetic data, but it lacks detailed explanations about the implementation, such as how the alignment threshold for visual tags is selected.\n\n3. The paper does not explain in depth how CtrlSynth fundamentally differs from other caption augmentation methods like VeCLIP and LaCLIP. It is necessary to provide a clearer comparison, clarifying whether the increased diversity brought by the decomposition of visual tags and user control strategies is more important, or whether it is the generation of more fine-grained semantic captions that matters.\n\n4. The experiments may be limited to a few selected models (e.g., Mistral-Nemo and Qwen2-7B). Would using larger LLMs lead to better results? \n\n5. A drawback of this method is that the data generation pipeline involves multiple different models and is not end-to-end in training, requiring substantial resources and time for building the synthetic data in the early stages."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ctrlsynth,\ntitle={CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1X85iw7tqY},\nnote={under review}\n}"
},
"abstract": {
"value": "Pretraining strong vision or multimodal foundation models like CLIP relies on large-scale datasets (e.g., image-text pairs) that may be noisy, potentially misaligned, and have long-tail distributions. Previous work has shown promising results in augmenting datasets by generating synthetic samples. However, they only support domain-specific ad hoc use cases (like for image or text alone) and are limited in data diversity due to a lack of fine-grained control over the synthesis process. \nWe design a controllable image-text synthesis pipeline called CtrlSynth to enable data-efficient multimodal learning and improve vision and multimodal models in various use cases. The key idea is to decompose the visual semantics of an image into basic elements, apply user-specified control policies (e.g. remove, add, replace operations), and recompose them to synthesize images or texts. The decompose and recompose feature in CtrlSynth allows users to control data synthesis in a fine-grained manner by defining customized control policies to manipulate the basic elements. CtrlSynth leverages the capabilities of pretrained foundation models such as large language models (LLMs) or diffusion models (DMs) to reason and recompose basic elements such that synthetic samples are natural and composed in diverse ways. CtrlSynth pipeline is training-free and has a modular design, making it easy to support different pretrained models. \nCtrlSynth pipeline is also closed-loop, meaning it can synthesize text data based on the image or vice versa. Our evaluation shows that CtrlSynth samples substantially improve zero-shot classification, image-text retrieval, and compositional reasoning performance of CLIP models. We will publicly release the code and pipeline for future research."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"clip",
"synthetic data",
"multimodal learning",
"longtail"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4a9c22db8db43cdc1ea19455bc0fad09ada14da8.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Xg4JPPxJ0 | Are Transformers Able to Reason by Connecting Separated Knowledge in Training Data? | main | Active | Transformer; Chain-of-Thought; In-Context-Learning; Compositional Generalization | foundation or frontier models, including LLMs | 5;6;6;6 | 3;3;3;3 | 3;2;2;3 | 3;3;2;3 | 1;2;3;2 | 5.75 | 3 | 2.5 | 2.75 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In the FTCT learning task (e.g., Figure 1), why in the $D_{train}$, we need to add noisy tokens in the token sequence? Why in the $D_{test}$ we do not add noisy tokens in the prompt?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper investigates whether transformers are capable of generalizing to longer reasoning chains through connecting shorter ones seen in the training stage, which is an interesting and important research question.\n2. The paper is technically sound: the trained transformers behave compositionally (with few-shot chain-of-thought prompting) and the authors provide insights on its internal workings: induction head and attention assignment, demonstrating that the transformer learn a generalizable program in its internal computing.\n3. Authors also theoretically prove that Transformers have the expressivity to simulate the generalizble underlying program."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The work sets out to investigate whether transformers are capable of generalizing to longer reasoning chains through connecting shorter ones seen in the training stage. The authors introduce \"Fragemented at Training, Chained at Testing\" learning task to train a randomly initialized 3-layer 3-head GPT2-like transformer. They find that with few-shot chain-of-thought prompting, transformers can perform good compositional reasoning skills by combineing fragments together. The authors further show that the generalization performance highly correlates to model complexity (require multiple-layer attention structure) and high relative knowledge ratio of training data. The paper also discusses the internal working of the model (learn an underlying generalizable program) to interpret the transformer's generalization behaviors and provide theoretical insights on transformer's expressivity on learning a such program."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Since the experiment setting is a randomly initialized transformer trained on synthetic data, to what extent the paper's conclusion can be extended to real pre-trained language models is questionable.\n2. the notations used in the paper are quite complicated, making the paper a little bit difficult for readers to follow."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My main concerns have already been expressed in the \"weakness\" section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The design of the FTCT task is well-conceived, as it effectively mimics real-world scenarios where knowledge is often fragmented and must be integrated to draw comprehensive conclusions. This setup provides a meaningful and practical benchmark to evaluate the compositional reasoning abilities of Transformers, making the study relevant and valuable for advancing our understanding of machine learning models' capabilities.\n- Chapter 5, \"transformer does compositional reasoning via the underlying program\", is very interesting as it explores the possible underlying mechanisms and principles that allow Transformers to perform compositional generalization. This chapter goes beyond just presenting empirical results by looking into how these models might internally handle and integrate fragmented knowledge. This deeper investigation adds value by giving us a better understanding of how Transformers achieve complex reasoning tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the \"FTCT\" (Fragmented at Training, Chained at Testing) task to evaluate if Transformers can perform compositional reasoning similar to humans. The task involves training models on separate knowledge fragments and testing them on integrating these fragments to form complete causal graph traces. The study finds that few-shot Chain-of-Thought prompting helps Transformers combine these fragments correctly, even without seeing such combinations during training. The results indicate that model complexity and the data's knowledge ratio play a role in enabling this skill. The authors provide theoretical and empirical evidence for their claims, showing that Transformers can learn a generalizable program to aid in compositional reasoning. The findings are interesting and suggest potential areas for further exploration."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the task studied in this paper requires strong compositional generalization abilities, it is simple and singular in its form. Generally, using a simple and singular synthetic dataset is suitable for highlighting the shortcomings of the Transformer architecture. However, since the paper concludes that Transformers possess this capability, the experiments on this task alone are not sufficient to support such a conclusion. I believe that more diverse and comprehensive tasks are needed, and ideally, this capability should also be validated on complex real-world tasks.\n- In the related work section, the paper discusses various tasks used to probe compositional generalization abilities. The authors mention that these existing tasks have not been studied from the perspectives of few-shot prompting and chain-of-thought reasoning. However, this distinction alone is insufficient; if the difference is merely in this aspect, it would be possible to modify existing datasets instead of creating a new one. The novelty of this newly created task is not demonstrated well. Therefore, the authors need to provide more explanation regarding the motivation and innovation behind the proposed task.\n- The experiments in this paper use Transformers with relatively small parameter sizes. It is unclear whether the conclusions drawn from these experiments would hold true for larger Transformer models. This limitation raises questions about the generalizability of the findings to more complex and sizable architectures."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Although it is clear that the model is learning some ability to connect reasoning chains together (e.g. during training the model might just as AB and BC and correctly chain ABC), will the model be able to correctly chain together the values of AC? This could make for an interesting experiment where we could have some skip links in the test data and check for values accuracy\n- Checking my understanding, is there a typo in Figure 1, where B=106 and B=103, should it be C=108 and C=105, respectively?\n- Are there more than one set of the causal chains? The set equation in line 155 seems to suggest there is only one sequence of length n.\n- Why are the noise vertices inserted in a predictable manner?\n- I am curious about this 0.3 relative knowledge ratio threshold where it is reported that compositional reasoning emerges. Could it be that 0.3 is when the probability that there is at-least one occurrence for every (v_i, v_{i+1}) in the train set reaches close to 1? \n- Why is there a drop in performance in Figure 2 (right) and relative knowledge of 0.25?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper presented a very intriguing and creative approach to testing the ability for models to learn compositional reasoning ability\n- There are some really interesting results, specifically the exact complexity (and the increased expressability) needed for the transformer architecture to optimally solve the FTCT task\n- The insights regarding the few shot CoT results are of significance and spark further research in this area\n- The empirical findings of how the transformers performs this task is enlightening and should spark some interest for further research in this area"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates how well transformers are able to perform compositional reasoning tasks. To that end, the paper introduces a new dataset methodology, namely the Fragmented Training, Chained at Testing (FTCT) that simulates how models would be presented with training data in practice (with incomplete fragments of reasoning paths with noise + context) and how well the model is able to piece together the full reasoning chain in test-time. Using this methodology, the paper runs insightful experiments that ablate different lengths of partial reasoning chains during training, different transformers and neural architectures, and number of few shot CoT examples. Through these experiments, the authors find that few shot CoT plays an important role for compositional reasoning, the impact of increasing relative knowledge ratio, and the increasing expressibility of adding layers and heads in the transformers architecture. Lastly, the paper presented some empirical evidence that you need a certain complexity of the transformers architecture to simulate the optimal program for the FTCT task in training and testing."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The clarity of this paper is lacking, especially in the notation and writing. For instance, in Figure 1, there is a seeming typo in some of the values that contradicts the setup of the dataset. Separately, some concrete examples of the data (including noise + context tokens) of the FTCT dataset would really improve the readers understanding (it took me multiple re-read to get the gist of the methodology)\n- The paper's definition of compositional reasoning should be explicitly written out in the paper. The only real definition of this is in the abstract where it is stated that \"For example, if someone learns ( B = f(A) ) from one source and ( C = g(B) ) from another, they can deduce ( C = g(f(A)) ) effortlessly, even without encountering ( AC ) or ( ABC ) together, showcasing their compositional generalization ability.\"\n- With this FTCT methodology, it seems clear that the model is learning some ability to connect sequential reasoning chains together (e.g. during training the model might just as AB and BC and correctly chain ABC), but the approach does not test if the model can correctly reason about AC in test-time, which is an aspect of compositional reasoning (as mentioned in the abstract)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Table 1 shows that all Transformer models attain 1.0 Values accuracy, even for small models that get very low Vertices accuracy. Can you account for this discrepancy?\n2. An unintuitive pattern in the results (e.g., Fig. 2 and Table 3) is that accuracy *decreases* with the number of few-shot prompts $>1$. This results stands in contrast to a standard ICL setting, where inclusion of more examples typically improves performance. It is stated that this is “possibly due to increased dissimilarity between training and testing data with more CoT examples” (Line 277-278). Why does including more CoT examples causes the test data to be OOD? If this is the case, this seems like an important confound affecting this experiment setup that may not be adequately accounted for.\n3. It is interesting to contrast the results from Sec. 5-6 with Zhang et al., 2022 (“On the Paradox of Learning to Reason from Data”), who apply a similar methodology but find that gradient descent fails to discover the correct $\\theta^*$ for a logical induction task with very similar structure. Is there a reason why here the training succeeds at uncovering the underlying program, whereas in previous work it does not? More generally, it would be nice to see reference to this paper in the discussion."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**Motivation:** The key research questions of the paper are clearly delineated in the introduction:\n\n1) When are Transformers able to perform compositional reasoning by connecting fragmented knowledge in training data?\n2) How do different training factors impact the emergence of this ability?\n3) What internal mechanisms enable Transformers to develop this ability?\n\nThese questions are broadly relevant to current research and the paper is structured in a way that consistently centers these 3 questions throughout.\n\n**Mechanistic interpretability analysis:** I especially enjoyed the approach to question (3). Broadly speaking, the authors approach this question by first demonstrating that there exists a program that solves the task (Sec. 5.1) and that this program can be approximated by a 2-layer Transformer (Sec. 5.2). Then, through linear probing experiments (Sec. 6), they give an empirical argument that the Transformers trained on FTCT have learn to implement this program. I am not an expert on probing so I can’t speak to the soundness of the methods, but I found the combination of Sec. 5-6 to be an elegant argument from a mechanistic interpretability standpoint."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "**Aims:** This paper seeks to understand at a mechanistic level how Transformers are able to perform compositional reasoning in a few-shot chain-of-thought setting.\n\n**Methods:** A synthetic dataset is generated consisting of chains of nodes and edges derived from causal graphs. At training time, spurious nodes are inserted randomly into the chains; at testing time, few-shot prompts consisting of intact chains (no spurious nodes) are provided to the model. Models are tested on their ability to reconstruct full causal chains from fragmented chains learned in training, with evaluation based on accuracy in predicting both the correct vertex order and values in the chain.\n\n**Results:**\n\n- Zero-shot versus few-shot prompting is compared, with findings showing that few-shot CoT prompting significantly enhances performance in compositional reasoning tasks, particularly in forming the correct vertex order.\n- A space of small, GPT-2-style models ranging from 42M-54M parameters are trained on the FTCT dataset. Results show that multi-layer, multi-head Transformers (minimum 2 layers and 2 heads) perform notably better, while single-layer/single-head models and MLPs perform poorly.\n- The impact of training data’s relative knowledge ratio (ratio of child chain length to complete chain length) is studied, with a critical threshold (ratio ≥ 0.3) identified where compositional reasoning reliably emerges.\n- Mechanisms underlying the model's success, such as induction heads for in-context learning and attention patterns facilitating parent-child relationships, are analyzed through linear probing, revealing specific mechanisms by which the model achieves compositional reasoning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Novelty:** The paper claims to “introduce a learning task” (FTCT) based on causal graphs, and yet the design of this task is nearly identical to the setup in Prystawski et al. (2024). Given that the main distinction between FTCT and the prior work is the introduction of spurious nodes (line 105-106), I would expect to see this prior work—which actually *did* introduce a novel learning task—given more prominent attribution. \n\n- (Currently this work is referenced in Line 103—”The empirical findings of our work align with the observation in (Prystawski et al., 2024)…” The wording of this reference obfuscates the underlying causal structure that this prior work likely played in informing the current paper.)\n\n**Generality:** The key findings of this paper are framed in very broad terms: \n\n> “The emergence of compositional reasoning is highly influenced by the data’s relative knowledge ratio and model complexity. Specifically, a relative knowledge ratio of at least 0.3 and a Transformer architecture with at least two layers and two heads are critical for achieving this ability.” (Lines 520-523)\n> \n\nHowever, these conclusions are all drawn relative to one synthetic dataset with a highly-specific structure; it is unclear to what extent the empirical conclusions (e.g., compositional reasoning in transformers requires a relative knowledge ratio ≥ 0.3) generalize beyond the FTCT task. To make a convincing argument that these results have meaning beyond this one benchmark, this analysis ought to be replicated on more naturalistic reasoning benchmarks where few-shot CoT prompting is commonly used.\n\n**Clarity:** The description of the FTCT dataset/task design (Sec. 3) fails to convey a clear description of the experiment setup and requires too much work of the reader. All aspects of prompt construction are described in excruciating formal detail, making it hard to separate design choices that are key to the experiment from implementation details. 
Overall, the formalism in this section is a barrier to understanding what’s going on at a more basic level.\n\n- Fig. 1 lacks basic signposting needed to convey what is going on.\n - First off, there is no caption. This is a major omission as the figure is definitely not self-explanatory.\n - The blue highlights draw the reader’s attention to spurious features of the task (noise nodes) instead of the actual purpose of the task (predicting values of causal nodes).\n- Other comprehension/clarity issues in Sec. 3:\n - “We assume op(e) represents operations like (+a) or (−b)” Does this mean addition/subtraction are the *only* possible operations?\n - I don’t understand how the merge operation works from the description.\n - Some unconventional choices of notation, such as using $f$ as an index over few-shot examples.\n - What is “downside processing” - do you mean “downstream”?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024are,\ntitle={Are Transformers Able to Reason by Connecting Separated Knowledge in Training Data?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Xg4JPPxJ0},\nnote={under review}\n}"
},
"abstract": {
"value": "Humans exhibit remarkable compositional reasoning by integrating knowledge from various sources. For example, if someone learns ( B = f(A) ) from one source and ( C = g(B) ) from another, they can deduce ( C = g(f(A)) ) effortlessly, even without encountering ( AC ) or ( ABC ) together, showcasing their compositional generalization ability. In this paper, we introduce a learning task, \"FTCT\" (Fragmented at Training, Chained at Testing), to assess if Transformers can replicate this skill. In the training phase, data consist of separated knowledge fragments from an overall causal graph. During testing, Transformers must infer complete causal graph traces by integrating these fragments. Our findings demonstrate that few-shot Chain-of-Thought prompting enables Transformers to perform compositional reasoning by revealing correct combinations of fragments, even if such combinations were absent in the training data. Furthermore, the emergence of compositional reasoning ability is strongly correlated with the model complexity and data's relative knowledge ratio. We propose, both theoretically and empirically, that Transformers learn an underlying generalizable program from training, enabling effective compositional reasoning during testing."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Transformer; Chain-of-Thought; In-Context-Learning; Compositional Generalization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/cb4822c7c17fddb5cca6f4a1f589c8e11ec6b09a.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/b8c4e05383b92ddd3e548442492bdbb451408314.zip"
},
"title": {
"value": "Are Transformers Able to Reason by Connecting Separated Knowledge in Training Data?"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1XxNbecjXe | Soft Prompts Go Hard: Steering Visual Language Models with Hidden Meta-Instructions | main | Active | security;machine learning;adversarial perturbations;large language models | alignment, fairness, safety, privacy, and societal considerations | 3;5;5;6 | 4;4;4;4 | 3;2;3;2 | 2;2;2;3 | 3;3;3;3 | 4.75 | 4 | 2.5 | 2.25 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
        "value": "1. Figures clearly illustrate the point of the paper.\n2. The writing is easy to follow.\n3. Articulates the attack model and assumptions.\n4. Runs transferability tests."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduced an attack that enables adversaries to add stealthy “meta-instructions” to images that influence how visual language models respond to queries about these images"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "1. L33, \"but injection attacks in non-text modalities are a new, yet-to-be-explored area of LLM safety research\". This type of attack has been widely explored in [1] and [2].\n2. L81, \"users are victims of adversarial third-party content that they ask the model to process\". I'm curious whether the images are generated by the users or not. If the user creates the line chart shown in Fig. 1 on their local device, does the attack studied in the paper still apply?\n3. Table 4, why is the transfer rate of LLaVA on negative as low as 0.1?\n4. I'm curious what will happen if the system prompt of the VLM contradicts the meta-instruction in the image.\n5. Overall, I think the paper is of good quality. The major downside is the novelty: we already know from previous work that optimizing the input image towards a certain attack target is feasible for VLMs, so this is not a new vulnerability in VLMs. Though the authors attempt to differentiate their attack setting from previous jailbreaking and soft prompt attacks, the overall attack surfaces and methods remain largely the same. I would like to see more insights coming from the paper.\n\n\n[1] Are aligned neural networks adversarially aligned?\n[2] A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
        "value": "I summarize some of my comments on the weaknesses of the paper into the questions below:\n\n1) Do the authors agree with my comments about their portrayal of previous works, and if so, what steps are the authors taking to address this? Concretely, which sections of the paper have been rewritten?\n2) Have the authors been able to run the suggested experiments I have mentioned above, and if so, what did they find?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
        "value": "## Originality\n\nThe question of prompt injection vulnerabilities in large language models is of significant importance. The authors demonstrate that models are vulnerable to similar attacks through their vision input as are possible through their text input. What's more, they show the vulnerability is in some cases worse through the image input.\n\nWhilst the idea of providing meta-instructions through image inputs is not entirely novel (see weaknesses section), this paper is the most thorough treatment of the subject that I am aware of, and brings to light new and concerning ways that a model's output can be tampered with using images.\n\n## Quality and clarity\n\nThe paper is well written and the method is conveyed clearly. The results section contains a good depth of experiments, most importantly covering a number of popular open-source VLMs and target meta-instructions.\n\n## Significance\n\nAs VLMs are used more frequently for agentic tasks that will expose them to untrusted data from the internet, prompt injection / meta-instruction attacks will become more and more concerning. Thus the paper concerns a timely and interesting threat model that the adversarial attack community should be exploring in more detail."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "The paper introduces a method to create image inputs to Vision Language Models (VLMs) that lead said model to respond to any user query appended to the image with a certain \"spin\", e.g. responding with a certain sentiment, or in a certain language. The authors refer to this as embedding a \"meta-instruction\" in an image.\n\nCritically, a meta-instruction attack is only successful if the model's response to the user's query (and the attacked image) answers the query whilst following the meta-instruction (e.g., if the meta-instruction was \"talk in French\" and the model responded in French but did not answer the user's query, then this would not be a successful attack).\n\nTo train these meta-instruction attacks, the authors perform projected gradient descent on an image to minimize the language modeling loss of the VLM inputted with this image over a dataset of synthetic question-answer pairs, with the answers following some target natural language meta-instruction.\n\nThe results of the paper demonstrate that this method can be used to learn adversarial images for various different types of meta-instructions. The authors also demonstrate a non-trivial transfer of meta-instruction images between models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "While the critique is long, this is only because I believe the paper has interesting results that could be improved.\n\n## Presentation of previous work\n\nThe authors make a number of claims about prior work that I believe are not completely accurate. Editing the language around these claims would help to improve the paper. Here are some examples that I believe need to be addressed:\n\n- Line 32 - \"But injection attacks in non-text modalities are a new, yet-to-be-explored area of LLM safety research.\" I do not think this is entirely true. For example, Bailey et al. [1] explore how to train an image to convey a certain text prompt, which they demonstrate can be a prompt injection attack.\n- Line 83 - \"By design, jailbreaking and adversarial examples produce contextually incoherent outputs that do not actually answer users’ questions about images.\" I think this depends on how you define an image jailbreak. For example, Dong et al. [2] produce adversarial perturbations to harmful images that lead to a model answering coherently about said image --- in particular, the model is able to correctly identify what is in the image. While the authors' claim here is correct for other image jailbreaking work, such as Qi et al. [3] who learn images unrelated to the harmful request they are trying to elicit a response about from the model, it is not universally correct. For this reason the claim should be softened.\n- Line 84 - \"They [jailbreaking and image adv attacks] are not stealthy and cannot be used for indirect attacks because users would notice that the VLM’s outputs are wrong given the conversation context and inputs.\" Bailey et al. [1] and Qi et al. [3] both demonstrate methods to create jailbreaking images under epsilon-ball constraints, which is the definition of stealthiness the authors use on line 290.\n\n## Novelty / originality\n\nFollowing on from some of the comments above, I believe there is a question about the novelty / originality of this work.\n\nIn particular, the general algorithm presented to produce meta-instruction attacks essentially involves creating a dataset of input-output pairs and training an image by PGD to maximize the likelihood over this dataset. This method appears to fit into the \"Behavior Matching\" algorithm from Bailey et al. [1].\n\nDespite this, I believe the work does contain novel and important contributions. In particular:\n1. The study of the changes in semantic meaning present in images from various different attacks, with meta-instruction attacks preserving meaning.\n2. The transfer experiments in Table 4 are very interesting.\n3. This is the most thorough treatment of prompt injection image attacks I have seen.\n\n## Summary\n\nCombining the above two points, I believe the paper needs to be rewritten to more clearly lay out its novelty and more accurately represent its contribution. My high-level suggestions would be:\n1. Make it clear that prior works have examined prompt-injecting image attacks; however, yours is a more complete treatment of the topic.\n2. Make it clear that your method to create such attacks is a special instance of what prior works have introduced.\n3. From this, your novelty comes not from the method but rather the results. E.g., line 88, which reads \"We design, implement, and evaluate a method for creating a new type of image perturbations that act as cross-modal soft prompts for a language model while preserving the visual semantics of the image.\", needs to be adjusted.\n4. Given that I do not think the method is novel, I would suggest running the following additional experiments:\n\t1. In Table 4, add transfer results to Claude and GPT-4o. These results should feature in the transferability experiment.\n\t2. More detailed defense experiments. Appendix C shows fairly simple defenses can work to avoid meta-instruction attacks. [1] finds that training perturbations under different constraints (e.g., a moving patch) ends up being more robust to simple defenses. It would be interesting to see if this result is reproducible in your setting.\n\nTo reiterate, I think studying prompt-injection images to models is important, and the authors present valuable results. I thank the authors for their hard work!\n\n\n[1] - Bailey, Luke, et al. \"Image hijacks: Adversarial images can control generative models at runtime.\" arXiv preprint arXiv:2309.00236 (2023).\n\n[2] - Dong, Yinpeng, et al. \"How Robust is Google's Bard to Adversarial Image Attacks?\" arXiv preprint arXiv:2309.11751 (2023).\n\n[3] - Qi, Xiangyu, et al. \"Visual adversarial examples jailbreak aligned large language models.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 19. 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The concept of embedding hidden meta-instructions within images offers a new approach to prompt injection for multi-modal models, highlighting a potential vulnerability not extensively covered in existing literature.\n\n2. It is interesting to see how the method reveals hidden capabilities of instruction-tuned models. In some cases, the meta-instructions successfully steer the model's outputs in ways that explicit instructions fail to achieve.\n\n3. The study provides an empirical evaluation on a range of meta-objectives (e.g., sentiment, language, and political bias), demonstrating the effectiveness of the attack method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new type of attack on visual language models. These attacks, termed meta-instruction attacks, involve subtle image perturbations that act as soft prompts to influence how a model interprets images and responds to queries. The idea is to steer the model’s outputs to satisfy adversary-chosen objectives, such as a specific sentiment, style, or political bias, without the user being aware of the manipulation. The authors demonstrate the effectiveness of this approach across various visual language models, showing that these perturbations often outperform explicit instructions and are transferable across models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper's reliance on just five images from a single dataset, ImageNet, limits the robustness and generalizability of its evaluation. ImageNet, which is primarily focused on object recognition, may not adequately represent the diversity and complexity of images encountered in real-world scenarios. Incorporating evaluations on datasets with more varied and complex scenes, such as MSCOCO, would provide a more comprehensive assessment of performance.\n\n2. The paper simulates user interaction by generating questions to test meta-instructions, but it provides limited clarity on whether these questions adequately cover a broad range of natural user queries. Limited prompt diversity may affect the robustness of the attack if VLMs encounter different prompts in real-world scenarios.\n\n3. Since the meta-instruction is added as noise to the image, the paper does not demonstrate the effectiveness of meta-instructions against recent inference-time defense methods like DISCO[1], DiffPure[2], and IRAD[3]. This could be valuable for understanding its performance in the context of contemporary robustness strategies.\n\n[1] DISCO: Adversarial Defense with Local Implicit Functions.\n[2] Diffusion models for adversarial purification.\n[3] IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could you also provide an evaluation when random resizing or cropping is applied? Since this paper addresses practical concerns, it would be valuable to test your method under common “defenses” encountered in everyday scenarios.\n\n2. Are there any failure cases? For example, are there meta-instructions that are particularly difficult to achieve?\n\n3. Why is it necessary to evaluate cosine similarity as done in Section 5.3? Could you clarify the relevance of this metric?\n\n4. Is there an evaluation that checks whether the generated textual outputs remain consistent with the input images?\n\nOverall, I appreciate the practical focus of this paper. I would be happy to raise my evaluation if these concerns are addressed."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The focus on the dissemination of seemingly coherent misinformation is highly practical and addresses a significant real-world concern.\n\n2. The evaluation is thorough, including robustness testing against JPEG compression as a defense (which I suggest moving to the main text, given its practicality in everyday use) and examining the transferability of the attack across different vision-language models (VLMs)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new attack objective in which the output text remains consistent with the input images but adopts an adversary-chosen style, sentiment, or point of view. The adversarial optimization is applied to the input image, ensuring that the modifications are imperceptible to humans. Experiments demonstrate that images containing hidden meta-instructions achieve significantly higher success rates compared to those with explicit instructions. This attack highlights a practical risk, as it enables the dissemination of seemingly coherent but misleading information."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. A NeurIPS 2024 paper [1] also explores the dissemination of seemingly coherent misinformation in visual language models, but through the lens of data poisoning. While this paper focuses on test-time adversarial attacks, it would be beneficial to discuss the key differences between test-time attacks and training-time poisoning, and in what scenarios each is more practical, given the similarity in objectives between the two papers.\n\n2. The evaluation of image semantics preservation seems suboptimal. In Section 5.3, semantics are defined using cosine similarity between images, but it is unclear why this metric is particularly relevant. A more meaningful evaluation would assess how well the actual text output of the visual language model aligns with the input images, which is the core focus of this paper—consistent outputs with images but in adversary-chosen styles, sentiments, or viewpoints.\n\n\nReference:\n[1] Xu, Yuancheng, et al. \"Shadowcast: Stealthy data poisoning attacks against vision-language models.\", The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce a new type of indirect, cross-modal injection attacks against VLMs to influence how the model interprets the image and steer its outputs to express an adversary-chosen style, sentiment, or point of view."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024soft,\ntitle={Soft Prompts Go Hard: Steering Visual Language Models with Hidden Meta-Instructions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1XxNbecjXe},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce a new type of indirect, cross-modal injection attacks against language models that operate on images: hidden \"meta-instructions\" that influence how the model interprets the image and steer its outputs to express an adversary-chosen style, sentiment, or point of view. We create meta-instructions by generating images that act as soft prompts. In contrast to jailbreaking attacks and adversarial examples, outputs produced in response to these images are plausible and based on the visual content of the image, yet also satisfy the adversary's (meta-)objective. We evaluate the efficacy of meta-instructions for multiple models and adversarial meta-objectives, and demonstrate how they \"unlock\" capabilities of the underlying language models that are unavailable via explicit text instructions. We describe how meta-instruction attacks could cause harm by enabling creation of self-interpreting content that carries spam, misinformation, and spin."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"security",
"machine learning",
"adversarial perturbations",
"large language models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fb4adaa1ef6d33d20980028e22c625f0e811905a.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Soft Prompts Go Hard: Steering Visual Language Models with Hidden Meta-Instructions"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1XzTxtezgj | Intervention-based Causal Discrimination Discovery and Removal | main | Active | Fairness;Causal inference;Intervention-based metric | alignment, fairness, safety, privacy, and societal considerations | 3;3;5;5;6 | 4;4;3;3;2 | 2;3;3;2;3 | 2;2;2;2;3 | 2;1;3;2;3 | 4.4 | 3.2 | 2.6 | 2.2 | 2.2 | -0.979958 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please address the weaknesses above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper proposes a novel notion to measure causal fairness. This notion makes intuitive sense and seems easy to implement. \n- The paper proposes a new algorithm to train a model, where the causal fairness notion is cast as a regularization term.\n- On several empirical datasets, the proposed algorithm seems to perform best in terms of causal fairness, as compared to several benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel causality-based fairness notion called post-intervention Cumulative Ratio Disparity (ICRD) to assess the causal fairness of the decision models, and then presents a causal framework based on ICRD. The theoretical and empirical results show that ICRD can assess causal fairness and the causal framework can better balance accuracy and fairness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "- It seems that the proposed algorithm is not very competitive compared to benchmarks if one primarily cares about conventional metrics, e.g., K-fair and accuracy.\n- The theoretical results are quite intuitive, and the proofs are straightforward. It would be helpful to clarify the contributions of the paper and why they are nontrivial to obtain.\n- The references of this paper do not contain a single ICLR paper. It would be helpful to better demonstrate the fit of this paper to ICLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Since the main contribution of this paper is to build on the K-fair definition, what could you do specifically to include a more comprehensive and clear explanation of K-fair? How is the set of contexts C chosen, and which contexts were used in the experiments?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The overall setting is well-chosen and the contribution appears to be solid."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper adds to the causal fairness literature by proposing a new metric to measure unfairness and a strategy for training fair models. It follows previous work by Salimi et al. (2019) and Ling et al. (2023) on interventional fairness or K-fairness. An algorithm is K-fair if interventions on the sensitive attribute do not change the predictions, while also causally conditioning on a given context K. The current paper extends this definition by applying a 1-Wasserstein distance to the difference between the interventional distributions, with interventions on the sensitive attribute. The proposed training strategy is empirical risk minimization with a penalty term added using the aforementioned 1-Wasserstein distance. The paper includes a few basic theoretical results and experiments comparing the method with several alternate methods on several datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "Compared to existing work, I believe this paper is somewhat incremental; the novelty is not high. The experiments are OK. The presentation and explanation of both the current work and its context in the related literature are not very clear."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the questions in Weaknesses part."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors clearly illustrated the limitations in existing interventional fairness metrics, and the related works section is comprehensive and easy to follow.\n\n2. The proposed formulation of ICRD is sound and the authors provide the theoretical analysis on how ICRD addresses the limitations of existing causal fairness metrics.\n\n3. The authors proposed a fairness framework, ICCFL, which incorporates a differentiable approximation of the ICRD metric to enable efficient training."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new fairness metric called Intervention-based Cumulative Ratio Disparity (ICRD), which aims to address limitations in existing causal fairness metrics (K-Fair) by measuring cumulative causal effects along prediction probabilities by intervening on sensitive attributes. Additionally, the authors propose a fairness framework, ICCFL, which incorporates the ICRD metric to train fairer models. Through theoretical and empirical analyses, the paper demonstrates that ICCFL better balances fairness and accuracy than existing fairness-aware algorithms across multiple datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed method assumes the causal model is known, which may be a strict assumption. It would be great for the authors to discuss the sensitivity of the proposed metric and framework to potential causal graph misspecification.\n\n2. This paper assumes the sensitive attribute is binary. Could the proposed metric be extended to handle multiple sensitive attributes?\n\n3. The method leverages causal generative models to infer the distribution of exogenous variables. It would be useful to explore the robustness of the approach when estimating interventional distributions with different causal generative models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does the ICRD notion address limitation 1?\n\n2. Why does interventional fairness have fewer identifiability challenges compared to the counterfactual fairness and path-specific fairness, as mentioned on line 228, page 5? \n\n3. Can the ICCFL method be compared with any benchmark methods using other causal fairness notions, such as, path-specific fairness? This might reveal interesting observations on the comparison between ICRD and other non-intervention based causal fairness measures."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper improved the existing interventional fairness notion, K-Fairness, in a comprehensive way that both develops a new fairness notion and proposes an algorithm for applying the new fairness notion. The authors also provided relevant theoretical support for the validity of both ICRD and ICCFL, which add to the technical soundness of the paper. \n\n2. The paper provided useful details in the experiment evaluation of the ICCFL method: Section 5.3 offered empirical evidence for the benefit of ICRD, and Section 5.4 discussed observations related to hyperparameter choice in ICCFL."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Motivated by shortcomings of existing interventional fairness notions, this paper proposed a new causality-based fairness notion called post-Intervention Cumulative Ratio Disparity (ICRD). ICRD measures the cumulative causal effects along prediction probabilities by intervening on the sensitive attribute. The authors explained ICRD’s superior properties over existing intervention causal fairness notions. Additionally, they developed a new fairness framework based on ICRD: Intervention-based Cumulative Causality Fairness Learning approach (ICCFL) formulates a constrained optimization problem where the ICRD metric is included in the prediction loss of the model. Empirical evidence from comparing ICCFL with several benchmark methods demonstrated that ICCFL could attain better causal fairness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Given that the differences between the K-Fair notion and the new ICRD notion are somewhat subtle, the paper could benefit from clearer explanations. For example, Example 1 used to discuss limitation 1 might be applied again after introducing ICRD to illustrate how ICRD applies here, such as, what are the possible contexts C in this example. On a related note, although the ICRD notion has clear advantages over the K-Fair notion, it is unclear whether these advantages alone justify adding ICRD to the already large number of causal fairness definitions. It would be helpful to discuss the benefits of ICRD as a causal fairness definition in general. \n\n2. The ICRD notion centers on disparity in the cumulative causal effects. This is not necessarily desirable for understanding discrimination, as we may be more interested in dissecting the causal effects associated with specific scenarios. It would be helpful to discuss potential insufficiencies of the ICRD notion, for example, when ICRD may not be identifiable, when enforcing ICRD to be 0 may be too restrictive for fairness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the above weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- It is reasonable and meaningful to uncover the limitations of the existing fairness notions and propose a new one.\n- The experimental results show the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper demonstrates the limitations of the existing interventional fairness and then proposes a new causal fairness metric called Intervention-based Cumulative Rate Disparity (ICRD). ICRD aims to measure the post-intervention cumulative causal effects along the prediction probabilities for any intervention on the context. In addition to defining this metric, the authors propose an algorithm designed to achieve ICRD."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The motivation behind ICRD is somewhat ambiguous. Specifically, regarding sufficiency, how is a condition defined as ‘sufficient’ for evaluating causal fairness? It seems that the sufficiency aspect depends significantly on the particular causal fairness definition in use, and the current explanation feels unclear on this point. The insufficiency aspect could benefit from greater elaboration.\n- Lines 313–314 state that “ICRD encompasses K-Fair and represents the cumulative causal effect of K-Fair across all decision thresholds,” but this claim is difficult to interpret without additional clarification. Similarly, it is unclear how Table 1 was generated or how the decision threshold impacts outcomes. Could the authors further clarify these aspects?\n- Finally, I am unconvinced that the decision threshold’s impact constitutes a limitation of K-Fair. K-Fair requires two distributions to be equivalent; hence, it is unclear how the decision threshold would influence this requirement. More discussion on this would be valuable to fully understand the claimed limitation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024interventionbased,\ntitle={Intervention-based Causal Discrimination Discovery and Removal},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1XzTxtezgj},\nnote={under review}\n}"
},
"abstract": {
"value": "Causal inference is a recent and widely adopted paradigm to deal with algorithmic discrimination. Building on Pearl's structure causal model, several causality-based fairness notions have been developed, which estimates the unfair causal effects from the sensitive attribute to the outcomes by incorporating the intervention or counterfactual operators. Among them, interventional fairness (i.e., $K$-Fair) stands out as the most fundamental and broadly applicable concept that is computable from observantional data. However, existing interventional fairness notions fail to accurately evaluate causal fairness, due to their following inherent limitations: (i) the causal effects evaluated by interventional fairness cannot be uniquely computed; (ii) the violation of interventional fairness being zero is not a sufficient condition for a causally fair model. To address these issues, we firstly propose a novel causality-based fairness notion called post-Intervention Cumulative Ratio Disparity (ICRD) to assess causal fairness of the decision models. Subsequently, we present a fairness framework (ICCFL) based on the proposed ICRD metric. ICCFL firstly generates interventional samples, and then computes the differentiable approximation of the ICRD to train a causally fair model. Both theoretical and empirical results demonstrate that the proposed ICRD effectively assesses causal fairness, and ICCFL can better balance accuracy and fairness."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Fairness",
"Causal inference",
"Intervention-based metric"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/87b17721caab96660a85c1b975f662864c76561f.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Intervention-based Causal Discrimination Discovery and Removal"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Y5hMMuCFU | Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch | main | Active | large language models;mathematical reasoning;data synthesis | generative models | 3;5;5;6 | 4;3;5;4 | 3;3;2;3 | 2;2;2;2 | 2;2;2;3 | 4.75 | 4 | 2.75 | 2 | 2.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- The authors should compare different base models in Figure 5 and Table 2.\n\n- The experimental setup in the experimental module should be clearly presented; for instance, in Table 2, did the responses corresponding to questions from other datasets involve generating five responses and filtering down to one based on the reward model, or was only one response generated?\n\n- The authors might discuss the effects of optimizing different question data volumes during QPO. Additionally, since the authors note that optimizing for both solvability and difficulty simultaneously in QPO is challenging, are there corresponding experimental results to support this?\n\n- The author should probably compare the generated questions with the questions in the test set (n-grams or other methods) to prevent potential data leakage."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This article focuses on synthesizing mathematical problems using open-source large language models, which is an important topic. The fine-tuning and filtering techniques proposed by the authors demonstrate some effectiveness.\n- The article presents a thorough and detailed set of experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel framework for generating high-quality reasoning datasets using smaller open-source models. The primary focus is on addressing the challenges of synthesizing high-quality data at scale with affordable costs. \n\nKey contributions of the paper include:\n\n- The authors present a scalable data synthesis method that enables the generation of 1 million question-answer pairs without relying on extensive seed data or complex augmentation techniques.\n\n- The framework incorporates a two-stage process consisting of Question Fine-Tuning (QFT) and Question Preference Optimization (QPO), which enhances the question generation capabilities of the models.\n\n- The paper demonstrates that models fine-tuned with the ScaleQuest dataset achieve significant performance gains compared to baseline models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed Question Preference Optimization (QPO) appears to be less effective; as shown in Figure 5, the difference between QPO and QFT is minimal, raising questions about the validity of QPO.\n\n- This paper attempts to extract training data from models, similar to the approach of MAGPIE. Therefore, the authors should conduct a more fair and detailed comparison between Question Fine-Tuning (QFT) and direct prompting methods. In Figure 5, the authors generated 1 million question-response pairs using MAGPIE with Qwen2-Math-7B-Instruct as the \"raw data\" setting. However, the other settings filtered 2 million( from DeepSeekMath-QGen and Qwen2-Math-QGen) questions down to 1 million and applied a reward model to filter the responses. Consequently, it is difficult to determine whether QFT is more effective than the MAGPIE method or if the filtration of questions and responses is more effective.\n\n- The ablation experiments are insufficient. The authors conducted experiments only on Llama3-8B, rather than comparing all four base models as presented in the main table. \n\n- The authors suggest that the data generation method proposed in this paper can produce diverse and high-quality questions at a lower cost. However, with advancements in open-source models, previous sample-driven and knowledge-driven question synthesis models can also be replaced with open-source models. Moreover, Qwen2-Math, as a response synthesizer, demonstrates superior mathematical capabilities compared to earlier versions of GPT-4. Therefore, it is difficult to assert that the data synthesis approach presented in this paper is superior to other methods in cost."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "There are plenty of LLMs used in the data synthesis pipeline: DeepSeekMath- 7B-RL , Qwen2-Math-7B-Instruct, GPT-4o-mini, GPT-4o, DeepseekMath-7B-Base, InternLM2-7B-Reward. Can you provide a Table for all the settings? Is there any specific reason to select different LLMs for different stages?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper provides a cost-effective data synthesis method for math reasoning problems. \n2. The synthetic dataset can boost the performance of multiple open-source models in both in-domain and out-of-domain evaluation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a scalable data synthesis method, ScaleQuest, for math reasoning. The augmented math datasets can enhance the model performance of mainstream open-source models such as Mistral, Llama3, DeepSeekMath, and Qwen2-Math. After finetuning the proposed dataset, the small open-source models can even outperform closed-source models such as GPT-4-Turno and Claude-3.5 Sonnet"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The main weakness of this paper is, that the proposed data synthesis pipeline is too complex and may be domain-specific. It includes the training in question fine-tuning, question preference optimization, the inference for solvability and difficulty check, reward scoring, etc. Although the API and training cost is not as expensive as GPT-4, this method is more time-consuming and requires extra effort to adapt to other domains. \n2. The proposed data synthesis method is only evaluated in the math domain. It is unsure whether this method can be easily adapted to other domains such as code or logical reasoning. Specifically, can the question finetuning and question preference optimization trained on the math domain be directly used for other domains, or the extra finetuning for each domain and each stage is needed? \n3. The experimental results are not easy to interpret: \n(i) For the baselines with different synthetic datasets, are they finetuned on the same scale of training examples? \n(ii) What does the Percentage and Accuracy in Figure 5 mean? Where is the legend of the left plot of Figure 5? \n(iii) What does the question quality in Table 2 refer to? \n4. There are many components in the data synthesis pipeline, but the impact of each component is not clear. For example, what if removing the question preference optimization and directly using the solvability filtering and difficulty sampling? This is different from the ablation study, which compares the performance w/ and w/o reward filtering while keeping all other components the same."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How did the authors select the base difficulty filtering model for fine-tuning (Lines 222-239) and the reward filtering model (Lines 251-252)? Considering that filtering significantly impacts final data quality (Figure 5), further discussion of criteria for model selection, along with any experimental comparisons, would enhance clarity on whether these models represent optimal choices.\n\n2. In Table 1, the term “Synthesis Model” in the header needs clarification. Does it refer to the model used for both question and response generation, or only response generation? This ambiguity is notable, especially as fine-tuned models such as Deepseek-QGen and Qwen2-Math-QGen are absent from the table. \n\n3. The left bar chart in Figure 5 has a confusing y-axis. Does the percentage indicate solvable/non-solvable or easy/difficult ratios? If it reflects these ratios, how does this relate to the five difficulty levels introduced in Lines 377-406? Detailing this connection would make the difficulty and solvability metrics clearer.\n\n4. Lastly, while evaluating synthesized data via difficulty distribution and solvability is helpful, a rigorous human evaluation on a random subset would better demonstrate ScaleQuest’s quality. Including human assessments of clarity, coherence, and real-world relevance could provide a nuanced verification of the synthesized data's effectiveness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. ScaleQuest targets data synthesis for instruction tuning, focusing on affordable low-cost methods. This approach demonstrates significant cost savings (Section 3.4), making large-scale data creation more accessible for open-source communities.\n\n2. The study includes thorough experimentation with multiple baselines, assessing both question and response quality across a total of four mathematical problem-solving benchmarks, thereby increasing the credibility of ScaleQuest.\n\n3. The paper is well-structured and quite easy to follow, with sufficient implementation details to enhance reproducibility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents ScaleQuest, a scalable and cost-effective data synthesis framework designed to enhance the mathematical problem-solving capabilities of large language models (LLMs). Motivated by the need for high-quality, large-scale data, the authors propose a two-stage synthesis process. Specifically, ScaleQuest employs Question Fine-Tuning (QFT) to activate question-generation (QG) capabilities in small base models and Question Preference Optimization (QPO) to improve question solvability and difficulty. This is followed by filtering for language clarity, difficulty, and solvability, as well as reward-based response selection to ensure high-quality outputs. Experiments demonstrate that models fine-tuned with the ScaleQuest dataset outperform several baselines on benchmarks, achieving substantial improvements in accuracy across in-domain and out-of-domain mathematical reasoning tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. As claimed by the authors in Lines 17-20 and 76-80, the main contribution of the paper is the scalable synthesis method, ScaleQuest. However, the method heavily depends on domain-specific fine-tuning and specialized models, which raises questions about its generalizability and applicability to domains beyond mathematical reasoning. For instance, the authors use Question Fine-Tuning (QFT) and Question Preference Optimization (QPO) to optimize the question generation process within the target domain of mathematical reasoning. Furthermore, the method involves components like solvability filtering, difficulty sampling, and reward filtering, each relying on different models and a specialized fine-tuned difficulty scorer, which appear tailored to mathematical data construction. This reliance on fine-tuned, domain-specific models, while effective in the tested domain, makes it challenging to adapt ScaleQuest to broader applications, potentially limiting its utility as a general-purpose data synthesis method.\n\n2. Additionally, the paper appears to make some overclaims regarding its scope and efficiency. While the title suggests an enhancement of \"reasoning capability,\" the paper narrowly addresses mathematical reasoning tasks, with little consideration given to other reasoning types, such as causal, commonsense, or logical reasoning. The claim of using “small-size” models (Lines 18-19) is also somewhat misleading. Specifically, the QPO stage (Lines 199-202) requires a larger model, GPT-4o-mini, to achieve better preference-based filtering, suggesting that smaller models alone may not fully support the quality goals of ScaleQuest. The ablation results (Figure 5) further highlight the critical role of QPO, reinforcing the notion that the trade-off between model size and final data quality is not fully acknowledged, which impacts the efficiency claims of the method.\n\n3. 
Lastly, despite the authors’ assertions that ScaleQuest-generated data significantly enhances performance across various benchmarks, the observed improvements are marginal. For instance, Table 1 shows only a slight average increase from 62.7 to 62.9 when comparing Qwen2-Math-7B-ScaleQuest to its baseline Qwen2-Math-7B-Instruct, even with a decrease in performance on the CollegeMath benchmark. These limited gains suggest that the effectiveness of ScaleQuest’s synthesized data may not justify its complexity. Consequently, these modest gains raise concerns about the practical value and impact of the ScaleQuest approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Typo: Filering -> Filtering in line 215\n\nIn Figure 5, it seems that QPO is less effective. Does the author try the combination of QFT and reward filtering only?\n\nI am curious about the effectiveness of Solvability Filtering and Difficulty sampling. For Solvability Filtering, it seems that the final dataset still does not have perfect quality but produces a good performance. So I am curious about the influence of the quality. For difficulty sampling, I am not sure why we need to fit certain difficult distributions."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "As for the method, ScaleQuest generates questions independently from scratch, removing dependency on existing question datasets, which enhances question diversity and supports scalability. Also, the paper integrates comprehensive filtering techniques, including language, solvability, and difficulty sampling, which could be a good reference for future efforts in data filtering.\n\nThe presentation is very clear, the workflow of the method is easy to follow. All the details such as prompts are all clearly given. The authors said they will release the data and code, which will be a useful resource to the community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a synthetic training data generation method for mathematical LLMs. Based on two small models at a 7B scale, the authors achieve state-of-the-art performance than other models trained with the data from larger LMs. The proposed method including question supervised fine-tuning, question preference tuning and reward-score-based selection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main experiments in Table 1 are somehow not very fair. Some of the baseline methods contain less data than the used dataset in the paper. \n\nIn Table 1, it seems that Qwen2-Math-7B-ScaleQuest achieves similar performance with Qwen2-Math-7B-Instruct, I am wondering if their performance is similar on OOD test sets like GSM-hard (https://huggingface.co/datasets/reasoning-machines/gsm-hard) and MathChat (https://github.com/Zhenwen-NLP/MathChat). I would like to see if Qwen2-Math-7B-ScaleQuest is over-fitting on GSM and MATH style questions.\n\nFor the efficiency result, it seems that the cost is similar to (even slightly higher) GPT-4o mini if we put that in the table. I am wondering why the authors choose models like Qwen2-Math-7B instead of GPT-4o mini for solvability & difficulty check, etc."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024unleashing,\ntitle={Unleashing Reasoning Capability of {LLM}s via Scalable Question Synthesis from Scratch},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Y5hMMuCFU},\nnote={under review}\n}"
},
"abstract": {
"value": "The availability of high-quality data is one of the most important factors in improving the reasoning capability of LLMs. \nExisting works have demonstrated the effectiveness of creating more instruction data from seed questions or knowledge bases.\nRecent research indicates that continually scaling up data synthesis from strong models (e.g., GPT-4) can further elicit reasoning performance.\nThough promising, the open-sourced community still lacks high-quality data at scale and scalable data synthesis methods with affordable costs.\nTo address this, we introduce ScaleQuest, a scalable and novel data synthesis method that utilizes ``small-size'' (e.g., 7B) open-source models to generate questions from scratch without the need for seed data with complex augmentation constraints.\nWith the efficient ScaleQuest, we automatically constructed a mathematical reasoning dataset consisting of 1 million problem-solution pairs, which are more effective than existing open-sourced datasets.\nIt can universally increase the performance of mainstream open-source models (i.e., Mistral, Llama3, DeepSeekMath, and Qwen2-Math) by achieving 29.2\\% to 46.4\\% gains on MATH.\nNotably, simply fine-tuning the Qwen2-Math-7B-Base model with our dataset can even surpass Qwen2-Math-7B-Instruct, a strong and well-aligned model on closed-source data, and proprietary models such as GPT-4-Turbo and Claude-3.5 Sonnet."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language models",
"mathematical reasoning",
"data synthesis"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b1675f039fadbf1f1605ba8da3f57c72a9e783e6.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/b0686ba48b361d2ab050bccbd8998f70b8f1a693.zip"
},
"title": {
"value": "Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1YTF7Try7H | Implicit Bridge Consistency Distillation for One-Step Unpaired Image Translation | main | Active | image translation;consistency distillation;unpaired;one-step;diffusion models | generative models | 3;5;6 | 1;3;5 | 2;3;4 | 1;2;3 | 2;2;3 | 4.666667 | 3 | 3 | 2 | 2.333333 | 0.981981 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 1
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "as above"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Pro:\n\n- The sampling speed in image-to-image translation is a critical problem in this area.\n- The paper combines various techniques, including DDIB and consistency models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Diffusion models are widely used for image translation. This paper identifies limitations in existing approaches: slow inference, need for paired data, and one-way translation constraints. It introduces Implicit Bridge Consistency Distillation (IBCD) to address these issues. IBCD extends consistency distillation with a diffusion implicit bridge model. The paper proposes two improvements: Distribution Matching for Consistency Distillation (DMCD) and a distillation-difficulty adaptive weighting method. Experimental results show IBCD achieves state-of-the-art performance in one-step generation for bidirectional translation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Con:\n\n- The main concern is that the method seems too incremental, appearing to be merely a combination of DDIB and consistency models.\n- In Table 3, the FID improvement from adding Cycle and DMCD is marginal. Is the author aware of what a 0.1 FID change means? If you repeat the experiment twice, the variance might be even larger. This becomes a significant issue when the FID is so high. Also, most baselines in Table 2 show the variance of FID, while the author didn't. As you can see, the variance of other methods is quite large, further undermining the ablation study in Table 3.\n- With only a single step, the stochasticity is significantly reduced. The authors should include several other related metrics that highlight diversity, such as the Inception Score. Additionally, more failure cases should be provided for better understanding of the method's limitations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see weakness"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The paper is well-written and it clearly explains the proposed method.\n2) The visualizations of component’s cumulative contributions on the toy dataset in Fig. (3) help appreciate the role of each part.\n3) Experiments on both toy and highdimensional datasets demonstrate the effectiveness of IBCD."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a framework called Implicit Bridge Consistency Distillation (IBCD) for unpaired image to image translation. IBCD connects PF-ODE trajectories from any distribution to another one by extending consistency distillation with a diffusion implicit bridge model. It introduces Distribution Matching for Consistency Distillation (DMCD) and distillation-difficulty adaptive weighting method to deal with the distillation errors and mean prediction problems from the consistency distillation. Experiments on translation benchmarks demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Missing comparison of the results for bidirectional translation.\n2) Missing comparison of computation cost with the existing methods to show the efficiency of the proposed method.\n3) The results in Tab. 3 show the model which added DMCD loss, cycle loss and adaptive DMCD degrades the performance in terms of PSNR and SSIM compared to the method using IBCD only.\n4) The zero in the first row of Eq. (6) might be \\chi_A\\cap\\chi_B."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The authors propose to use one generator for two domains, which may be unreasonable or hard to achieve in practice. I think the whole pipeline is compatible with two independent pre-trained DMs, i.e., one ADM on LSUN Cat and one ADM on ImageNet with some specific class."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The paper provides a versatile pipeline for unpaired image-to-image translation within only one step, which outperforms most previous methods even with large NFEs.\n\n- The theory part is clear and intuitive, and the toy data showcases the instability of vanilla IBCD clearly.\n\n- The experimental results are convincing and impressive, demonstrating directly the outperformance.\n\n- The novel adaptive weighting is interesting and effective, encouraging further study in diffusion model distillation with insight."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes to apply consistency distillation (CD) on previous DDIB, achieving a one-step generative model for unpaired image-to-image translation. The authors manage to extend the CD theory, which is applicable to two arbitrary distributions. The novel distribution matching and adaptive weighting techniques further stabilize and facilitate the training process. Both qualitative and quantitative experiments confirm the efficacy of the pipeline and the outperformance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- CD highly bases on PF-ODE, i.e., it needs to follow the score function trajectory. In Eq. (6), two PF-ODEs starting from different domains are connected together at $\\sigma_{max}$, how to guarantee the smoothness of score function (i.e., gradient) at this point (since one directly uses noisy $x_a$ to solver attached to domain B)? If not smooth, how will the error be like? The authors may provide analysis here similar to original CD paper.\n\n- In L261, the authors claim one of the challenge is to employ only local consistency. However, CTM [1] refers to local consistency as the case when applying PF-ODE solver with extremely smaller step. On the contrary, when using two adjacent timesteps, CTM names it global consistency, similar to original CD. So in the paper, this should also be called a global consistency. I can hardly understand why such strategy is a challenge, given that most distillation works use such a loss.\n\n[1] Learning Probability Flow ODE Trajectory of Diffusion. Kim et al., ICLR 2024.\n\n- The authors state that vanilla IBCD faces mean prediction phenomenon, but provides no convincing analysis on it. Original CD seems not to face such challenge. Does it come from the mismatch of two PF-ODEs? The visualization in Fig. 3(a) fails to convince me. The synthesized samples are not at the mean of domain B. Besides, I cannot see the efficacy of DMCD and cycle loss.\n\n- The ablation study is somewhat confusing. Why vanilla IBCD is only a point rather than a broken line like the others? From Tab. 3 and Fig. 6, it seems that adaptive weighting may harm the performance, which is not consistent with conclusion in toy data. Conversely, DMCD is helpful in real data but fails in toy data. The authors may need further clarification."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024implicit,\ntitle={Implicit Bridge Consistency Distillation for One-Step Unpaired Image Translation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1YTF7Try7H},\nnote={under review}\n}"
},
"abstract": {
"value": "Recently, diffusion models have been extensively studied as powerful generative tools for image translation. However, the existing diffusion model-based image translation approaches often suffer from several limitations: 1) slow inference due to iterative denoising, 2) the necessity for paired training data, or 3) constraints from learning only one-way translation paths. To mitigate these limitations, here we introduce a novel framework, called Implicit Bridge Consistency Distillation (IBCD), that extends consistency distillation with a diffusion implicit bridge model that connects PF-ODE trajectories from any distribution to another one. Moreover, to address the challenges associated with distillation errors and mean prediction problems from the consistency distillation, we introduce two unique improvements: Distribution Matching for Consistency Distillation (DMCD) and distillation-difficulty adaptive weighting method. Experimental results confirm that IBCD for bidirectional translation can achieve state-of-the-art performance on benchmark datasets in just one step generation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"image translation",
"consistency distillation",
"unpaired",
"one-step",
"diffusion models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/96ceb39175e320725b7cc3c4439a14a64fa873bb.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Implicit Bridge Consistency Distillation for One-Step Unpaired Image Translation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1YXkDXIqVw | Guided Stream of Search: Learning to Better Search with Language Models via Optimal Path Guidance | main | Active | planning with language models;supervised fine-tuning with self-generated data;reinforcement learning fine-tuning | reinforcement learning | 3;3;5;5 | 3;4;3;4 | 3;2;2;3 | 2;2;2;3 | 3;3;2;3 | 4 | 3.5 | 2.5 | 2.25 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- When conditioning the optimal path with a partial exploration path, is it equivalent to a self-reflection process (and the the self-reflection succeeds with one reflection trial)? If so, what is the novelty of GSoS over a RL reflection-tuning method, or RL finetuning with Chain-of-Hindsight [1]?\n - Furthermore, have the authors tried to compile the trajectories by exploring more than one non-subgoal node in advance of the subgoal node, and ablate the effect with those containing only one non-subgoal node ahead of each corresponding subgoal node?\n\n [1] Liu et al., Chain of Hindsight Aligns Language Models with Feedback. ICLR 2024.\n\n- The effectiveness of GSoS is only demonstrated on one benchmark. The proposed method should be benchmarked on more scenarios to demonstrate its superiority.\n\n- In Lines 192-194, it is claimed that \"Fine-tuning on these trajectories may lead to significant changes in the model’s weights, potentially degrading its search and planning abilities. Therefore, it is crucial to explore methods for effectively integrating optimal solutions to produce trajectories that maintain both high likelihood and quality.\" It would be beneficial if the authors provide more experimental supports for why direct finetuning leads to the degradation of the search and planning abilities. Specifically, if it is supported by the main experiments where GSoS outperforms SoS, additional qualitative analysis and case studies are needed for the direct comparison between GSoS and SoS, and it would be helpful to provide cases where GSoS+finetuning succeeds while SoS+finetuning fails.\n\n- In Lines 306-307, it is demonstrated that \"even when multi-step returns with GAE are used for training the value function.\" It would be beneficial if the authors could show the experiments that verify this claim.\n\n- In Line 5 of Algorithm 2: what is M(y|x)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This paper studies complex reasoning and planning of LLMs, which is an important topic in LLM research.\n- The idea of integrating more exploratory trajectory segments into the context of the optimal subgoal makes sense, as it steers LLMs to learn to pivot to the optimal path.\n- Setting up the RL training on the operation level effectively accelerates the learning process, which is supported by comparison experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Guided Stream of Search (GSoS), a novel method that combining the optimal path as well as the search trajectories of a searching scenario into a sequence, which is used as the training data instance for LLMs to acquire better planning and search performances. The authors have conducted experiments on Countdown, a mathematical reasoning benchmark with branching factor in square complexity of the inputs at each searching step. The experimental results demonstrated the effectiveness of GSoS, especially with RL that functions on the operation level."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please refer to the Questions listed below."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper presents a simple and intuitive approach for improving planning tasks in LLMs by incorporating optimal solutions into trajectory generation process, which enhances the quality of generated trajectories and overall training outcomes.\n2. The paper is clearly written, making the proposed method and experiment findings accessible."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce GSoS, a method to improve the planning and reasoning capabilities of language models by integrating optimal solutions within search processes. Unlike prior approaches that rely solely on self-generated, often suboptimal search trajectories, GSoS incorporates optimal solutions progressively, guiding the model toward more structured search trajectories. These trajectories are distilled through SFT, which, combined with subsequent RL training, enhances performance on the planning task Countdown."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. A key baseline—using SFT with the optimal solutions (BC)—is missing. While the authors discuss BC's limitations on unseen tasks, including it in the evaluation would provide a more comprehensive comparison, especially since the main contribution of this approach is incorporating optimal solutions into the data construction process. \n2. The proposed approach is only validated on a single test bed, Countdown, which may leave readers questioning its generalizability to other planning tasks. Including an additional test bed, such as those from Beyond a* [1], would strengthen the paper’s claims, particularly as this work builds on and seeks to improve upon SoS (Gandhi et al., 2024).\n\n**Minor Issue:**\n- Line 111: The purpose of transforming $x$ through a series of operations to obtain $\\\\hat{y}$ is unclear, as $x$ already contains both input and output states? \n\n\n[1] Beyond a*: Better planning with transformers via search dynamics bootstrapping."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well-written and easy to follow\n\n2. The proposed method is simple and intuitive\n\n3. Good experimental results and detailed analysis on Countdown benchmark"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work explores how to leverage optimal solutions to enhance the search and planning abilities of language models. The authors propose guided stream of search (GSoS), which seamlessly incorporates optimal solutions into the self-generation process in a progressive manner, producing high-quality search trajectories for training. GSoS can significantly enhance the search and planning abilities of language models on Countdown, a simple yet challenging mathematical reasoning task."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experiments are only conducted on one single benchmark. There are many other datasets requiring complex reasoning. At least one of them, such as LogiQA2, should be investigated.\n\n2. The authors use a 250M model for experiments which is quite small. For complex planning and reasoning, larger language models should be considered.\n\n3. How about the comparison to this simple baseline? For the given query, we sample plenty of trajectories from the model and construct a DAG using the sampled trajectories. Then we can sample different types of search paths from the DAG for training."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does the performance of GSoS compare with other state-of-the-art algorithms in terms of search and planning capabilities? Can leading search and planning algorithms be transferred to this benchmark and be evaluated?\n\n2. What's the reason behind choosing GPT-2 as the backbone model? Is it possible to replicate the experiments with more advanced open-sourced models?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "[+] The overall presentation and structure are well-organized. The introduction, preliminary, and method sections are well-written. The threads are easy to follow.\n\n[+] The results and analysis of the experiments are detailed and comprehensive. The authors provide extensive experiment results and analyze them in detail. In my opinion, this paper is fine with its empirical results and analysis. \n\n[+] All the codes and hyperparameters are open-sourced for reproducibility.\n\n[+] I believe this method has potential applications for larger problems and more advanced models. Augmenting search and planning trajectories could be a crucial step in training models like o1."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduced the Guided Streat of Search, which integrates optimal solutions into the self-generation process of LLMs to improve their search and planning capabilities. The main contribution is extending the existing Stream of Search (SoS) approach to Guided Stream of Search (GSoS), which incorporates optimal solutions into the self-generation process in a progressive manner. GSoS uses unsuccessful search trajectories as contexts to integrate each intermediate action from the optimal solution, producing high-quality search trajectories that are then used for SFT. GSoS is evaluated on a search benchmark and demonstrates outperformance in comparison to both SFT and RLHF baselines, regarding both seen targets and unseen targets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "[-] The evaluation benchmark is not convincing to me. It appears that this benchmark can easily be formulated as a real search problem, making the use of an LLM unnecessary. I think the authors should consider testing their framework on a more complex benchmark.\n\n[-] It's doubtful that the unseen targets in Countdown can be considered a valid evaluation of generalization, given the high similarity between the supposedly different tasks in the dataset.\n\n[-] The backbone model, GPT-2, is somewhat outdated. Additionally, I could not find an explanation provided for choosing GPT-2 over other models.\n\nOn a minor note, I do not observe any planning capability (i.e., the ability to plan ahead of actions) from this method or within the benchmark, despite its repeated emphasis in the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024guided,\ntitle={Guided Stream of Search: Learning to Better Search with Language Models via Optimal Path Guidance},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1YXkDXIqVw},\nnote={under review}\n}"
},
"abstract": {
"value": "While language models have demonstrated impressive capabilities across a range of tasks, they still struggle with tasks that require complex planning and reasoning. Recent studies have proposed training language models on search processes rather than optimal solutions, resulting in better generalization performance even though search processes are noisy and even suboptimal. However, these studies overlook the value of optimal solutions, which can serve as step-by-step landmarks to guide more effective search. In this work, we explore how to leverage optimal solutions to enhance the search and planning abilities of language models. To this end, we propose guided stream of search (GSoS), which seamlessly incorporates optimal solutions into the self-generation process in a progressive manner, producing high-quality search trajectories. These trajectories are then distilled into the pre-trained model via supervised fine-tuning. Our approach significantly enhances the search and planning abilities of language models on Countdown, a simple yet challenging mathematical reasoning task. Notably, combining our method with RL fine-tuning yields further improvements, whereas previous supervised fine-tuning methods do not benefit from RL. Furthermore, our approach exhibits greater effectiveness than leveraging optimal solutions in the form of subgoal rewards."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"planning with language models",
"supervised fine-tuning with self-generated data",
"reinforcement learning fine-tuning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/934f37ce5f9ae39ea9004df38d8864814f28d600.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/20bb2dd6a18f3303097fd8557b8f74aad3882a57.zip"
},
"title": {
"value": "Guided Stream of Search: Learning to Better Search with Language Models via Optimal Path Guidance"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1YYp1rPRlm | Differentially Private Deep Model-Based Reinforcement Learning | main | Active | machine learning;reinforcement learning;privacy;differential privacy;deep learning;model-based;offline | reinforcement learning | 5;5;5;6 | 4;4;4;2 | 3;2;3;2 | 2;2;2;3 | 3;3;3;3 | 5.25 | 3.5 | 2.5 | 2.25 | 3 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "In summary, this work investigates a natural setting and models the problem in a reasonable way. The work guarantees DP through model learning and post processing, which is intuitive. The performance of the algorithm seems decent."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work extends model-based offline reinforcement learning, namely MORL, to its differentially private variant, namely PriMORL. The work intends to guarantee trajectory-level DP, which means it treats two datasets that differ in at most one (entire) trajectory as neighboring datasets and requires the algorithm's outputs to be indistinguishable between them.\n\nThe differential privacy is achieved by model privacy, i.e., if the model is learned in a DP way, then by post processing the algorithm is also DP. This, though, comes with the limitation that the learning algorithm can no longer access the dataset once the model is obtained from the previous phase. To achieve DP model learning, it randomly draws a subset of trajectories and uses this batch of data to estimate a gradient. A clip is then applied to the gradient, which bounds the sensitivity of the gradient. The work discusses different clipping techniques that fine-tune the clipping threshold. The DP guarantee is then proved via the moments accountant.\n\nOnce a model is obtained, the policy is trained through a pessimistic private MDP, following the intuition that being aware of the model uncertainty requires pessimism. This intuition inspires the authors to run soft actor-critic on the pessimistic variant of the MDP, where the reward is reduced by the uncertainty level. This to some extent mitigates the cost of not being able to access the data once the model training is complete.\n\nMany experiments are provided on tasks like pendulum, cartpole, and halfcheetah. The authors did not seem to compare their algorithm with baseline methods, although there seem to be many. Still, the performance of the proposed algorithm is reasonably good on its own."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Model/Techniques are not exciting.\n2. No baseline comparison.\n3. Limited testbeds"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "It seems that the number of ensembles $N$ plays a vital role in the results. This work has already reduced the dependency to $\\sqrt{N}$ in the sensitivity. I am just curious whether it is possible to further reduce it when training ensemble models."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed method that provides privacy guarantees for training ensemble models is novel.\n2. The investigation of the problem is thorough."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studied differentially private model-based offline reinforcement learning (RL). The paper proposed a new algorithm that provides differential privacy for training ensemble models. Besides theoretical guarantees, the experiments also demonstrate the effectiveness of the proposed algorithm."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There exist several typos. For example, in line 274, 'it does entirely remove...' I assume it should be 'it does not entirely remove...' because increasing $N$ will degrade the model performance.\n2. The major weakness is the lack of a more explicit discussion of each term in the theoretical results, such as $\\epsilon_p$ and $\\epsilon_p^{DP}$. I would be curious whether these terms depend on the privacy parameter or $N$; if so, what the approximate dependency should be."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Theorem 4.2, a general formula for \\epsilon^{MA}(z, q, T, \\delta) is presented, but neither the main text nor the appendix provides a complete, detailed expression of this formula. Could you include a full derivation of this formula so we can clearly understand how \\epsilon^{MA} is calculated based on inputs like z, q, T, and \\delta?\n2. In Section 4.3.2, the paper discusses handling model uncertainty under a private setting but appears to apply existing uncertainty-handling techniques from non-private settings directly to the private setting. Could you clarify any special considerations or unique aspects of handling uncertainty in the private setting? Specifically, how might model error and uncertainty differ under a private setting, given its unique constraints? We would appreciate any further insights on this point."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is clearly organized. The authors first introduce the trajectory-level differential privacy framework within the offline RL setting, then explain the method for training private models using a model ensemble approach, covering both implementation details and theoretical guarantees. Finally, they describe how policy optimization is achieved by incorporating uncertainty techniques, with theoretical support provided as well."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces PRIMORL, a differentially private (DP) model-based reinforcement learning algorithm for offline settings. PRIMORL trains policies for continuous control tasks while ensuring trajectory-level privacy by learning an ensemble of DP models from offline data. This approach protects against privacy leaks in RL, especially critical in applications where individual trajectories may contain sensitive information. PRIMORL operates in an offline, infinite-horizon setting, leveraging private models to optimize policies without further interaction with the environment. Empirically, PRIMORL demonstrates competitive performance on deep RL tasks, advancing private RL beyond simpler tabular and linear MDPs and addressing practical privacy-performance trade-offs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tIt is unclear how the private-variant experiments control for \\epsilon. In the theoretical guarantees section, \\epsilon is presented as a theoretical bound, yet here it seems to be treated as a tunable hyperparameter, with little explanation connecting these two perspectives. \n2.\tWhile the motivation for the work is compelling, the experimental design is relatively simple and basic. I would have liked to see experiments that address the unique challenges of applying DP frameworks within the RL domain, yet this paper lacks a broader experimental analysis to underscore the real-world relevance of introducing DP into RL."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The problem of offline RL with DP is important and well-motivated.\n2. This paper proposes a practical solution to the problem.\n3. The authors do experiments to support their algorithm.\n4. The paper is well-written in general."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors consider private deep offline reinforcement learning (RL), where the goal is to train a policy on standard control tasks that is differentially private (DP) with respect to individual trajectories in the dataset. To achieve this, they introduce PriMORL, a model-based RL algorithm with formal differential privacy guarantees. PriMORL first learns an ensemble of trajectory-level DP models of the environment from offline data. It then optimizes a policy on the penalized private model, without any further interaction with the system or access to the dataset. In addition to theoretical guarantees, they empirically demonstrate that PriMORL enables the training of private RL agents on offline continuous control tasks with deep function approximations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The main concern is about technical novelty. The definition of Trajectory-level DP is directly adapted from [1]. The first part directly applies DP-FEDAVG, while the second part is about learning from the private model with pessimism. To the best of my knowledge, [1] is based on the same idea of private model + pessimism. The DP guarantee for the private model is from previous results, and the DP guarantee for learning the policy is from standard post-processing. I do not see any technical challenge in the process. It would be better if the authors could discuss the challenges.\n\n[1] Dan Qiao and Yu-Xiang Wang. Offline reinforcement learning with differential privacy.\n\n2. Proposition 4.4 only provides an error bound for estimating the value function of $\\hat{\\pi}$, which is not standard. Is it possible to derive any results about the sub-optimality gap $V^\\star-V^{\\hat{\\pi}}$?\n\n3. In the experiments, the privacy protection is very weak (some $\\epsilon$ being close to 100). What will happen for more practical choices of $\\epsilon$? E.g. $\\epsilon \\approx 1$."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We address deep offline reinforcement learning with differential privacy guarantees, using a model-based approach."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024differentially,\ntitle={Differentially Private Deep Model-Based Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1YYp1rPRlm},\nnote={under review}\n}"
},
"abstract": {
"value": "We address private deep offline reinforcement learning (RL), where the goal is to train a policy on standard control tasks that is differentially private (DP) with respect to individual trajectories in the dataset. To achieve this, we introduce PriMORL, a model-based RL algorithm with formal differential privacy guarantees.\nPriMORL first learns an ensemble of trajectory-level DP models of the environment from offline data.\nIt then optimizes a policy on the penalized private model, without any further interaction with the system or access to the dataset. \nIn addition to offering strong theoretical guarantees, we empirically demonstrate that PriMORL enables the training of private RL agents on offline continuous control tasks with deep function approximations, whereas current methods are limited to simpler tabular and linear Markov Decision Processes (MDPs). We furthermore outline the trade-offs involved in achieving privacy in this setting."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"machine learning",
"reinforcement learning",
"privacy",
"differential privacy",
"deep learning",
"model-based",
"offline"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8c63b4c6f87fc38a78d80c9ce8dd01a3cc22bee5.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/715fdf070b28062bb6578bcddf11ff4dace339e9.zip"
},
"title": {
"value": "Differentially Private Deep Model-Based Reinforcement Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1YZw3RK2kg | Integrating State Space Model and Transformer for Global-Local Processing in Super-Resolution Networks | main | Withdraw | Computer Vision and Pattern Recognition;image super-resolution | applications to computer vision, audio, language, and other modalities | Yukai Sun;Zheng Chen;Yulun Zhang;Jinjin Gu | ~Yukai_Sun2;~Zheng_Chen11;~Yulun_Zhang1;~Jinjin_Gu1 | 3;3;5;5 | 5;3;4;5 | 3;2;3;3 | 2;2;3;2 | 3;2;3;4 | 4 | 4.25 | 2.75 | 2.25 | 3 | 0.301511 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please see weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper is well written and organized.\n\n2. The combination of Mamba and Transformer is promising for improving the performance of image super-resolution tasks.\n\n3. The ablation and main experiments are extensive and comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a hybrid network based on Mamba and Transformer for image super-resolution. Register tokens and SE-Scaling attention mechanisms are introduced to improve performance and reduce computation. The experimental results demonstrated the effectiveness of the method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty of this paper is limited, and the main contribution seems to be just combining Mamba and Transformer. SE-Scaling did not show significant improvement over previous work.\n\n2. Since Mamba models perform poorly in capturing local information, why not integrate Mamba with CNNs, which are good at local modeling?\n\n3. In addition to the parameters and FLOPs, it is necessary to compare the inference latency of the different methods on the device.\n\n4. From Figure 10, there is no significant difference between the proposed SST and MambaIR on the LAM attribution map. Does this indicate that the Transformer provides limited benefit?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Why do the authors not compare their method with HAT and discuss the advantages and disadvantages between them?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This work effectively integrates Mamba and Transformer, and the visualization results show that the hybrid structure can activate a wider range of pixels.\n2. This work made some improvements to Mamba to alleviate the feature artifact problem."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work introduces SST, a new model that integrates Mamba and Transformer to extract global and local information, respectively. This work is well-written, and the method is clear and easy to understand."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors claim the proposed SE-Scaling can significantly reduce the computational cost, but in Table 3, the MACs when using SE-Scaling are higher than those of Channel-Attention.\n2. Although this work integrates Mamba and Transformer, the proposed network SST simply uses Mamba and Transformer alternately and lacks deeper exploration.\n3. The performance of SST shows only a slight improvement compared to existing state-of-the-art models, such as SRFormer. The authors should add comparisons with HAT, OmniSR, etc."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The impact of different proportions of VSSM and MSA on model performance was only verified on Urban100 and Manga109. It remains uncertain whether similar results would be observed on the other three datasets.\n2. Similar to the previous question, it is uncertain whether the number of registration tokens also produces similar results on the other three datasets.\n3. In paper [1], it is noted that Vision Transformer has artifact issues. This raises the question of whether the window attention in this paper exhibits similar phenomena and whether it also utilizes registration tokens.\n\n[1] Vision Transformers Need Registers, ICLR2024"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is very easy to follow. \n2. Extensive experiments were conducted to explore combinations of Mamba and Window Attention. \n3. Experiments were also conducted to investigate the impact of the number of registration tokens on model performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a single-image super-resolution network called SST, which combines Mamba and Window Attention mechanisms. The authors observed that some existing Mamba models do not effectively capture local dependencies in 2D images. Therefore, they leverage window attention to address these limitations. Additionally, the authors also observed that Mamba models tend to produce artifacts. To mitigate this issue, they introduced registration tokens before the SSM scan in SST. The authors conducted extensive experiments to explore different combinations of window attention and Mamba, and compared their method with current mainstream super-resolution networks to validate its effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of comparison with some closely related approaches. With a larger number of parameters, SST-light shows lower performance metrics on almost all benchmarks compared to ATD-light [1].\n2. Super-resolution is a very local computation (at the range of a pixel). It is not demonstrated what the advantage of exploring global interaction is for such a problem.\n\n[1] Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary, CVPR2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tBy introducing the SE-Scaling mechanism, the model reduces the computational burden typically associated with channel attention mechanisms, making it suitable for lightweight applications. \n2.\tThe paper reports that SST achieves state-of-the-art results on both classical and lightweight super-resolution tasks, demonstrating its effectiveness through extensive experiments on various benchmark datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a super-resolution network called SST (State Space Transformer), which integrates Mamba State Space Models (SSM) with Transformer self-attention layers. The authors aim to leverage the strengths of both architectures to enhance single image super-resolution (SISR) tasks. The Mamba model is noted for its ability to process global information efficiently due to its linear complexity, while Transformer models, particularly the Swin Transformer, excel in local region representation but suffer from quadratic complexity, limiting their receptive fields. The proposed SST model addresses the shortcomings of both approaches by combining their advantages. The authors introduce an updateable register to mitigate feature map artifacts commonly found in Mamba models and propose a new attention mechanism called SE-Scaling to reduce computational costs while improving performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe combination of Mamba SSM and Transformer architectures allows the model to capture both global and local contextual information effectively; however, both are existing techniques. Besides, the idea of combining Mamba and Transformer has already been proposed.\n2.\tCompared to SOTA methods, the improvement is not significant, e.g., the ERF in Figure 2 against MambaIR and the quantitative results against DAT.\n3.\tMissing in-depth motivation. This article seems to be just a simple attempt at an IR task by combining Mamba and current Transformer structures.\n4.\tThe model complexity is still large; what is the merit of using Mamba?\n5.\tMore recent Mamba-based image SR works should be referenced and compared."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nsun2024integrating,\ntitle={Integrating State Space Model and Transformer for Global-Local Processing in Super-Resolution Networks},\nauthor={Yukai Sun and Zheng Chen and Yulun Zhang and Jinjin Gu},\nyear={2024},\nurl={https://openreview.net/forum?id=1YZw3RK2kg}\n}"
},
"abstract": {
"value": "Single image super-resolution aims to recover high-quality images from low-resolution inputs and is a key topic in computer vision. While Convolutional Neural Networks (CNNs) and Transformer models have shown great success in SISR, they have notable limitations: CNNs struggle with non-local information, and Transformers face quadratic complexity in global attention. To address these issues, Mamba models introduce a State Space Model (SSM) with linear complexity. However, recent research shows that Mamba models underperform in capturing local dependencies in 2D images. In this paper, we propose a novel approach that integrates Mamba SSM blocks with Transformer self-attention layers, combining their strengths. We also introduce register tokens and a new SE-Scaling attention mechanism to improve performance while reducing computational costs. The resulting super-resolution network, SST (State Space Transformer), achieves state-of-the-art results on both classical and lightweight tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Yukai_Sun2",
"~Zheng_Chen11",
"~Yulun_Zhang1",
"~Jinjin_Gu1"
]
},
"authors": {
"value": [
"Yukai Sun",
"Zheng Chen",
"Yulun Zhang",
"Jinjin Gu"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Computer Vision and Pattern Recognition",
"image super-resolution"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "sun|integrating_state_space_model_and_transformer_for_globallocal_processing_in_superresolution_networks"
},
"pdf": {
"value": "/pdf/a8ef1e869f8f4c5b61dcea3af0e3e3948a9aeb16.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Integrating State Space Model and Transformer for Global-Local Processing in Super-Resolution Networks"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
1YlfHUVq7q | Error Broadcast and Decorrelation as a Potential Artificial and Natural Learning Mechanism | main | Active | Error Broadcasting;Biologically Plausible Neural Networks;Backpropagation Alternative;Direct Feedback Alignment | applications to neuroscience & cognitive science | 3;3;5;6 | 4;4;4;3 | 1;2;3;3 | 2;2;2;3 | 2;3;2;2 | 4.25 | 3.75 | 2.25 | 2.25 | 2.25 | -0.777778 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you for your questions. Let me clarify.\n\nBy “orthogonal,” I mean uncorrelated—like vector orthogonality. My concern is that the performance gains you’re seeing might be due to decorrelating representations of different classes at each layer and not due to decorrelating them with the error signal.\n\nThis came up because I had trouble following the logic of applying the reverse of the theorem about error signal orthogonality.\n\nTo check this, you might compare your method to a baseline that decorrelates class representations at each layer using only the class labels without involving the error signal. I don’t have a specific algorithm in mind, but this could help show whether the key factor is the decorrelation with the error signal or just between classes.\n\nI hope this helps."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Dear Reviewer,\nWe would like to thank you for the comprehensive review. Before drafting our response, we want to ensure we fully understand the reviewer’s points. Therefore, we kindly request clarification on the following:\n\nIn Weakness Point 2 and Question 2, the reviewer suggests that the performance gains reported in our article may result from 'orthogonalization of layer representations' and/or 'orthogonal representations of different classes.' We would be grateful if the reviewer could clarify precisely what is meant by these terms and indicate any mechanisms within our method that might be contributing to this effect. We note that in our framework, 'orthogonality' refers to being uncorrelated in a statistical sense. However, we believe the reviewer’s use of orthogonality pertains to vector orthogonality with respect to the Euclidean inner product.\n\nAdditionally, could the reviewer provide more details about the exact implementation of the “comparative baseline” that we are asked to compare against: for example, what is meant by and how is local class orthogonalization implemented in each layer? Is there an available article/codebase that the reviewer can point us to for this comparison baseline."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "A kind request for clarification"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the above weaknesses. Moreover:\n\n1.\tWhy does BP in Table 2 show such a low performance? Did the authors try to use a different CNN architecture to get a better performance? \n2.\tThe MMSE estimator-based derivation looks great, but in terms of network training, is optimal MMSE estimator the best objective of an arbitrary defined network? (since different tasks might have different loss functions)\n\nRef:\n\n[1] Athanasios Papoulis and S Unnikrishna Pillai. Probability, Random Variables, and Stochastic Processes. 2002\n\n[2] Journe et.al., Hebbian deep learning without feedback, ICLR 2023"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe paper proposes a novel idea that uses the orthogonality property of the optimal MMSE, which avoids weight symmetry problem in conventional backpropagation.\n2.\tCompared with direct feedback alignment (DFA) method, the proposed EBD method provides better theoretical illustration and the results in MNIST and CIFAR-10 tasks are better."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new learning framework for neural networks that directly broadcasts output error to individual layers. The main idea is to minimize the correlation between the layer activations and output errors, which is based on the orthogonality property of minimum mean square error estimators developed by Papoulis&Pillai 2002 [1]. The framework is implemented on MNIST and CIFAR10 benchmark tasks for MLP and CNN."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe method still faces the critical issue of scaling up, as most of non-BP learning frameworks exist.\n2.\tThe experiments show EBD is only slightly better than DFA, while it is not comparable with other SOTA biologically plausible methods (e.g. Hebbian base method [2])"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The premise of this work hinges on the orthogonality of the error of the optimal estimator to the neural representations. However, the reverse is not addressed: a vector that is orthogonal to a set of functions of the input is not necessarily indicative of an optimal estimator, and functions orthogonal to the estimator may not be meaningful. Could you clarify the theoretical foundation of your algorithm and its precise connection to the theorem you reference?\n\n2. To strengthen your claims, it would be helpful to demonstrate that the performance improvements are not solely due to orthogonalizing representations in each layer. Your experimental results primarily focus on training fully connected networks on MNIST and CIFAR-10, raising the possibility that similar gains could be achieved through layer-wise orthogonal class representations with SGD applied only at the output. Can you comment on this possibility or provide additional evidence?\n\n3. Is there a potential for an online version of your algorithm that eliminates the need for batch learning? If so, how would this be implemented?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. **Compelling Biological Motivation**: Exploring error broadcasting and feedback alignment as methods for implementing gradient descent in neural networks holds significant promise for biological plausibility. These approaches circumvent the weight transport problem by directly transmitting error signals to deeper layers. This work introduces an innovative algorithm within this framework, which exhibits improved performance compared to previous error broadcasting methods.\n\n2. **Normative Approach and Theoretical Foundation**: The algorithm’s development, rooted in the theoretical orthogonality property of an optimal MMSE estimator, is intriguing. Framing this method as a normative approach that leverages optimal predictor properties is commendable. However, as noted below, a potential misuse of this theorem raises concerns.\n\n3. **Practical Demonstration with Numerical Results**: The empirical findings showcase the proposed algorithm's practicality, albeit with limitations. Its reported performance on benchmark datasets suggests that it is competitive with state-of-the-art alternatives under certain conditions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors introduce the Error Broadcast and Decorrelation (EBD) algorithm as a novel method for implementing gradient descent in deep neural networks. This approach addresses the limitations of traditional backpropagation (BP), which requires biologically unrealistic components such as weight transport and symmetric feedback pathways. The EBD algorithm builds on a key theorem from minimum mean square error (MMSE) estimation, which states that the output error of an optimal estimator is orthogonal to any function of the input. Leveraging this property, the authors propose that the activations in each layer, as functions of the input, be orthogonal to the output error. This orthogonality condition forms the basis for their weight update rule, which aims to decorrelate activations and output errors at each layer. The proposed EBD framework demonstrates competitive performance with BP on benchmark datasets like MNIST and CIFAR-10, particularly in fully connected and simpler architectures. The authors also explore potential extensions to the algorithm, including regularization techniques and forward projection of activations, to enhance stability and performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Theoretical Assumptions and Interpretation**: The reliance on the theorem regarding the orthogonality of an optimal estimator’s error to any function of its input, while foundational, is problematic when extended in reverse. The paper does not adequately explain the consequences of requiring orthogonality between output error and hidden layer activations. Furthermore, in most applications and architecture, the dimensionality of the hidden layers is very large. Constraining a solution to be orthogonal to a single direction is weak, and its benefits are poorly defined. This gap leaves uncertainty regarding how orthogonality aids learning or inference. Thus, the theoretical basis appears tenuous and potentially misapplied.\n\n3. **Ambiguity in Performance Implications**: Although the algorithm performs well on real datasets, this success might stem from something other than the stated theoretical premise. The observed gains could be attributed to a different mechanism, such as orthogonal representations of different classes, rather than the error signal’s orthogonality. It would be valuable for the authors to test whether the performance improvement is due to orthogonalized class representations or if it is indeed a result of their premise. A comparative baseline using local class orthogonalization with an SGD-trained readout on the penultimate layer would provide insights into the true contribution of the proposed mechanism.\n\n4. **Biological Plausibility of Batch Learning**: The paper’s claim of biological relevance is weakened by the batch learning requirement, which necessitates retaining and normalizing the entire batch at each layer. While replacing weight transport and feedback pathways with error broadcast is a step toward biological realism, the reliance on batch-based updates undercuts this claim. 
The authors should consider the feasibility of online, more biologically plausible approaches and address whether the proposed method truly enhances biological plausibility. Alternatively, the paper can focus on the mathematical foundation of error broadcasting and not on biological realistic implementations of gradient descent.\n\n5. **Lack of Clarity in Possible Extensions**: In Section 4, the authors introduce several extensions to the EBD algorithm. The first involves regularization techniques aimed at preventing layer collapse. While preventing collapse is essential for maintaining active and diverse representations, the specific normalization methods proposed are neither novel nor particularly informative. Their inclusion does not substantially enhance the originality of the work.\n The second extension discussed is the forward projection of neural activations onto the output or penultimate layer, followed by an orthogonalization process at that stage. The rationale behind this step remains unclear. The manuscript provides no compelling biological basis for this projection mechanism, suggesting that its primary motivation is performance optimization rather than biological plausibility. Notably, the statement in line 448—“This projection facilitates the optimization of the decorrelation loss by adjusting the parameters of the final layer”—is ambiguous. It lacks a clear, rigorous mathematical explanation that would elucidate how this projection supports the training process. A more detailed formulation or analysis is necessary to justify the inclusion and clarify the impact of this component on the algorithm’s overall functionality.\n\n6. **Performance Analysis on Complex Architectures**: The results presented in Section 5 show that the algorithm's performance is almost on par with backpropagation. However, demonstrating performance close to BP on simpler datasets, such as MNIST or fully connected networks trained on CIFAR-10, is not sufficiently informative. 
While useful as initial proof-of-concept validations, these comparisons do not substantiate the broader claims of the algorithm’s novelty or practical utility. Combined with the previously mentioned theoretical limitations, the results fall short of convincingly demonstrating the value and distinct advantages of the proposed EBD approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- in line 156 is N_k the size of layer k? this should be specified\n- that g can be any nonlinear function seems a powerful result. How much did you explore its possibilities? It seems a big hyperparameter to choose but I didn't get a feel for what it should be\n- the error epsilon is a vector so why is it not in bold? (it currently appears as a scalar)\n- for non-linear networks the error landscape is typically non-convex and has many local optima which are found during learning instead of one global optimum. How does the main theoretical results (lemmas A.1/A.2 tie in with this?\n- for the equation in line 199 R is defined recursively, but what is R[0]?\n- does the forgetting factor lambda lie in [0,1]?\n- In section 2.3 what do W_1, W_2 mean? Do they directly related to W (e.g. the first/second column). I presume not given equations 6/7 but if they don't they should be called something else\n- to what extent does EBD depend on batch size? It seems like it would require large batches to get a good correlation estimate, but this doesn't seem to fit in with the biological plausibility of the algorithm?\n- Why is EBD a 3-factor learning rule but not backprop? is it not possible to consider the post-synaptic/modulatory signal as the error gradient with respect to the pre-synaptic neuron?\n- in 3.2 why are the corinfomax equations which involve determinants etc biologically plausible? It's not clear to the non-expert reader. Given there are lateral connections, are we also dealing with RNNs instead of feedforward nets now?\n- in algorithm 1 why are activations H and errors E and bias B now in caps? Also the link to the corinfomax equations above is not clear to me at all\n- In section 4 line 393 it's written that these extensions are 'building on the CorInfoMax-EBD algorithm', but I don't understand why they can't also be applied to standard MLP?\n- Could the power normalization equation in 4.1.1 not be written as a norm over the batch. 
I personally find the notation with [n] confusing\n- out of interest is 4.1.2 itself bio-plausible?\n- typos: line 398: period after stability; line 709: linear -> non-linear."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- the text is generally well written\n- the theoretical building block in which this is built - that optimal nonlinear MMSE estimators hvae error orthognoal to functions of input - is interesting and in my view certainly deserves the attention given by the authors. Its implementation - and therefore this paper - should be of value to both ML and neuroscience researchers.\n- it is clear the authors have a good grasp of the theory and technical details of the models considered, and the approach in general seems well thought out\n- relevant literature appears to be duly cited and compared, though I am a non-expert in this field\n- the numerical results presented in section 5 appear impressive, though I would prefer they were elaborated upon a bit more."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a method for training neural networks based on decorrelating layer activities with the output error. This method avoids the need for backpropagation and is a potential solution to the weight transport problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper is rather dense, and I worry that it harms its accessibility. It seems to me some of the technical details/results can be sacrificed to the appendix in place of more motivation/clarification. For example, section 3.2, and the relationship to corinfomax in general, is very difficult to grasp. The motivation seems to be that corinfomax is a biologically plausible model, but I don't understand why corinfomax-EBD is more biologically plausible than the implementation in 2.3. What was the original implementation lacking that corinfomax-EBD addresses?\n- As per above, I would appreciate any more insight into the results and comparison vs other models. E.g. do you have any intuition as to why EBD outperforms NN-GEVB and MS-GEVB? For the corinfomax models it seems that the benefit of EBD is that it avoids the two-phases (?), but is there a reason it makes significantly improvements on the CIFAR-10 dataset?\n- the notation can sometimes be sloppy (see below)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The weaknesses section above contains most of my concerns which should be addressed for an increased score. Here a few additional questions are posed.\n- How might the mechanisms for power normalization and layer entropy be a plausible addition to biological neural networks?\n- Can this framework be extended beyond the MSE loss case? In practice, loss functions in the deep neural network literature are often very different than an MSE loss. In Appendix E.2, correlation curves are shown for the Categorical Cross Entropy loss, however it is unclear if this was used in practice to train networks. Clarity would be appreciated.\n- The computational complexity of the method and the proposed additional learning of correlation structures is not much discussed. How much might such a method cost in this regard?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- This work contributes a new perspective for measurement of a final optimum of network training based upon the MMSE estimator in a principled manner based upon the orthogonality of error and layer-activations.\n- The paper describes this method and its drawbacks within the methods section at length and covers a set of failure modes and extensions.\n- Detailed descriptions of the mathematical steps and hyperparameters of models tested are given."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a novel optimization method called the “Error Broadcast and Decorrelation” (EBD) algorithm. This algorithm attempts to obtain the optimal minimum mean square error (MMSE) estimator by meeting its core criteria: that at optimum, there is zero correlation between an error signal and non-linearly transformed encodings of data. This enables a new perspective on error broadcasting which attempts to capture correlations between errors and neural network layer activations and uses this correlation (covariance) structure to propagate and thereafter minimise correlation. This effectively results in a decorrelation between error and layer-wise activations once converged. This method is combined with a number of additions to stabilize network activity norms and encourage activation entropy within network layers, as well as being integrated into the CorInfoMax framework. This method is finally tested by training of multilayer perceptrons and convolutional networks with the MNIST and CIFAR10 tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Of major and significant concern are the claims of this paper in comparison to existing work by Clark et al. 2021. Specifically, the results of this paper are presented in direct comparison to trained models in earlier work (Clark et al.) while claiming outperformance. However, this paper integrates a number of elements to the training scheme that are not present in the original comparison work. For example, the code and tables of this paper suggest that among other additions this work makes use of a learning rate scheduler, as well as potentially using many more epochs of training (unclear). In comparison, the original work by Clark et al. has no learning rate scheduler and far fewer training hyperparameters in general. This suggests that the comparison is entirely inappropriate. To provide a genuine comparison, I would encourage the authors to carry out a reimplementation and rigorous test against existing methods (at least against BP) in which the same degree of parameter searching/sweeping is carried out for all methods compared. Otherwise comparison is uninformative at best, and misleading at worst. For these reasons, the results in Tables 1 and 2 cannot in their current form be trusted to provide understanding of the relative efficacy of training methods.\n- This paper claims to provide a novel biologically plausible learning mechanism (even in the title), however to make this method work, a power normalizing and entropy encouraging mechanism is added to network dynamics. It is not discussed whether these are reasonable mechanisms within a biologically plausible context.\n- The current set of simulations results are sufficiently limited that it is not clear whether this method would scale. In particular, biologically-plausible rules can succeed at tasks such as MNIST or CIFAR-10 level but fail completely at large scale (see Bartunov et al. 2018, Neurips). 
Currently, there are no explanations of how well this method might do when scaled to harder datasets, or even how well it scales when network width or depth is modified. Without measures of performance across network scale and task complexity, it is not possible to know whether this method’s performance is robust to depth/task-complexity.\n- The description of this work’s method is comprehensive but requires a reader to go back and forth to understand it well. For example, the added extensions to EBD, which are used during training, are described with some distance after the main method (in Section 4) making it difficult to understand all moving parts of the simulation in a single read. Furthermore, the paper in general is too heavy on the methods aspects leaving zero room for interpretation of results and discussion. A refactoring of the paper in these respects would greatly help its readability and contribution as well as enabling a more complete discussion on the implications of the work."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a principled error broadcasting framework to serve as a more biologically realistic and flexible alternative to the backpropagation algorithm, based on the orthogonality property of nonlinear MMSE estimators."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024error,\ntitle={Error Broadcast and Decorrelation as a Potential Artificial and Natural Learning Mechanism},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1YlfHUVq7q},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce the Error Broadcast and Decorrelation (EBD) algorithm, a novel learning framework that addresses the credit assignment problem in neural networks by directly broadcasting output error to individual layers. The EBD algorithm leverages the orthogonality property of the optimal minimum mean square error (MMSE) estimator, which states that estimation errors are orthogonal to any nonlinear function of the input, specifically the activations of each layer. By defining layerwise loss functions that penalize correlations between these activations and output errors, the EBD method offers a principled and efficient approach to error broadcasting. This direct error transmission eliminates the need for weight transport inherent in backpropagation. Additionally, the optimization framework of the EBD algorithm naturally leads to the emergence of the experimentally observed three-factor learning rule. We further demonstrate how EBD can be integrated with other biologically plausible learning frameworks, transforming time-contrastive approaches into single-phase, non-contrastive forms, thereby enhancing biological plausibility and performance. Numerical experiments demonstrate that EBD achieves performance comparable to or better than state-of-the-art methods on benchmark datasets. Our findings suggest that EBD offers a promising, principled direction for both artificial and natural learning paradigms, providing a biologically plausible and flexible alternative for neural network training with inherent simplicity and adaptability that could benefit future developments in neural network technologies."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Error Broadcasting",
"Biologically Plausible Neural Networks",
"Backpropagation Alternative",
"Direct Feedback Alignment"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/246039d9d8ca7789d7b3c2f33914c31bc0ffdeb3.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/237402f4a565b806ab9c040fc7013ee2ca9b8466.zip"
},
"title": {
"value": "Error Broadcast and Decorrelation as a Potential Artificial and Natural Learning Mechanism"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Z3C49JQVf | Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks | main | Active | backdoor attack;data selection | alignment, fairness, safety, privacy, and societal considerations | 3;3;5;6 | 4;4;4;5 | 2;3;2;3 | 2;1;2;3 | 3;2;3;3 | 4.25 | 4.25 | 2.5 | 2 | 2.75 | 0.777778 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The authors should clarify the novelty, choice of limited datasets, the use of older defense strategies, and the dependency on pretrained models."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper improves traditional clean-label backdoor attacks by proposing a threat model that is more applicable in real-world scenarios.\n\n2. The method is claimed to achieve higher attack success rates with a lower poisoning rate, showcasing efficient use of resources."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method for enhancing clean-label backdoor attacks on deep neural networks. Unlike traditional clean-label attacks that apply triggers randomly, this approach selectively poisons challenging samples within the target class, boosting attack success rates with fewer poisoned samples. The authors introduce two strategies: using pretrained models to identify \"hard\" samples and leveraging out-of-distribution data for sample selection. Tested on CIFAR-10 and GTSRB datasets, this method outperforms random poisoning and is resilient against popular defenses like STRIP and Neural Cleanse, highlighting a need for stronger countermeasures against selective clean-label attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In my opinion, the method primarily introduces a data selection strategy, which lacks sufficient novelty.\n\n2. The evaluation is conducted only on CIFAR-10 and GTSRB datasets, limiting insight into the method's performance across other dataset types and application domains.\n\n3. The paper primarily tests against older defense strategies. Implementing more recent and sophisticated defenses, including adaptive methods like sample-specific anomaly detection, would strengthen the evaluation.\n\n4. The pretrained model strategy relies on the availability of pretrained models in similar domains, which may not always be accessible in real-world applications."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the questions in weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The main strengths of the paper lie in its studied threat model, proposed sampling strategies and experimental evaluation.\n\n- I think that the proposed threat model is important as it exposes yet another backdoor threat where an attack only needs access to the data of the target class. The demonstrations that existing backdoor attacks under this threat model are not satisfactorily effective are an important contribution of the paper.\n- The proposed sampling strategies are novel, especially when they could be used with existing backdoor attacks, such as BadNets, SIG, Narcissus, etc…) to boost their backdoor performances under the studied threat model. It’s also interesting finding where the effectiveness of the proposed strategies even when there is less and less assumptions on the pretrained models or the OOD datasets. \n- The paper includes thorough analysis of the proposed strategies, demonstrating their effectiveness when they’re used with several existing backdoor attacks across different datasets. The evaluation also shows the effectiveness against several defenses."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies clean-label backdoor attacks in a very constrained setting, where the attacker only needs access to the training data from the target class and has no prior knowledge of the victim model, training process, and the other classes, and focuses on data-selection strategies to boost the performance of existing clean-label attacks in this constrained setting. The proposed data selection strategies include (1) the use of a pretrained model (when such exists) or (2) the use of an OOD dataset (when the pretrained model is not available) to train a surrogate model. The experimental results demonstrate the proposed strategies significantly enhance the ASR and several existing clean-label backdoor attacks, compared to random selection strategies. In addition, the paper demonstrates that the proposed strategies are resilient against several existing defenses."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I find that the paper has the following concerns:\n* The Narcissus results, while interesting, are different from what reported in their original paper. Can the authors explain why there are such differences? \n* The OOD approach rely on out-of-distribution data but it’s not clear how this dataset could be obtained, or whether there are any specific requirements of the datasets to maintain the effectiveness of the attacks?\n* Assuming that the victim could distribute the target class data collection to multiple sources, how does the proposed attacks perform in this case? \n* Do the authors have any suggestions about potential mitigation approaches against the proposed attacks in the studied threat model?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Why would an attacker use the OOD strategy proposed in section 4.4, as it requires training a surrogate model and appears to work worse than using a pretrained model?\n- Why use a latent space clustering approach instead of using the loss from a pretrained zero-shot image classifier like CLIP?\n- Why use VICReg instead of a more general feature extractor like CLIP?\n- Where are the training settings used in experiments adapted from?\n\nReferences\n\n[1] Alexander Turner, Dimitris Tsipras and Aleksander Madry. \"Label-Consistent Backdoor Attacks.\", 2019. \n\n[2] He, Kaiming, et al. \"Deep Residual Learning for Image Recognition,\" 2016. \n\n[3] Gao, Yinghua, et al. \"Not All Samples Are Born Equal: Towards Effective Clean-Label Backdoor Attacks.\", 2023."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The proposed method is a widely applicable technique to enhance to clean label attacks.\n- The experiments do a good job differentiating the surrogate model from victim model and therefore the attack shows convincing transferability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method for improving the effectiveness of clean-label attacks. It introduces a threat model where the attacker only has access to data belonging to a specific class and has no knowledge about other classes in the dataset. The paper proposes a method for using samples with hard to learn features to create poison-efficient clean label attacks. The proposed method finds these samples by clustering the latent features of a surrogate model. The paper explores using a pretrained model and a model trained on OOD data as the surrogate model. The paper evaluates the clean-label attack against backdoor defenses and data cleaning methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper trains for 300 epochs which is significantly longer than it should take to train the model on CIFAR-10/GSTRB [1,2] and makes attack success due to over-fitting very likely. Around 100 epochs seems to be more standard. Ideally, to simulate a competent defender early stopping should be employed. I.e. stopping the run when validation loss plateaus.\n\n- The experiments use very weak baselines. The paper only evaluates how the method performs compared to random sampling. At minimum the paper should compare against [3]. Especially because [3] could easily be adapted to adhere to this paper's threat model by using a pretrained model. Therefore, the experiments are not sufficent to jusifty that the proposed method is stronger than a slightly adapted version of [3].\n\n- The paper claims that it's threat model represents *\"the **most** constrained data-poisoning threat.\"* However, there are other perfectly reasonable threat models that would make this attack unrealistic. For example, an opportunistic attacker that doesn't get to choose the subset of samples in the dataset they are able to manipulate.\n\n- When evaluating the attack against defenses the paper does not describe the hyperparameter settings used by each defense nor how those settings were derived.\n\nMinor:\n- Bolding of best methods or aggregation would make Tables 2 and 3 more interpretable.\n- There are many typos in the manuscript."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The authors should include more related works and advanced baselines in their paper.\n2. The authors should better clarify their main contributions than those introduced in existing works.\n3. The authors should avoid overclaims.\n4. The authors should conduct more comprehensive experiemnts. \n\nMore details are in the 'Weakness' section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is easy to follow to a large extent.\n2. The motivation is clear and with empirical support.\n3. The paper introduces a clean-label backdoor attack that works effectively in a constrained scenario where the attacker has limited data access (only one target class). This approach is realistic for scenarios with privacy or geographical constraints, enhancing the practical relevance of the attack model."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores a practical scenario for clean-label backdoor attacks, where an attacker’s access is limited to a single class of data within a decentralized training setup. This constrained threat model reflects real-world data collection challenges, such as privacy restrictions and geographical limitations. To enhance poisoning efficiency under these conditions, the paper introduces two sample selection methods specifically designed for this limited-access scenario."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Numerous studies [1,2,3,4,5,6,7,8] have addressed sample selection in backdoor attacks, several of which [6,8] specifically focus on sample selection for clean-label backdoor attacks. Omitting these key relevant works is a significant oversight and should be addressed to ensure a comprehensive discussion of the literature.\n2. The novelty of this paper is limited, as it leverages a pre-trained model to identify \"hard samples\" for poisoning—a concept already explored in several studies [6,7,9]. However, the distinctions between this approach and prior work are not clearly articulated.\n3. The first contribution claimed by this paper is the introduction of a new backdoor threat model, where an attacker, acting as a data supplier, has access only to the target class data yet can still execute effective clean-label backdoor attacks. However, previous studies [10,11] have already examined this threat model in depth, providing detailed discussions on \"Why are dirty-label attacks more effective than clean-label attacks?\" Consequently, the originality and contribution of this paper raise some concerns.\n4. The discussion of backdoor attacks and defenses in the related work sections of this paper is outdated. \n5. There are some potential over-claims. For example, Line 156-159: Accessing only samples from a single non-target class is more difficult setting than yours.\n6. Missing some important experiments.\n- Main Experiments\n - The authors should also include the results of methods using all training samples for references, although you have a different setting.\n - It would be better to include the results of Narcissus here instead of in the appendix.\n - I would like to see whether the proposed method is also effective for untargeted clean-label backdoor attacks (e.g., UBW-C in [12])\n- The Resistance to Defenses: The authors should evaluate their methods on more advanced backdoor defenses (such as [13, 14] and their baselines). 
\n\n\n\n\n**References**\n1. Computation and data efficient backdoor attacks\n2. Explore the effect of data selection on poison efficiency in backdoor attacks\n3. Boosting backdoor attack with a learnable poisoning sample selection strategy\n4. A proxy-free strategy for practically improving the poisoning efficiency in backdoor attacks\n5. Minimalism is King! High-Frequency Energy-based Screening for Data-Efficient Backdoor Attacks\n6. Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks\n7. Confidence-driven Sampling for Backdoor Attacks\n8. Clean-label Backdoor Attacks by Selectively Poisoning with Limited Information from Target Class\n9. Not all samples are born equal: Towards effective clean-label backdoor attacks\n10. Efficient backdoor attacks for deep neural networks in real-world scenarios\n11. Narcissus: A practical clean-label backdoor attack with limited information\n12. Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection\n13. Towards Reliable and Efficient Backdoor Trigger Inversion via Decoupling Benign Features\n14. IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a strategy that select data to poison to improve the success rate of clean label backdoor attacks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024wicked,\ntitle={Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Z3C49JQVf},\nnote={under review}\n}"
},
"abstract": {
"value": "Deep neural networks are vulnerable to backdoor attacks, a type of adversarial attack that poisons the training data to manipulate the behavior of models trained on such data. \nClean-label attacks are a more stealthy form of backdoor attacks that can perform the attack without changing the labels of poisoned data.\nEarly works on clean-label attacks added triggers to a random subset of the training set, ignoring the fact that samples contribute unequally to the attack's success. This results in high poisoning rates and low attack success rates.\nTo alleviate the problem, several supervised learning-based sample selection strategies have been proposed.\nHowever, these methods assume access to the entire labeled training set and require training, which is expensive and may not always be practical.\nThis work studies a new and more practical (but also more challenging) threat model where the attacker only provides data for the target class (e.g., in face recognition systems) and has no knowledge of the victim model or any other classes in the training set.\nWe study different strategies for selectively poisoning a small set of training samples in the target class to boost the attack success rate in this setting. \nOur threat model poses a serious threat in training machine learning models with third-party datasets, since the attack can be performed effectively with limited information. Experiments on benchmark datasets illustrate the effectiveness of our strategies in improving clean-label backdoor attacks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"backdoor attack",
"data selection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3f2db51fbde69adef73921cfda0af2f4b493125e.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e3fe55fbf7cd909ff7cee301c0bffd58e8b34195.zip"
},
"title": {
"value": "Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Z6PSw7OL8 | BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities | main | Active | Image generation;Generative model;Representation learning | generative models | 3;5;6;8 | 5;4;3;3 | 2;2;3;4 | 2;2;3;3 | 2;2;3;3 | 5.5 | 3.75 | 2.75 | 2.5 | 2.5 | -0.919866 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed.",
"Yes, Other reasons (please specify below)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- The SiT architecture reports improved results [1]; could authors clarify more about the SotA claim? \n\n- Could the authors please fix the citation format at L196 by using \\citet{}?\n\n[1] Ma et al. ECCV 2024, SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Paper is clear and well-written.\n2. The binary latent idea is new regarding Image Generation through LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "BiGR is a novel conditional image generation model that uses compact binary latent codes to enhance both generative and representation capabilities. It unifies generative and discriminative tasks within the same framework, featuring a binary tokenizer, a masked modeling mechanism, and a binary transcoder for binary code prediction. BiGR introduces an entropy-ordered sampling method for efficient image generation and demonstrates superior performance in generation quality and representation capabilities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the idea introduced is novel, it is hard for me to reason why the design choices used in the paper are leading to improving representation capabilities. It would be great if authors shed light on this much more.\n\n- The idea of the diffusion process in Binary seems interesting, however the motivation of why it should improve the overall results could be clearer.\n\n- The authors claim that they have replaced causal attention with bi-directional attention. I need help understanding how this can be done at the inference stage and what fine-tuning was done to make it work.\n\n- The LlamaGen paper reports better results for the ImageNet (256x256). So could the authors please clarify the discrepancy in the results reported?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tHow does BiGR handle scenarios where binary latent codes introduce quantization artifacts? \n2.\tDoes entropy order sampling prioritize representative features (attribute) of an object or class? Is there any relation in order of sampling and semantic characteristics?\n3.\tIt would be insightful to include examples of failure cases."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- Unified Framework for Generation and Representation. The proposed method effectively combines generative and discriminative capabilities within a single model, demonstrating strong performance in both areas.\n- Strong Experimental Validation. The model's performance is validated through extensive experiments, showing improvements over previous methods in generation quality and discriminative accuracy.\n- Fast Inference Speed. The model’s entropy-ordered sampling strategy accelerates the generation process by iteratively unmasking tokens in a confidence-guided manner. This is significantly faster compared to autoregressive models.\n- Various Applications. BiGR's ability to perform tasks such as inpainting, outpainting, and image enrichment in a zero-shot setting validates its flexibility and generalization capabilities.\n- Extensive Ablations. The paper provides thorough ablation studies that detail the impact of various components and settings on the model's performance.\n- Well written. The paper's motivation is clear and well-connected to the approach. Although some technical parts can be improved with more detail, the paper is well-written overall.\n- The limitations are discussed in the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces BiGR, a conditional image generation model that leverages compact binary latent codes to achieve both high-quality image generation and strong visual representation capabilities. BiGR integrates a binary tokenizer, a masked modeling mechanism, and a binary transcoder to generate binary codes, to achieve efficient generation through an entropy-ordered sampling strategy. The model's design allows it to perform favorably in both generative and discriminative tasks. BiGR demonstrates strong performance on generation metrics, e.g., FID-50k and representation tasks evaluated via linear-probe accuracy. Additionally, the proposed method demonstrates its versatility in applications including image editing and zero-shot generalization on several tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Hyperparameter Complexity. The proposed method relies on several hyperparameters for both training and inference, such as the CFG scale, Gumbel temperature, number of sampling iterations, and number of diffusion steps. This complexity increases the time and resources required for tuning. This is discussed in the limitation section of the paper.\n\n- Fixed Sequence Length: The model’s architecture enforces a fixed sequence length during training, which restricts its flexibility to handle inputs of varying sizes. Generating images at different resolutions requires retraining the model with the new sequence length configuration. This is also discussed in the limitation section of the paper.\n\n- The diffusion and denoising process is a bit confusing. It took me a while to figure out where the noise and denoising process is applied. Clarifying that the binary transcoder is the component responsible for denoising the noise introduced in the \"Bernoulli diffusion\" section would make the flow more understandable and easier to follow."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Please refer to the summary."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces BiGR, a novel conditional image generation model that unifies generative and discriminative tasks within a single framework. This model is notable for several key advantages:\n\nUniformity: BiGR is the first model to integrate both generative and discriminative tasks, leveraging compact binary latent codes to achieve strong performance in both areas. This unification allows BiGR to handle tasks that typically require separate models.\n\nEfficiency: The model is designed to generate images quickly, making it more efficient than existing models. This efficiency is achieved without compromising the quality of the generated images.\n\nFlexibility and Scalability: BiGR is adaptable to various tasks, including zero-shot generalized tasks, showcasing its potential for a wide range of applications. The model's scalability is demonstrated through its performance across different model sizes and configurations.\n\nPerformance: Extensive experiments show that BiGR delivers decent performance in terms of generation quality and linear separability. The model's performance is evaluated using metrics like FID (Fréchet Inception Distance), and it is shown to perform well compared to other models like LlamaGen.\n\nInference Hyperparameters: The paper discusses the impact of hyperparameters such as the number of sampling iterations and diffusion timesteps on the model's performance. It is noted that larger models tend to achieve lower FID values, but with increased sample time, and that optimal performance varies with model size.\n\nComparison with Other Models: BiGR is compared against other models, including LlamaGen, across different settings involving tokenizers, training objectives, and modeling types. 
The paper highlights that while the unconditional version of the model shows better representation capabilities, the conditional version excels in generative tasks.\n\nOverall, BiGR represents a significant advancement in the field of image generation by combining generative and discriminative capabilities in a single, efficient model, with promising applications for future research and development."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Lack of Comprehensive Benchmarking: While the paper compares its model against LlamaGen and a few other settings, the scope of comparison is limited. The paper could benefit from a more extensive benchmarking against a wider range of state-of-the-art models to better establish its relative performance.\n\nSampling Strategy Issues: The paper mentions a \"nan\" issue in the sampling strategy due to logarithmic operations. Although a workaround is provided, this indicates potential instability in the model's implementation. A more robust solution to this problem would enhance the reliability of the model.\n\nLimited Exploration of Model Configurations: The paper primarily focuses on a few configurations (S0, S1, S2, S3) and does not explore a broader range of hyperparameters or architectural variations. This limits the understanding of the model's capabilities and its adaptability to different tasks or datasets.\n\nEvaluation Metrics: The paper emphasizes generative performance but does not provide a detailed analysis of other important aspects such as scalability, robustness, or efficiency. Including these metrics would provide a more holistic view of the model's strengths and weaknesses.\n\nAssumptions and Limitations: The paper acknowledges that surpassing state-of-the-art models across all metrics is not the goal, but it does not clearly outline the specific scenarios or applications where the proposed model excels. A clearer articulation of the model's intended use cases and limitations would help in understanding its practical applicability.\n\nTheoretical Justification: While empirical results are presented, the paper could strengthen its theoretical foundation by providing more in-depth explanations or proofs of why certain design choices, such as the non-deterministic binary transcoder, lead to better performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Comments in Weaknesses should be resolved."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The method is easy to follow and the paper is easy to read\n- The architecture of this model seems to work very well on specific tasks such as inpainting, outpainting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a language model based image generation/discrimination model. Using binary latent code autoencoder, the model can learn binary codes from the image representation. Llama is originally a decoder-only model but this method use it as encoder-only model. The generation of an image is conducted by sampling from the Bernoulli distribution of outputs of the model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The biggest problem of this approach is that it is unable to conduct text-to-image generation. Early stage of diffusion models were constrained to class-conditional generation but now it is hard to find models that is unable to do t2i generation. Even LlamaGen can receive various types of condition (especially text condition) since it is a decoder-only model.\n- I think that's why binary latent code has been enough to encode image representation. Even VQ-VAE inevitably suffers from loss of information because the latent variable is not continuous. But as the problem setting of this paper is limited to class conditional image generation, the amount of information is not large enough to see the malfunction of the binary code.\n- Also, I think it is not fair to directly compare with LlamaGen since it is designed to handle multiple modalities, not focusing on image generation. And also with an appropriate prompt, LlamaGen is able to conduct image discrimination task as well.\n- In conclusion, limiting the scope of the problem enabled the binary code to work well."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a novel conditional image generation model that unifies generative and discriminative tasks effectively."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024bigr,\ntitle={Bi{GR}: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Z6PSw7OL8},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce BiGR, a novel conditional image generation model using compact binary latent codes for generative training, focusing on enhancing both generative and representation capabilities. \nBiGR is the first conditional generative model that unifies generation and discrimination within the same framework. \nBiGR features a binary tokenizer, a masking modeling mechanism, and a binary transcoder for binary code prediction. \nAdditionally, we introduce a novel entropy-ordered sampling method to enable efficient image generation. \nExtensive experiments validate BiGR's superior performance in generation quality, as measured by FID-50k, and representation capabilities, as evidenced by linear-probe accuracy. \nMoreover, BiGR showcases zero-shot generalization across various vision tasks, enabling applications such as image inpainting, outpainting, editing, interpolation, and enrichment, without the need for structural modifications. Our findings suggest that BiGR unifies generative and discriminative tasks effectively, paving the way for further advancements in the field."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Image generation",
"Generative model",
"Representation learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d2b985d3f51fb18462eac1aea6d72e32bf5a1534.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ZAqAmK6BM | Improving Tabular Generative Models: Loss Functions, Benchmarks, and Iterative Objective Bayesian Approaches | main | Active | generative adversarial network;synthetic data;correlation- and distribution-aware loss function;iterative objective refinement Bayesian optimization;benchmarking framework | generative models | 3;3;5;6 | 4;4;3;4 | 2;2;2;4 | 3;3;2;3 | 4;3;2;3 | 4.25 | 3.75 | 2.5 | 2.75 | 3 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "What will the performance if using Standard Bayesian optimization rather than IORBO proposed by this paper?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The experiments are comprehensive. Hyperparameters are chosen reasonably."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces two regularization terms for improving the performance of the tabular generative model. The authors further propose to use ranking-based Bayesian Optimization to choose the hyperparameter. They finally evaluate the proposed method in Twenty tabular datasets on 10 base generative models by using TSTR, augmentation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The proposed method is heuristic. The paper does not provide an optimality or convergence guarantee of the proposed loss. These two proposed losses are reasonable for tabular data but not general enough for other types of data. The hyperparameters are chosen by the new proposed Bayesian Optimization without theoretical guarantees."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The individual contributions of the paper are good. However, my main concern is the overall theme of the paper. I am unable to determine the overall research question the paper is trying to address. Please see weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "**Originality and Quality**\n\nThe correlation- and distribution-aware loss function is new and interesting to me. I have not encountered works that display the effectiveness of enforcing correlation and high-order moments in the loss function to improve generative models. It is nice to see an improvement in existing hyperparameter tuning algorithms such as Standard Bayesian Optimization by adding an iterative refinement process.\n\n**Clarity**\n\nIndividual sections of the paper are well written.\n\n**Significance**\n\nTabular data generation is gaining traction in real-world applications such as electronic health records. This work helps bring progress to tabular data generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "- Introduced a correlation- and distribution-aware loss function designed as a regularizer for DGMs in tabular data synthesis that displays promising results\n- Introduced a hyperparameter tuning approach, IORBO, that leverages rank-based aggregation. (concerns of units\n- They introduce a benchmarking system evaluating statistical similarity, ML TSTR performance, and ML augmentation performance, with robust statistical tests."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I am struggling to find a central theme/research question the paper is trying to answer. It provides solutions from three different perspectives: 1) Loss Function Regularization: Improving generative model outputs by enforcing statistical properties (e.g., correlation, distribution); 2) Hyperparameter Tuning: Using methods like IORBO for iterative optimization; 3) Statistical Tests: Providing a framework for assessing model performance across metrics. I am unable to determine a flow to link the three ideas together/how one idea enforces the other.\n- L486: How does IORBA perform against other hyperparameter tuning methods such as [Randomised Optimization, GridSearch etc.](https://scikit-learn.org/1.5/modules/grid_search.html#tuning-the-hyper-parameters-of-an-estimator) in terms of performance? What about the computational cost for IORBA vs. SBO and other mentioned baselines, what is this tradeoff? Additionally, what are the optimized hyperparameters that you obtain from your method? Ablation studies of the aforementioned would make your case stronger.\n- In [TabSyn](https://arxiv.org/abs/2310.09656), the authors provided a comprehensive evaluation of synthetic tabular data using over five distinct evaluation metrics. Their metrics are straightforward and easy to comprehend. It will be nice to compare and justify why your metrics are more convincing and better than their proposed benchmark so that users should use your metrics instead of/in addition to TabSyn’s.\n- Privacy is also crucial in synthetic tabular generation. How does your proposed loss function affect privacy-preserving metrics such as DCR and C2ST?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. **Significance Levels and Decision-Making:**\n In Table 1, the column for significance levels presents $p$-value ranges. A more detailed description of the decision based on the test statistic (or $p$-value obtained) may be helpful in understanding the experiment since a two-sided test is concerned.\n\n2. **Distribution matching loss:**\nIt is possible for non-converging distributions to have similar moments, especially in lower orders. And, moment estimators of higher order moments introduce instability in the finite sample sense, and this instability goes up when the moment order goes up. It would be helpful if the author could justify using moments for distribution rather than the usual distance/score-based metrics for distribution similarity.\n\n\n\n**Some Suggestions:**\n ***Reordering Loss Components:***\n For clearer presentation, consider swapping the order of the two proposed loss components to explain what $\\mu$ and $\\sigma$ are before presenting them in Eq.2."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper presents an approach to enhancing Deep Generative Models (DGMs) for synthetic data generation on tabular data. Introduction of a correlation- and distribution-aware loss function, iterative objective refinement Bayesian optimization, and a detailed benchmarking framework are presented."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "**Review of \"Improving Tabular Generative Models: Loss Functions, Benchmarks, and Iterative Objective Bayesian Approaches\"**\n\nThis paper proposes several methods to enhance deep generative models (DGMs) for synthetic data generation with a particular focus on tabular data. While the work presents promising results in experiment, certain aspects need further clarification and further improvement."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **\"Moment Generating Function (MGF)\":**\nThe term \"Moment Generating Function (MGF)\" appears to be misused. The paper discusses empirical moments themselves rather than the empirical MGF $\\hat{M_X}(t)$ from which the $n$-th moments can be obtained by taking $n$-th derivatives wrt $t$ at $t=0$. [See *Casella, Statistical Inference, 1990* (pp61)]\n\n2. **Biased Estimator in Synthetic Data:**\n A biased estimator is used to calculate the standard deviation. This includes the estimator on synthetic data sampled at size $B$, which is not enough for the biased estimator to converge to the unbiased one. It would be beneficial for the paper to address or justify this choice. \n\n3. **Hyperparameter $\\lambda$:**\nHyperparameter $\\lambda$ in Eq.6 scales the $L_{\\text{distribution}}$ in a manner the same as $\\beta$ in custom losses, since $\\lambda$ is \n proportional to $L_{\\text{distribution}}$ in Eq.6. Simultaneous inclusion of $\\lambda$ and $\\beta$ in the hyperparameter search may lead to issues such as multi-collinearity for Bayesian optimization."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The added regularizers are described clearly and intuitively, with a well-defined methodology and comprehensive benchmark design. This approach encompasses various generative models and employs Bayesian optimization to identify optimal hyperparameter configurations. Consistent improvements over baseline models are demonstrated."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work introduces correlation and moment-matching loss functions to regularize the loss function of different deep generative models for tabular data. Its results show that with proper selection of hyperparameters, its approach consistently improves the baselines. A Bayesian optimization procedure is introduced for hyperparameter tuning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "What I miss from the paper is a discussion on how to tune the method in the case of data heterogeneity and its performance and robustness in missing data scenarios. How do the regularizers formulate in the case of counting distributions (e.g., Poisson likelihood) or ordinal variables? Do they consistently improve the results in the case of large fractions of missing entries in the database? I set my score to 6 since I feel that without a proper discussion on these aspects, the impact of the paper is limited."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel loss function and optimization method that significantly improve the ability of deep generative models to create high-quality synthetic tabular data for better machine learning performance."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024improving,\ntitle={Improving Tabular Generative Models: Loss Functions, Benchmarks, and Iterative Objective Bayesian Approaches},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1ZAqAmK6BM},\nnote={under review}\n}"
},
"abstract": {
"value": "In many applications of deep learning (DL), more data is essential to enhance model performance and generalization. A promising avenue to increase data availability is to use deep generative models (DGMs) to create synthetic data. However, existing DGMs struggle to capture the complexities of real-world tabular data, which often contain diverse variable types with potential imbalances and dependencies. To address these challenges, we propose a novel correlation- and distribution-aware loss function that works as a regularizer for DGMs. Additionally, to address the limitations of standard Bayesian optimization (SBO), which struggles to aggregate multiple metrics with different units--resulting in unreliable direct averaging and sub-optimal decisions--we introduce iterative objective refinement Bayesian optimization (IORBO) to rank metrics to enable more meaningful comparisons across diverse objectives. To ensure a rigorous evaluation, we establish a comprehensive benchmarking framework using twenty real-world datasets along with ten established tabular DGM baselines. The proposed loss function demonstrates statistically significant improvements over existing methods in capturing the true data distribution, significantly enhancing the quality of synthetic data generated with DGMs. The benchmarking framework shows that the enhanced synthetic data quality leads to improved performance in downstream DGMs tasks. Further, the proposed IORBO outperformed the SBO with mean aggregation in terms of win rate and outperformed the SBO with median aggregation overall."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"generative adversarial network",
"synthetic data",
"correlation- and distribution-aware loss function",
"iterative objective refinement Bayesian optimization",
"benchmarking framework"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9cd1a0cd15dd9853b20aebf9d9629538320babca.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Improving Tabular Generative Models: Loss Functions, Benchmarks, and Iterative Objective Bayesian Approaches"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1aF2D2CPHi | Open-Vocabulary Customization from CLIP via Data-Free Knowledge Distillation | main | Active | Data-Free Learning;CLIP Model;Customization | applications to computer vision, audio, language, and other modalities | 5;5;6;10 | 4;3;4;3 | 3;2;3;4 | 3;2;3;4 | 3;2;3;3 | 6.5 | 3.5 | 3 | 3 | 2.75 | -0.485071 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could the authors elaborate on potential methods to mitigate noise introduced by style dictionary diversification, especially in fine-grained tasks?\n2. Are there specific aspects of CLIP’s architecture that are essential to this approach, or could it be adapted to other VLM architectures?\n3. In Figure 6, the style differences are not very apparent—could the authors clarify how style diversification manifests visually?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper provides a meaningful contribution to open-vocabulary customization for VLMs, especially under data-free constraints. It addresses practical issues in adapting CLIP without original data, proposing a unique approach to handle limitations posed by BatchNorm layers. Techniques like style dictionary diversification and meta knowledge distillation are well-conceived, though the performance improvements are modest. While the theoretical analysis is detailed, the practical gains might benefit from further validation. Overall, the paper offers useful insights but may require more refinement and broader evaluation to strengthen its impact."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel approach for open-vocabulary customization in vision-language models like CLIP, utilizing Data-Free Knowledge Distillation. The authors address limitations of existing DFKD methods, which depend heavily on BatchNorm layers incompatible with CLIP. Their method incorporates image-text matching to invert a surrogate dataset, enabling text- and image-based customization. Key innovations include style dictionary diversification, class consistency maintaining, and meta knowledge distillation to enhance the generalizability of a student model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper's writing could be improved for clarity, as the relevance of BatchNorm (BN) statistics to the later-introduced contrastive learning method is somewhat confusing. The presentation would benefit from clearer contextualization and integration with recent advancements in VLM customization to help situate the contributions more effectively. While the proposed techniques are valuable, additional clarity around specific limitations—such as the potential for style dictionary diversification to introduce noise—could strengthen the paper. Additionally, the reliance on the CLIP model may limit generalizability across other VLM architectures. Expanding future work to include broader applications of the method across diverse vision-language architectures would help validate its adaptability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why did you use VQGAN? Will the generated data have enough diversity? What other architectures did you consider?"
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "This paper presents a novel approach to open-vocabulary customization of Vision-Language Models (VLMs) like CLIP. The authors identify the limitations of existing Data-Free Knowledge Distillation (DFKD) methods and propose a novel solution to address these limitations.\n\nThe paper is well-written and easy to follow. The authors provide a clear motivation for their work and a concise overview of their proposed technique. The introduction effectively sets the stage for the paper, with a clear articulation of the research gap and the proposed solution.\n\nThe authors provide a comprehensive analysis of related work, demonstrating the novelty of their approach. The experimental findings are compelling, particularly the observation that CLIP's BN layers tend to favor faces, highlighting their unsuitability for DFKD.\n\nThe proposed framework's ability to handle both text-based and image-based customization enhances its applicability and significance. The use of instance-level contrastive loss for increased diversity is well-justified, both in practice and through theoretical analysis (Theorem 4.1).\n\nThe experimental setup and training details are described thoroughly, which is commendable. The choice of the ImageNet dataset is appropriate, given its scale and diversity. The result analysis is comprehensive and insightful, with the authors exploring various aspects of their approach, including the unique \"warm-up\" strategy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper shows that the existing works that use DFKD methods using CLIP do not perform well. This is blamed on their use of BatchNorm that biases towards faces. The paper introduces an alternate technique for performing DFKD well using CLIP - a text-image matching technique."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While Figure 1 provides a good overview of the framework, consider replacing the \"frozen\" and \"not frozen\" symbols with more intuitive icons, such as a lock and an unlocked lock. Additionally, ensure the frozen symbol is clearly visible in the blue boxes, perhaps by changing its color.\n2. Tables 2, 4, and 5 don’t have any units for the numbers or any text mentioning the metric used for those results. Please consider adding metrics and units."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My concern mainly lies in technique novelty. Can you summarize your contribution again based on my concern?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-\tThis paper is well written.\n-\tThis paper is well motivated to study DFKD for vision-language foundation models.\n-\tExperiments show the effectiveness of the proposed framework."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper delves into Data-Free Knowledge Distillation for CLIP so as to distill a compact student model with customized zero-shot or few-shot image classification capacity. Specifically, the proposed framework is composed of surrogate dataset generation and knowledge distillation. For the former component, this paper uses model inversion and style dictionary diversification based on the framework of VQGAN-CLIP. For the latter component, this paper designs a meta method for knowledge distillation. Experiments validate the effectiveness of the proposed framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- My concern mainly lies in technique novelty. In Fig.1, the proposed framework is composed of dataset inversion process and knowledge distillation process. However, in dataset inversion process, the proposed method is mainly similar to [1] and [2], especially [2], which is also a related work to study DFKD in CLIP. In knowledge distillation, the proposed method is mainly similar to [3], which uses a MAML-like meta learning to enhance cross-domain generalization capacity. \n\n[1] VQGAN-CLIP: Open domain image generation and editing with natural language guidance. ECCV 2022.\n\n[2] Distilling vision-language foundation models: A data-free approach via prompt diversification. ACMMM 2023.\n\n[3] Learning to Generalize: Meta-Learning for Domain Generalization. AAAI 2018."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weakness section. I would like to increase my rating, if the proper justification of my questions will be given."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The method eables model customization without accessing the original data, preserving privacy of the users.\n2. The proposed approach captures invariant representations through style diversification and meta knowledge distillation, which is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the customization of CLIP for specific user-defined tasks without using original data. The proposed approach involves generation of synthetic images using VQGAN in different styles to increase diversity, while following a data-free meta learning based knowledge distillation technique to adapt a lioghtweight student encoder from teacher CLIP. It aims to overcome the reliance on BatchNorm layers, which hinder customization for ViT variants of CLIP model. The authors have shown extensive experiments with significant improvement of performance of the proposed method compared to CLIP."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. As CLIP is already a very good domain-aware model, what is the motivation behind generating style tranferred images? The diversification could be better and challenging with generation of very fine-grained realistic images.\n\n2. Can pretrained diffusion models be used instead of VQGAN, as it can generate more diverse datasets very easily? What are the pros and cons of using a diffusion model?\n\n3. Why meta learning based knowledge distillation over traditional supervised learning? Any theorectical reason?\n\n experiments of distillation techniques like TinyCLIP [1], CLIP-KD [2], LP-CLIP [3] are likely to be preferable.\n\n\n [1] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance, ICCV 2023.\n\n [2] CLIP-KD: An Empirical Study of CLIP Model Distillation, CVPR 2024.\n\n [3] Improving CLIP Robustness with Knowledge Distillation and Self-Training"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Could we distill models from CLIP without data to meet customized tasks?"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024openvocabulary,\ntitle={Open-Vocabulary Customization from {CLIP} via Data-Free Knowledge Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1aF2D2CPHi},\nnote={under review}\n}"
},
"abstract": {
"value": "Vision-language models such as CLIP have demonstrated strong zero-shot performance, but their considerable size and inefficient inference limit customizable deployment for users. While knowledge distillation is a solution, it still requires the original data, which is not always available due to copyrights and privacy concerns. For many users seeking open-vocabulary customization, Data-Free Knowledge Distillation (DFKD) emerges as a promising direction. Upon rethinking DFKD, we find that existing methods fail on CLIP due to their heavy reliance on BatchNorm layers, which are unexpectedly unusable in CLIP. Based on our findings, we adopt image-text matching to achieve DFKD for CLIP, enabling customization based on arbitrary class texts. This involves (i) inversing a surrogate dataset from CLIP based on text prompts; and (ii) distilling a student model from CLIP using the surrogate dataset. Specifically, we introduce style dictionary diversification to enhance the diversity of synthetic images. To prevent uncontrollable semantics introduced by diversification, we propose a class consistency maintaining strategy to ensure the consistency of synthetic images. Based on synthetic images with various styles, we further propose meta knowledge distillation to train the student model with good generalization ability. Moreover, we introduce a simple yet effective method to enable customization based on few example images. Comprehensive experiments showcase the superiority of our approach across twelve customized tasks, achieving a 9.33\\% improvement compared to existing DFKD methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Data-Free Learning",
"CLIP Model",
"Customization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5344d1ee8334909cd8ec9a48790f795aecc1f4f4.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Open-Vocabulary Customization from CLIP via Data-Free Knowledge Distillation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1auB9yeB9a | Composing Global Optimizers to Reasoning Tasks via Algebraic Objects in Neural Nets | main | Active | landscape analysis;modular addition; gradient dynamics; reasoning; symmetry; representation learning | interpretability and explainable AI | 3;5;6 | 2;2;2 | 3;3;3 | 4;3;3 | 1;3;2 | 4.666667 | 2 | 3 | 3.333333 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the Weaknesses above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The work provides theoretical insights into neural network learning mechanisms for group operations. The discovery of algebraic structures (semi-ring) in the weight space and monomial potentials in the loss function offers a fresh perspective on how networks learn structured tasks. \n- There's strong empirical validation of the theoretical results. As shown in Table 2, around 95% of gradient descent solutions exactly match their theoretical constructions, with very small factorization errors. This provides concrete evidence that the theoretical framework accurately captures the learning behavior.\n- The analysis of training dynamics (Theorem 5 and 6) provides insights into why networks prefer low-order Fourier solutions over perfect memorization. The paper shows that gradient descent with weight decay naturally favors simpler solutions due to topological connectivity between different-order solutions, which is an interesting finding."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces CoGO (Composing Global Optimizers), a theoretical framework for analyzing how 2-layer neural networks learn group operations with quadratic activation and L2 loss. The key insight is discovering a semi-ring algebraic structure in the solution space that allows the construction of global optimizers by composing partial solutions. The authors prove that the weight space has a semi-ring structure and that the loss function consists of monomial potentials with ring homomorphism properties. They also analyze training dynamics to explain why networks prefer simpler Fourier-based solutions over perfect memorization. The theoretical predictions align well with empirical results, showing that about 95% of gradient descent solutions match their constructed solutions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- My major concern is that the loss decomposition approach (Theorem 1) seems limited to scenarios where we already understand the underlying group structure of the data. The paper doesn't address how this framework might generalize to real-world scenarios where the data's algebraic structure is unknown or unclear. This limits the practical applicability of the theoretical insights, e.g., can we decompose the next token prediction loss easily?\n- While the training dynamics analysis (particularly around Fourier feature learning and Theorem 5) is interesting, [1] also introduced that the NN prefers to learn Fourier features by gradient descent. Can the author give a more detailed comparison of connections and differences to [1]? The paper could better contextualize its findings with existing work by providing a more detailed comparison of the mechanisms and insights, which would strengthen the paper's contribution. \n- The paper mentions connections to grokking in the Conclusion but doesn't fully explore this direction. It would be good to discuss more, e.g., why there is a gap between train loss and test loss in the beginning under the paper’s analysis framework. Given that grokking is a significant phenomenon in neural network learning, especially for arithmetic tasks, a more detailed discussion of how CoGO might explain or relate to grokking would enhance the paper's impact.\n\n[1] Depen Morwani, Benjamin L Edelman, Costin-Andrei Oncescu, Rosie Zhao, and Sham Kakade. Feature emergence via margin maximization: case studies in algebraic tasks. ICLR 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Line 140: Should mention l[i] is the embedding of the true label for the i-th data point.\n- Line 145: I guess l[i] should be the in d-dimension, ie, the embedding of the element g_1[i] g_2[i], rather than the element itself. \n- Line 145: How is g_1[i] g_2[i] embedded into l[i]? g_1[i] is using U_{G_1} and g_2[i] is using U_{G_2}, while it's unclear how l[i] is obtained.\n- Experiment: how to generate the training data (ie how g_1[i] and g_2[i] are sampled)? The data distribution can significantly impact the solution reached by training, so it needs to be specified for interpreting the empirical result that most solutions reached in experiments match the theoretical construction."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The work provided a new angle on analyzing the global optimizers for the considered algebraic problem. It analyzed algebraic properties of the weight space and the loss, and then gave sufficient conditions for the global optimizers. \n- The study is quite solid and thorough. It provided detailed characterization of the sufficient condition, and also gave a systematic approach to construct global optimizers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work considered 2-layer neural networks with quadratic activation and L2 loss on learning group multiplication (an extension of modular addition). It showed that global optimizers can be constructed algebraically from small partial solutions that are optimal only for parts of the loss, due to (1) a semi-ring structure over the weights space and (2) L2 loss being a function of monomial potentials allowing composition of partial solutions into global ones. (2) is shown by representing the network weights and then the loss function using Fourier bases. \n\nIt then proposed a systematic approach using the above algebraic structure to construct global optimizers. It used this theoretical framework named CoGO to construct two distinct types of Fourier-based global optimizers of per-frequency order 4 and 6, and a global optimizer of order that correspond to perfect memorization. It empirically showed that most solutions via gradient descent match such constructions. It also analyzed the gradient dynamics, showing that it favors simpler solutions under weight decay, and that overparameterization asymptotically decouples the dynamics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The theoretical setup is quite specific: quadratic activation and learning group multiplication. While the analysis is interesting, it is unclear if the results can provide insights into more general settings, in particular those more related to practical scenarios. The work can be strengthened if it can provide some empirical study on more realistic datasets verifying the insights (ie composition structure of the solutions), or provide generalization to more general settings (at least discussion about potential generalization and why). \n- The global optimizers constructed by CoGO is only a subset of all possible global optimizers, so the approach only partially characterizes the problem solutions. This weakens the contribution a bit, though the work does provide empirical evidence that most practically obtained solutions are in their construction. \n- The presentation can be improved. See several comments below."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "What is l[i] in (1)?\nIs it important in Section 4.1 that you are looking at solutions in a weight space, or can they just be any fixing of parameters?\nDoesn't the loss function itself change when you change the shapes of the parameters?\n\nClarify the relationship between Input and Output paragraph with what follows. \nBe consistent with subscripts with commas or multindices. I'm confused now if they have different meanings. \nClarify the construction alluded to at the beginning of 5.1. \nThe relationship between weights, w, z, and r should be better clarified. This seems to me like a lot of notation and I don't have the intuition to understand the claims. \nPlease also explain the essence of the constructions of solutions in 5.2. What is really \"going on\"?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Studies a simple and interesting class of neural networks. \nProves many nice properties of a new mathematical space.\nThere is probably a nice interpretation of the construction of the solutions in Section 5.2 (but a weakness is that I don't see this expressed in a simple way). Interesting results about behavior of gradient descent in Section 6."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The work analyzes the 2-layer network training dynamics when learning Abelian group multiplication. Gradient descent matches an analytical solution for optimality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Numerous grammatical errors (\"which are ring homomorphism\", \"goes to infinite\", \"is called semi-ring\"...)\nOn the whole, the presentation of technical results is not clear enough to get a good picture of what is happening mathematically."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Semi-ring structure exists in 2-layer neural nets for reasoning tasks on Abelian group (e.g., modular addition), trained with L2 loss, which enables constructing global solutions analytically from non-optimal ones instead of gradient descent."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024composing,\ntitle={Composing Global Optimizers to Reasoning Tasks via Algebraic Objects in Neural Nets},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1auB9yeB9a},\nnote={under review}\n}"
},
"abstract": {
"value": "We prove rich algebraic structures of the solution space for 2-layer neural networks with quadratic activation and $L_2$ loss, trained on reasoning tasks in Abelian group (e.g., modular addition). Such a rich structure enables \\emph{analytical} construction of global optimal solutions from partial solutions that only satisfy part of the loss, despite its high nonlinearity. We coin the framework as \\ours{} (\\emph{\\underline{Co}mposing \\underline{G}lobal \\underline{O}ptimizers}). Specifically, we show that the weight space over different numbers of hidden nodes of the 2-layer network is equipped with a semi-ring algebraic structure, and the loss function to be optimized consists of \\emph{monomial potentials}, which are ring homomorphism, allowing partial solutions to be composed into global ones by ring addition and multiplication. Our experiments show that around $95\\%$ of the solutions obtained by gradient descent match exactly our theoretical constructions. Although the global optimizers constructed only required a small number of hidden nodes, our analysis on gradient dynamics shows that overparameterization asymptotically decouples training dynamics and is beneficial. We further show that training dynamics favors simpler solutions under weight decay, and thus high-order global optimizers such as perfect memorization are unfavorable."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"landscape analysis",
"modular addition; gradient dynamics; reasoning; symmetry; representation learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ac5155b9225cdb2886a75e3edb2fa0d802114b64.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Composing Global Optimizers to Reasoning Tasks via Algebraic Objects in Neural Nets"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1c73HCZpbo | REVEAL-IT: REinforcement learning with Visibility of Evolving Agent poLicy for InTerpretability | main | Active | Reinforcement Learning;Interpretability | reinforcement learning | 3;3;5;5 | 2;4;3;4 | 2;2;2;2 | 2;1;2;2 | 1;1;2;2 | 4 | 3.25 | 2 | 1.75 | 1.5 | 0.301511 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- in Section 5, what is the precise format of the explanation the authors intend to provide? Is the optimal training task sequence itself considered the explanation, as suggested in Section 5.2 (lines 429-471)?\n\n- what is the objective of the controller, and what purpose does the control policy serve? This remains unexplained.\n\n- in lines 100-102, does \"updating the policy\" equate to \"updating the agent’s learning process\"? Could the authors clarify this distinction?\n\n- could the authors elaborate on the terms “nodes linked to significant updates” and “activated nodes during the test” in Section 4.2, specifically how their correlation is analyzed?\n\n- where is Figure 3 referenced in the main text?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "the framework provides a structured approach for interpreting the learning progress of agents in long-horizon tasks using a GNN-based model."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents an interpretability framework to understand an agent’s learning process in complex tasks (e.g., ALFWorld) through a GNN-based explainer. This method examines policy updates across predefined subtasks and highlights critical sections of the policy network."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- the paper's clarity could be improved. Certain terms are referenced repeatedly early in the text (e.g., introduction) but are defined too late or not at all—examples include \"node-link diagram,\" \"policy structure,\" \"structure training tasks,\" and \"problem\" (line 91).\n- the three claimed benefits of \"understanding the agent’s performance post-training from the learning process of the policy and the sequences of training tasks\" are difficult to grasp. Benefit 1 is too abstract, and lacks contextual detail. In contrast, Benefit 3 includes broad statements unsupported by references (e.g., \"which can not deal with the big and complex problems that can not be seen in the real world\").\n- several key concepts lack references, including SCM and counterfactual methods (lines 95-96), MDP (line 167), and the node-link diagram representation (line 162).\n- the paper motivates the question \"why an agent can succeed or fail in a task\" but lacks examples or case studies that would provide a unique takeaway on RL agents' interpretability.\n- section 3’s \"Structural visualization of the policy\" is hard to understand. Goals are listed, but it is unclear how they are grounded or justified. For instance, it is mentioned that the policy visualization should use a node-link diagram to depict network architecture, but the rationale behind this choice is not explained. Additionally, it is unclear how this visualization allows users to judge the network’s robustness to translational and rotational variances or ambiguous inputs. 
The \"gap\" between visualization requirements and actual results remains unaddressed.\n- in Figure 1, the authors introduce the GNN-explainer as part of the proposed framework, but Section 4.2 later introduces a GNN-predictor (also in Algorithm 1) without clarifying where it fits within Figure 1, creating confusion.\n\n- the related work in explainable reinforcement learning (XRL) is not up-to-date, lacking recent advances in XRL.\n- given that this work offers neuron-level visualization, it would benefit from referencing related literature in mechanistic interpretability (which is for understanding the inner workings of neural networks).\n- the claim that prior explanation algorithms cannot model complex behaviours (lines 44-47) lacks evidence. Although (Puiutta & Veith, 2020) is cited to support this claim, it is a survey paper, which weakens the argument.\n\n- how do the authors ensure there is any semantical interpretation w.r.t. part of the policy weights (so that humans can understand) when using GNN-explainer to visualise the policy (section 4.2)? in other words, how could users understand the visualised section of the policy? how could users link the \"part of the edges (updated weights)\" to the success of the RL agent?\n\n- the GNN-based explainer is suggested to provide an understanding of each subtask’s value in training, yet this explanation seems limited to high-level progress indicators rather than deep rationales behind actions. This contradicts some of the authors’ statements like \"a proficient explanation enhances understanding of the agent’s actions and helps improve performance\" (lines 60-62). Moreover, the reliance on predefined subtasks limits the framework's applicability in real-world scenarios.\n\n- step 1 in Section 4.2 is difficult to follow, particularly the authors' claim that variability does not affect GNN training. 
Additionally, the connection between \"nodes linked to significant updates\" and \"activated nodes during the test\" remains unclear. The assertion that \"REVEAL-IT is distinguished from other explainable RL methods by its independence from external data and structure\" is also debatable, as saliency maps do not impose environment or algorithm-specific constraints either.\n\n- in Algorithm 1, it is unclear how the GNN optimizes the training task sequence; the sequence sampling appears to be based only on $P$ (see line 7 in Algorithm 1).\n\n- a brief comparison of REVEAL-IT with baselines is missing, which is important for understanding the reasons behind its performance advantages—whether due to improved planning steps or better-learned low-level behaviours.\n\n- figure 4, relevant to the discussion in Section 5.2, is placed in the appendix. Moving it (or parts of it) to the main text would improve readability and flow.\n\n- the first question in Section 5 (\"learning process of an RL agent\") does not appear to be fully answered. It’s unclear where this process is visualized—Figure 2 or Figure 3. How could the nodes in Figure 2 be interpretable for users, what are the verbs in Figure 3 (are they subtasks?) and which final task is Figure 3 about?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "What is the GNN Explainer’s training objective? If it is only trained to preserve the predictor’s accuracy, then it could just output the full graph. \n\nWhat is the definition of “active nodes” in Step 1 of section 4.2? \n\nHow exactly does the GNN explainer choose the distribution of subtasks to train on? That does not seem to be a direct byproduct of classifying the most critical weight updates. And how does it help on the OpenAI gym environments which do not involve any subtasks? \n\nThe conclusion states that REVEAL-IT can’t adapt to multi-modal challenges. Why would it not be able to handle non-visual modalities? It seems like it can be applied wherever a neural network policy network is used, which does not seem to be constrained to image inputs. \n\n\nIn Figure 2, what do the gray shaded regions correspond to? “Thicker connections indicate larger updates in weight amplitude (selected by GNN explainer)” - does this mean that thicker connections indicate weights that were both selected by GNN explainer, and had large updates in amplitude? How were the portions of the policy that are common to several sub-tasks identified? “as the training progresses toward the latter stage, there is a greater overlap between the region with a larger update amplitude and the region triggered by evaluation.” - what does “the region triggered by evaluation” refer to?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "REVEAL-IT addresses an important challenge in deep RL. \n\nThe method is broadly applicable, as it is agnostic to the environment or (online) RL algorithm.\n\nThe performance appears to be quite impressive for Alfworld."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a framework for interpreting the training process of RL algorithms. Policy updates are visualized with node-link graphs, where the nodes are the neurons in the policy network and the edges are the weights that were updated. A GNN predictor is then trained to predict the RL algorithm’s learning progress, defined as the increase in return on a task after one policy update. A GNN explainer is trained to find which updated weights are most critical for the success of the RL agent, by finding the subset of weights that preserves the GNN predictor’s output given only that subset. The authors demonstrate that REVEAL-IT's explanations can be used to improve training efficiency and performance of various RL algorithms in ALFWorld and OpenAI gym environments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The writing is very difficult to follow due to excessive verbosity, vague language and grammar issues e.g. “you can record and correspond to the changes in the value of a specific part of the weights” (line 186) or “the understanding the critical nodes in the RL agent’s evaluation is a crucial pre-requisite for determining the significance of weights updating.” (line 220) The authors should review the paper for conciseness and grammatical accuracy\n\nThe GNN Explainer does not seem to provide much human-interpretability. Figure 2: “we will observe that the sections with more significant policy updates will undergo modifications” seems to be a trivial observation rather than something illuminating. It is not uncommon for deep RL models to have millions of weights, so the ability to highlight a subgraph of most important weight updates would still leave the user with far too many to interpret. Could the authors provide concrete examples of helpful insights gained from the GNN Explainer? \n\nResults tables are missing standard deviations; particularly for Table 2 it is unclear whether the improvements are significant. \n\nThe link to view the project on line 311 is broken. \n\nSome figure references are broken, e.g. the distribution of training subtasks is figure 3 but referenced as figure 4 on line 430"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I listed the questions and suggestions together with weaknesses in the above section."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- **[Motivation]**: The motivation is generally sound; learning from policy behavior appears to be a promising approach for developing interpretable and generalizable policies in complex environments.\n\n- **[Empirical Evaluation]**: The empirical evaluation is relatively comprehensive, and the reported results show the potential of this approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a GNN-based explainer to identify critical nodes or components for reinforcement learning (RL) tasks, with the goal of improving interpretability and enhancing the learning efficiency of RL agents. The approach involves visualizing a node graph that represents the RL training process for sub-tasks, and training the GNN-based explainer to approximate the true learning process in order to identify important components (key weights or edges) for the tasks. The GNN explainer is then used to guide policy learning. Results show improvements over standard RL models and language-based approaches (tested on ALFworld and other RL benchmarks).\n\nOverall, the paper presents an interesting direction by learning critical components across multiple RL tasks through policy network responses. However, some technical aspects of the method are unclear and could benefit from further justification and improvement. I have outlined specific questions and concerns below and will give a borderline reject in this initial review."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**[About Problem Definition, Methodology, and Experiments:]**\n- 1. The authors assert that variability in learning across models will not pose an issue since the learned explainer is model- or task-specific. However, for applications in multi-task learning, pre-training, or other forms of generalization, this variability is crucial and challenging. For instance, training the same network multiple times may yield high variance in the training process data due to permutation invariance. Empirical evaluation or theoretical justification for this would be useful.\n\n- 2. To ensure the framework is generalizable and learns universal, principled representations, it would be beneficial to further explore the alignment between the learned structural information and the actual policies or concepts, either empirically or theoretically. Approaches could include using sparse autoencoders (potentially with larger models) [1] or examining the alignment between individual components and their corresponding concepts, modularities, interactions, and causal relations [2-5].\n\n- 3. Building on point 2, utilizing these representations could facilitate compositional and hierarchical structures in policy adaptation and generalization. Including evaluations that focus on different levels of generalization would be uesful.\n\n- 4. The impact of network size on the results should be investigated through ablation studies. If the network size is small, do the same phenomena observed in Figure 2 still occur?\n\n- 5. What are the primary benefits of using GNNs for contribution analysis of each node or weight? Why not directly use magnitude, partial derivatives, or conditional mutual information to assess the importance of each weight?\n\n**[About Clarity]**\n\n- 1. 
It would be helpful to list all objective functions in a separate subsection, particularly the objectives for the GNN predictor and explainers, along with an explanation of how guidance information is provided for policy updates.\n\n- 2. In line 115, the process is mentioned as being similar to a POMDP; please formulate this for clarity.\n\n- 3. There are some typos to address, such as in line 11 of Algorithm 1—should $\\pi_0$ be $\\pi_t$? Also, in line 432, figure 4 should likely be referenced as figure 3 instead.\n\n**Others**: As a side note, many causal RL works focus on learning world models, akin to a subgroup of model-based RL with interventions, rather than behaviors or policies/reward structures, which differ from the goals of this paper. The authors mention their inability to handle complicated tasks, but a more justified statement regarding this limitation should be provided.\n\n\n\n\n[1] Gao, Leo, et al. \"Scaling and evaluating sparse autoencoders.\" arXiv preprint arXiv:2406.04093 (2024).\n\n[2] Marks, Samuel, et al. \"Sparse feature circuits: Discovering and editing interpretable causal graphs in language models.\" arXiv preprint arXiv:2403.19647 (2024).\n\n[3] Gandikota, Rohit, et al. \"Erasing Conceptual Knowledge from Language Models.\" arXiv preprint arXiv:2410.02760 (2024).\n\n[4] Geiger, Atticus, et al. \"Causal abstraction: A theoretical foundation for mechanistic interpretability.\" Preprint (2024).\n\n[5] Lippe, Phillip, et al. \"Biscuit: Causal representation learning from binary interactions.\" Uncertainty in Artificial Intelligence. PMLR, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "In general the authors need to be clearer about the system they have developed, and if curriculum learning is indeed part of the system, it should be discussed more thoroughly and introduced early on.\n\nAlso, very little is said about the Gym tasks, so it is difficult to understand what went on in those experiments, particularly as the standard environments do not have subtasks."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors introduce a new framework for RL interpretability which can provide better insights than other methods. The method also includes a curriculum learning component, which produces very strong results on the ALFWorld environment. The authors do also spend some time analysing the results from their interpretability framework, in order to showcase its capabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce an interpretable RL agent, in the context of environments with sub-tasks. The algorithm uses a GNN-based approach to visualise the updates to the policy, and also uses another GNN to implement curriculum learning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors provide a valid criticism of more generic interpretability methods e.g. post-hoc methods like saliency maps. However, this work assumes (unless I am wrong - see below) the environment provides a set of subtasks that can be trained on, which is a large assumption. Therefore the generality of this method is somewhat limited. To what extent do subtasks need to be provided/can be inferred?\n\nThe methodology is a bit unclear, as the focus appears to be on interpretability, and then halfway through the paper the authors introduce a GNN predictor and curriculum learning. Curriculum learning is not discussed at all in the Related Works section. And yet the authors show that their method significantly outperforms other methods in ALFWorld (Table 1), so it is clear that this is not just about interpretability. If this is the case, then the authors should be mentioning this in the abstract and from the introduction.\n\nI am also under the impression that environment subtasks are needed for REVEAL-IT, but the authors perform experiments on OpenAI Gym MuJoCo environments, which don't have them? Table 2 indicates that adding REVEAL-IT to existing methods improves their performance, but in Table 1 the authors present REVEAL-IT as its own algorithm, so once again it is unclear what is going on here. Is REVEAL-IT standalone or an addition to existing algorithms?\n\nThe paper should be checked for spelling mistakes, e.g., \"Strucutral\" on page 4."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024revealit,\ntitle={{REVEAL}-{IT}: {RE}inforcement learning with Visibility of Evolving Agent poLicy for InTerpretability},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1c73HCZpbo},\nnote={under review}\n}"
},
"abstract": {
"value": "Understanding the agent's learning process, particularly the factors that contribute to its success or failure post-training, is crucial for comprehending the rationale behind the agent's decision-making process. Prior methods clarify the learning process by creating a structural causal model (SCM) or visually representing the distribution of value functions. Nevertheless, these approaches have constraints as they exclusively function in 2D-environments or with uncomplicated transition dynamics. Understanding the agent's learning process in complicated environments or tasks is more challenging. In this paper, we propose REVEAL-IT, a novel framework for explaining the learning process of an agent in complex environments. Initially, we visualize the policy structure and the agent's learning process for various training tasks. By visualizing these findings, we can understand how much a particular training task or stage affects the agent's performance in the test. Then, a GNN-based explainer learns to highlight the most important section of the policy, providing a more clear and robust explanation of the agent's learning process. The experiments demonstrate that explanations derived from this framework can effectively help optimize the training tasks, resulting in improved learning efficiency and final performance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Reinforcement Learning",
"Interpretability"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/75eeb1a99f6500eb89ef27865f2bd896a4c35956.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "REVEAL-IT: REinforcement learning with Visibility of Evolving Agent poLicy for InTerpretability"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1cM0yQe3pO | Variational Rectified Flow Matching | main | Active | Flow Matching;Diffusion Model;Generative Model | generative models | 5;5;5;5 | 4;5;4;2 | 3;3;2;2 | 3;2;2;2 | 2;2;3;3 | 5 | 3.75 | 2.5 | 2.25 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- I would like the authors to respond to the points I raised as concerns regarding the above weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is well-structured and easy to read.\n- The proposed method integrates VAE and flow matching in a straightforward manner, offering novelty in its ability to learn vector fields with uncertainty. Furthermore, the high performance on MNIST and CIFAR-10 datasets suggests that the hypotheses and approaches of this study are reasonably valid."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel method using variational inference to address the ambiguity of velocity vector fields that classic rectified flow matching fails to capture. By introducing latent variables and modeling ambiguity through a mixture model of velocity vector fields, the method enables more accurate data distribution capture and efficient integration. Experimental results demonstrate that the proposed approach achieves comparable performance to existing methods with fewer steps on synthetic data and image datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed method in this paper introduces latent variables and their inference model, enabling the capture of overlapping vector fields when they occur. However, it is necessary to clarify the setting that assumes such overlapping vector fields. The overlap and ambiguity in question here initially seemed to imply that while the flow from a specific $x_0$ to $x_1$ is uniquely determined, there exist different vector fields at a particular time and spatial location that may intersect. However, during inference with the proposed method, a data point $x_0$ is sampled from the source distribution, followed by the sampling of a latent variable, which is then used as the initial value to integrate an ODE based on a vector field determined by it. This implies that there is uncertainty in the direction from the initial value $x_0$, meaning there is an assumption that it could lead to a different $x_1$. Is this setting reasonable? The uncertainty involving deterministic flows crossing and the possibility of tracing different flows from the initial value (i.e., the $x_0$ and $x_1$ pairings are not unique) appear to be mixed. The authors should clearly distinguish between these and clarify which aspect they are aiming to address.\n- Considering the above points, while the proposed method may indeed enable faster transitions to the target due to the learned flow being linear even when vector fields overlap, during inference, $z$ is sampled from the prior and thus is not determined solely by $x_0$. As a result, the model could reach a different $x_1$, which may not be desirable from the perspective of two-sided conditioning flow matching. \n- The proposed method requires an inference model and employs separate encoders for each of $x_0,x_1$, and $x_t$ with the same structure as the encoder in $v_\\theta$. 
This implies a significantly larger number of learnable parameters compared to existing models, and although the encoders are not used during inference, it is not entirely fair in terms of parameter count relative to previous research (even if the speed remains similar). Therefore, it would be necessary to evaluate the impact of the size of the inference model’s encoders by modifying their size and investigating how it affects performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Refer to the Weaknesses section for my concerns and questions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The strengths of the paper are as follows:\n\n- Clear, easy-to-follow presentation with strong empirical performance compared to baseline methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel framework, Variational Rectified Flow Matching, which addresses the limitations of conventional Rectified Flow (RF) methods in capturing ambiguity in velocity distributions. By incorporating an additional latent variable z drawn from a Gaussian prior, the framework models multiple modes of ambiguity. An encoder is used to derive the posterior distribution p(v∣xt,t,z) at a specific sample xt and time t. This approach is claimed to better capture ambiguity and improve the empirical performance of diffusion model generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The weaknesses of the paper are listed below:\n- The motivation of the paper is unclear, particularly in the Introduction. The statement at line 73, “Importantly, variational rectified flow matching differs in that it enables modeling ambiguity in the data-space-time-space domain, i.e., the goal is a model where flow trajectories can intersect,” does not clarify how allowing flow trajectories to intersect resolves the ambiguity problem.\n- As I understand, using an additional latent variable z when modeling the velocity at (xt,t) can partially address the ambiguity problem, as different values of z capture different modes or sources of variation in the velocity distribution. However, VAEs are known to sometimes experience mode collapse, where the same high-density velocities may be generated from multiple modes of z. How does the proposed method handle this issue? To further address this concern, I suggest including a “diversity metric” in the experimental protocol to measure the variety of generated samples. \n- Furthermore, I believe RF can address the ambiguity problem by performing multiple rectifications. When training stabilizes, the optimal velocity at each sample xt at time t becomes unique, eliminating ambiguity. Even without rectification, existing methods such as OT-FM can mitigate ambiguity by improving the coupling between x0 and x1, resulting in less ambiguous directions at (xt, t). Are there any theoretical or methodological benefits of the proposed approach compared to these methods? Without such justification, it’s difficult to attribute the improved performance to the additional variable z.\n- Can different values of z affect the visual quality of the generated samples? If x0 is kept constant, does varying z introduce significant variance in the generated samples? Or is there a specific value of z that results in low-quality samples? 
\n- There is no theoretical guarantee that the proposed approach will achieve a better straight flow when addressing the ambiguity problem."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I am curious whether Variational Rectified Flow Matching can enhance performance through \"reflow\", similar to classic rectified flow matching as discussed in [1].\n\n\n[1] Liu, Xingchao, and Chengyue Gong. \"Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow.\" The Eleventh International Conference on Learning Representations."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper presents a valuable observation: because the vector field is parameterized via a Gaussian at each data-domain-time-domain location, ambiguity cannot be captured.\n\n2. The analytical experiments with visualizations are well-executed and contribute significantly to validating the theoretical analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Variational Rectified Flow Matching as a method to model multi-modal velocity and ambiguity in the data-space-time-space domain. The properties of Variational Rectified Flow Matching are studied and validated through experiments with visualizations on low-dimensional synthetic data. Compelling results are demonstrated on the synthetic data, MNIST, and CIFAR-10 datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Current evaluations are too weak and this paper lacks enough experiments on more real-world and complex datasets, such as the AFHQ, CelebA and ImageNet datasets. The evaluations on these benchmarks are necessary for demonstrating the effectiveness of the proposed method.\n\n2. The authors should sufficiently discuss these missing related works [1][2][3][4][5], and compare with them in the experiments.\n\n[1] Nguyen B, Nguyen B, Nguyen V A. Bellman Optimal Stepsize Straightening of Flow-Matching Models[C]//The Twelfth International Conference on Learning Representations. 2024.\n\n[2] Song, Yang, et al. Consistency models. arXiv preprint arXiv:2303.01469 (2023).\n\n[3] Yang, Ling, et al. Consistency flow matching: Defining straight flows with velocity consistency. arXiv preprint arXiv:2407.02398 (2024).\n\n[4] Yan, Hanshu, et al. Perflow: Piecewise rectified flow as universal plug-and-play accelerator. arXiv preprint arXiv:2405.07510 (2024).\n\n[5] Kim, Dongjun, et al. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. arXiv preprint arXiv:2310.02279 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Is there any reason for designing the input of the posterior encoder as [x0, x1, xt, t]?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Significance: This paper addresses a notable limitation in flow matching models, specifically the ambiguity at path intersections that results in curved sampling trajectories. By tackling this ambiguity, the proposed approach demonstrates clear improvements over existing rectified flow models, particularly in low NFE settings.\n\n- Originality and clarity: The paper is well-written and easy to follow, clearly presenting concepts. Interpreting the flow-matching objective through variational inference to reduce directional ambiguity is conceptually sound, adding a meaningful perspective to the flow-matching framework."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents Variational Rectified Flow Matching (VRFM), a framework that improves classic rectified flow matching by incorporating multi-modal velocity vector fields based on a variational perspective. Previous flow matching approaches average out directions, leading to curved integration paths and hindering accurately fitting the target distribution. VRFM, by contrast, captures the multi-modality of flow directions, thus preserving directional diversity. Experimental results on synthetic data, MNIST, and CIFAR-10 demonstrate that VRFM achieves promising results with fewer integration steps."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In line 53, the authors claim, “This results in trajectories that are more straight”. It is unclear why reducing ambiguity would inherently lead to straighter flows. Including a theoretical proof or a detailed explanation to clarify this result would strengthen the argument.\n\n- For completeness, the paper should include proofs demonstrating that the learned distribution from VRFM preserves the marginal data distribution, as established in Theorem 3.3 in [1].\n\n- The most concerning part of this paper is limited evaluation and performance compared to the recent papers. The empirical evaluation is restricted to MNIST and CIFAR-10, which limits the generalizability of the findings. Extending the evaluation to additional datasets, such as ImageNet 64x64, would improve the generalizability of the findings. Furthermore, the reported results of VRFM in low NFE regimes (e.g., 104 FID for 2 NFE on CIFAR-10) are less compelling, given the recent advances [1,2,3] in reducing sampling costs in diffusion (or rectified flow) models. For instance, reflow on rectified flow (e.g., 2-rectified flow) achieves a 4.85 FID with a single step [1]. Results showing VRFM’s performance with the reflow technique would provide a more competitive comparison. \n\n- It would be valuable if the authors could provide results on conditional generation setting."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024variational,\ntitle={Variational Rectified Flow Matching},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1cM0yQe3pO},\nnote={under review}\n}"
},
"abstract": {
"value": "We study Variational Rectified Flow Matching, a framework that enhances classic rectified flow matching by modeling multi-modal velocity vector-fields. At inference time, classic rectified flow matching 'moves' samples from a source distribution to the target distribution by solving an ordinary differential equation via integration along a velocity vector-field. At training time, the velocity vector-field is learnt by linearly interpolating between coupled samples one drawn from the source and one drawn from the target distribution randomly. This leads to ''ground-truth'' velocity vector-fields that point in different directions at the same location, i.e., the velocity vector-fields are multi-modal/ambiguous. However, since training uses a standard mean-squared-error loss, the learnt velocity vector-field averages ''ground-truth'' directions and isn't multi-modal. Further, averaging leads to integration paths that are more curved while making it harder to fit the target distribution. In contrast, the studied variational rectified flow matching is able to capture the ambiguity in flow directions. We show on synthetic data, MNIST, and CIFAR-10 that the proposed variational rectified flow matching leads to compelling results with fewer integration steps."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Flow Matching",
"Diffusion Model",
"Generative Model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0060edc66ce8cc07ae9fbfae4bb8ea83f08cf582.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Variational Rectified Flow Matching"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ctV3yry3B | MazeNet: An Accurate, Fast, & Scalable Deep Learning Solution for Steiner Minimum Trees | main | Active | Recurrent Convolutional Neural Networks (RCNNs);Obstacle-Avoiding Rectilinear Steiner Minimum Tree (OARSMT);Deep learning for maze-solving;Search algorithm for termination condition;Graph-to-image transformation | other topics in machine learning (i.e., none of the above) | 1;3;3;3;5 | 4;4;4;3;4 | 1;2;2;2;3 | 1;2;2;2;3 | 2;2;2;3;3 | 3 | 3.8 | 2 | 2 | 2.4 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "can the authors comment on how does the method compare to other recent works?\n\ncan the authors clarify the discrepancy between figure 14 and the perfect accuracy claims made in the main text\n\n_our method reaches the solution in very few iterations, as seen in Figure 15. This contrasts with the competing methods, which often rely on loops that repeat for many more iterations to arrive to a solution_ - I could not understand the significance of this claim. Can the authors provide additional insight?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The authors propose a novel image-based pipeline for the OARSMT problem\n- The synthetic dataset generation is interesting\n- Superior runtimes are reported on a variety of synthetic benchmarks compared to classic methods"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a neural network-based framework named Mazenet for the Obstacle Avoiding Rectilinear Steiner Minimum Tree problem, an important combinatorial problem associated with circuit routing. \n\nMazenet is derived from an image classification perspective. The algorithm involves mapping an input graph and set of terminals to an image. An recurrent convolutional network is then trained on synthetic data to sequentially predict elements of the steiner tree. A termination condition module is trained to detect once a candidate path is detected. \n\nThe authors demonstrate that Mazenet recovers the OARSMT faster than classical exact algorithms and highlight its ability to generalize to problem settings beyond its training set. Some ablation experiments detailing Mazenet’s test accuracy and training time are provided. Superior runtimes are reported and perfect test accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- weak experimental results. The authors evaluate their method on synthetic benchmarks and compare to old methods.\n- some confusing results. figure 14 does not imply perfect test accuracy despite the claims made in the paper.\n- the authors may consider a more rigorous evaluation with the current state of the art, FLUTE or any number of other recent methods, e.g. Chen et al., A Reinforcement Learning Agent for Obstacle-Avoiding Rectilinear Steiner Tree Construction, 2022, Kahng et al., NN-Steiner: A Mixed Neural-algorithmic Approach for the Rectilinear Steiner Minimum Tree Problem, 2023, etc.\n- evaluation on real datasets is critical to understand the performance benefit of the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How is the threshold 0.65 decided as the TC threshold? Is there ablation study to find the optimal value?\n2. What is the step size of the solver, i.e., how many cells are the trees extended in each iteration? How many one entries are contained in the predicted binary matrix?\n3. Curious what is the performance of MazeNet on large mazes, e.g., 256 x 256?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper formulates the OARSMT into a binary image prediction problem, which is easy to understand and reasonable.\n\n2. The experimental results show that MazeNet is able to achieve an impressive 100% test accuracy.\n\n3. The experimental results show that MazeNet scales well with an increasing number of terminals."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes MazeNet, a learning-based algorithm that leverages a recurrent convolutional neural network to predict a single-channel binary matrix iteratively, thereby solving the Obstacle Avoiding Rectilinear Steiner Minimum Tree (OARSMT) problem. The algorithm is evaluated on different mazes with 2-8 terminals, showing 100% test accuracy and competitive planning speed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The mazes that MazeNet is evaluated on are too small, of only 11 x 11 kernels. There is not strong evidence that MazeNet can perform well on larger mazes. \n\n2. This work only compares MazeNet with classical solvers like Dijkstra, Mehlhorn and Kou, etc. However, there are some more recent algorithms that are either learning-based or CPU-based, e.g., [1], [2]. [3]. Comparison with more and stronger baselines is needed to consolidate the conclusion.\n\n3. It is not new to learn to predict the future images, e.g., [4] also formulated the grid-like motion planning problem into a video prediction problem. From this paper, I can not see how the specific domain knowledge from OARSMT is incorporated into the network design.\n\n\n[1] Lin, Zhenkun, et al. \"Obstacle-Avoiding Rectilinear Steiner Minimal Tree Algorithm Based on Deep Reinforcement Learning.\" 2023 International Conference on Artificial Intelligence of Things and Systems (AIoTSys). IEEE, 2023.\n\n[2] Chen, Po-Yan, et al. \"A reinforcement learning agent for obstacle-avoiding rectilinear steiner tree construction.\" Proceedings of the 2022 international symposium on physical design. 2022.\n\n[3] Huang, Tao, and Evangeline FY Young. \"An exact algorithm for the construction of rectilinear Steiner minimum trees among complex obstacles.\" Proceedings of the 48th Design Automation Conference. 2011.\n\n[4] Zang, Xiao, et al. \"Robot motion planning as video prediction: A spatio-temporal neural network-based motion planner.\" 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In what ways does the proposed method differ from prior work that applies Recurrent Convolutional Neural Networks (RCNNs) to solve maze-related problems?\n\n2. Does MazeNet require separate training for different grid and terminal configurations, such as an 11×11 versus a 9×9 node grid, or can a single model handle multiple setups?\n\n3. What strategies can be employed to reduce the time and computational complexity involved in generating training data?\n\n4. Training MazeNet reportedly took around 48.12 hours across four GPUs, which is considerable. How does training time scale with increased problem complexity and size, and what optimizations could help reduce this duration?\n\n5. In Figure 8, is the runtime of MazeNet measured with parallelization applied?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. MazeNet is designed for scalability and adaptability, making it effective for solving mazes of varying sizes and numbers of terminals that need connection.\n\n2. While RCNNs alone may struggle to identify and verify a correct solution to terminate the process, MazeNet addresses this by incorporating a search-based algorithm that reliably detects a correct solution. This approach combines the speed of graph-based approximate algorithms with the precision of exhaustive graph-based methods.\n\n3. RCNNs provide step-by-step interpretability of the method’s operations, as the head module can be applied at any iteration, allowing for observation of intermediate solution stages. These stages can be visualized as image outputs, providing insight into the solution process at each step."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "MazeNet, a recurrent convolutional neural network (RCNN) for the Obstacle Avoiding Rectilinear Steiner Minimum Tree (OARSMT) problem, shows promise with 100% accuracy in initial tests but requires further validation on larger grids and more terminals to confirm scalability. Questions remain on its novelty, given similar RCNN applications in maze-solving, and on its high training time (48.12 hours on four GPUs), along with the need to reduce training data complexity and evaluate the TC module's computational overhead. Additional context through a more detailed literature review would also strengthen the work."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed approach of using a recurrent convolutional neural network (RCNN) to solve the Obstacle Avoiding Rectilinear Steiner Minimum Tree (OARSMT) problem may lack novelty, as RCNNs have previously been applied to similar maze-solving problems.\n\n2. Although MazeNet demonstrated 100% accuracy in the reported experiments, additional proof is needed to confirm it can consistently achieve this level of accuracy across all problem instances.\n\n3. The experimental setup appears limited; testing just on a grid of 11 × 11 nodes with up to 8 terminals may not be sufficient to thoroughly assess MazeNet’s performance, particularly regarding its scalability.\n\n4. While the TC module improves MazeNet's accuracy, it introduces significant computational overhead, which has not yet been systematically evaluated.\n\n5. The paper lacks a dedicated related work section, and a more comprehensive discussion of relevant literature would strengthen the context for this research."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The article only mentions the number of samples in the test set. What is the number of samples in the training set?\n2. In terms of problem scale, for instance in the field of chip design where there are tens of thousands of nodes with connections that must adhere to certain constraints, can this algorithm achieve good results in larger-scale tasks?\n3. The testing accuracy can reach 100%, could this be a result of overfitting?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The application is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The article establishes a MazeNet model to solve the OARSMT problem. Specifically, it first converts the graph representation of the maze into image representation, then processes the image data using the RCNN model, and finally reduces the model's running time through a termination condition."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe motivation is not clear, as the article does not explicitly outline the problems with previous solutions to the OARSMT problem, nor does it explain how this article addresses these issues.\n2.\tThe experimental evaluation metric design is unreasonable. The OARSMT problem is an NP-hard problem. However, the evaluation metric used in this article's experimental section is accuracy. While for small-scale problems, the shortest path can be obtained using Dijkstra's algorithm for comparison to calculate precision, for large-scale problems, it is challenging to solve using Dijkstra's algorithm. \nFurthermore, the second part of the article clearly states that the optimization goal is to minimize path length. However, the evaluation metric in the experimental section does not use path length as a measure, which is confusing.\n3.\tIn line 164 of the text, it is stated that \"However, these problems were in domains where traditional methods are both fast and accurate, leaving open the question of whether RCNNs can provide similar advantages for more complex graph-based problems.\" Given that traditional algorithms can achieve good results, what is the significance of this research? Moreover, the question of whether RCNNs can provide similar advantages for more complex graph-based problems remains unresolved. How does this study address or prove this issue?\n4.\tThe resolution of figures 2b and 2c is too low. Although the generated data size is 48x48, clear images should still be placed in the article.\n5.\tThe author's proficiency in English is lacking, and the translation traces are too obvious.\nThe innovation in this article is weak. Regardless of whether it is RCNN or the conversion of graph representation to image representation, the innovation is very limited. From both a writing and experimental perspective, it resembles more of an experimental report and is not suitable for publication as a research paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "## Questions\n\nHow would researchers replicate your work?\n\nL111 How is O(T!) permutations determined for exhaustive methods?\n\nWhat is the purpose of the paragraph at L174-182? Is the progressive training algorithm of Bansal et al. used in this work? If so, be explicit and state that.\n\nAt L224, \"...position, indicating a cycle, it is terminated to prevent redundant processing.\" After finding a cycle and terminating, which single path is chosen?\n\nAlgorithm 1 L245-250 is a bit difficult to follow. \"junction found\" can only be understood by referencing back to the text. Also, what if the \"Move to the direction with highest 'whiteness'\" is in the backwards direction?\n\nL269 Why are mazes of 2, 3, or 4 terminals chosen for training? (e.g., as opposed to 5, 6, 7)\n\nL293-295 reference random variables n,k. What distributions are these sampled from?\n\nParallelization for Scalability Section 3.4 is missing specific details.\nHow many sections are images divided into? (L320)\nHow many pixels are \"sufficient\" overlap? (L322)\nFor a section with two or more terminals, what is the incentive to find additional paths to other unknown sections?\nWhat is the goal of a section with only one terminal?\nHow does parallelization work for sections without terminals?\n\nL378 What does \"20 MazeNet iterations\" refer to? Earlier sections indicated that 30 module iterations are used before checking terminal conditions (L261) and 16 training epochs are used (L310). There is no explanation in the text or table.\n\n## Feedback\n\nL55 describes a 11x11 maze, but the paper does not clarify what \"11\" refers to until L125 in Section 2.1. Explain what 11x11 means at L55 (e.g., \"11x11 node graph\").\n\nFigure 5 is first referenced at L266 but provides almost no detail or context for what the \"Projection,\" \"Batch,\" and \"Head\" blocks are. Projection was referenced once at L176 when discussing another paper's work. 
Multiple configurations of the batch and head modules are referenced earlier, but all blocks are uniformly labeled without any specification of the differences between them. For example, the first \"Batch\" represents 30 RB iterations and subsequent \"Batch\" represents 10 iteration (L261) but these are labeled as the exact same module in Figure 5. As another example, L177-180 reference a \"Head\" module that produces the output and a \"final head module\" that transforms the network's output to single-channel prediction. Why not add these details to Figure 5 to be more informative and accurate?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Approach for converting Obstacle Avoiding Rectilinear Steiner Minimum Tree (OARSMT) problem to image-based Recurrent Convolutional Neural Network (RCNN) with extensible training images and more than 2 terminals.\n\n100% empirical accuracy on test cases (40,000 total mazes for 2-5 terminals and 3,000 mazes for 6-8 terminals). Alternatively, graph-based approximation methods of Kou et al. 1981 and Mehlhorn 1988 have errors with 3 or more terminals.\n\nMazeNet is computationally faster than Dijkstra's algorithm when 5 or more terminals are used. \n\nMaze figures are straightforward and informative (e.g., Figure 4)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the Obstacle Avoiding Rectilinear Steiner Minimum Tree (OARSMT), which seeks to find a set of horizontal and vertical connections between a set of points while avoiding obstacles using the minimum overall connection length. The paper's technical approach is to convert OARSMT graphs to images then use a Recurrent Convolutional Neural Network (RCNN) to iteratively highlight the solution. RCNN-based solutions to OARSMT were introduced in previous work, but this paper uniquely extends RCNN-based maze solving to larger maze domains with more terminals where traditional methods are computationally inefficient. In addition, this paper develops a termination condition to avoid both premature termination and excessive runtimes. Finally, this paper includes experimental results with 2-7 terminals in 11x11 mazes with 100% accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Several details technical details are unclear (see specific feedback below).\n\nDoes not provide any limitations or failure cases. For example, what happens if >> 8 terminals are used? This is only discussed as future work. Does algorithm run indefinately for unreachable terminals?\n\nA lot of overlap with Schwarzschild et. al. 2021, but with additional terminals and the terminal condition module.\n\nThe paper emphasizes that their approach is parallelizable (L23, L155, L315) but does not provide key details on how this approach works or report accuracy of experimental results on larger mazes to verify it's utility. Instead, the paper provides a vague description of the parallelization process (Section 3.4, L315) and reports only on runtime performance from parallelization on larger mazes (Figure 9, L466)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mazenet,\ntitle={MazeNet: An Accurate, Fast, \\& Scalable Deep Learning Solution for Steiner Minimum Trees},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1ctV3yry3B},\nnote={under review}\n}"
},
"abstract": {
"value": "The Obstacle Avoiding Rectilinear Steiner Minimum Tree (OARSMT) problem, which seeks the shortest interconnection of a given number of terminals in a rectilinear plane while avoiding obstacles, is a critical task in integrated circuit design, network optimization, and robot path planning. Since OARSMT is NP-hard, exact algorithms scale poorly with the number of terminals, leading practical solvers to sacrifice accuracy for large problems. We propose and study MazeNet, a deep learning-based method that learns to solve the OARSMT from data. MazeNet reframes OARSMT as a maze-solving task that can be addressed with a recurrent convolutional neural network (RCNN). A key hallmark of MazeNet is its scalability: we only need to train the RCNN blocks on mazes with a small number of terminals; mazes with a larger number of terminals can be solved simply by replicating the same pre-trained blocks to create a larger network. Across a wide range of experiments, MazeNet achieves perfect OARSMT-solving accuracy, with significantly reduced runtime compared to classical exact algorithms, and with the ability to handle larger numbers of terminals than state-of-the-art approximate algorithms."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Recurrent Convolutional Neural Networks (RCNNs)",
"Obstacle-Avoiding Rectilinear Steiner Minimum Tree (OARSMT)",
"Deep learning for maze-solving",
"Search algorithm for termination condition",
"Graph-to-image transformation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b3fdee7635f9ff4e53a1172af594ba890923c409.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "MazeNet: An Accurate, Fast, & Scalable Deep Learning Solution for Steiner Minimum Trees"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1d8Egv45of | Learning Multiple Semantic Views For Self-explaining Physiological Signal Stratification | main | Active | Explainable artificial intelligence (XAI);Interpretable machine learning;Interpretability;Deep learning;Time Series Analysis;Segmentation;End-to-end;Self-explaining models;Physiological signals;Photoplethysmogram (PPG);Electrocardiogram (ECG);Obstructive sleep apnea (OSA);Atrial fibrillation (AF);Heart rate variability (HRV);Blood pressure (BP) | interpretability and explainable AI | 3;5;5;5 | 4;4;4;3 | 1;2;2;2 | 1;2;2;2 | 3;3;3;3 | 4.5 | 3.75 | 1.75 | 1.75 | 3 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Major: \n\nSee above (\"Weaknesses\").\n\nMinor:\n\n1- Given the reliance on task labels to optimize segmentation, how does the model perform on tasks with sparse or noisy labels? Does this affect interpretability? It would be interesting if authors could've addressed that and potentially compare it with the SOTA method."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1- The multi-view segmentation approach is an innovative contribution, adding potential value to explainability in machine learning for healthcare, where interpretability is crucial.\n\n2- The paper proposes a unified architecture applicable to both classification and regression tasks, which shows adaptability to a variety of physiological signal processing tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a multi-view deep learning model aimed at self-explaining predictions for various physiological signal-based tasks, such as obstructive sleep apnea (OSA) and atrial fibrillation (AF) detection. The proposed model generates “semantic views” by using mask networks to isolate task-relevant regions of the input signals. These views are used to enhance interpretability and yield clinically relevant insights."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1- The paper introduces multiple semantic views (2, 3, or 4 views) but does not explain why these specific numbers of views are optimal across tasks. This arbitrary choice may limit the interpretability and generalizability of the approach. Further discussion or empirical testing regarding the impact of varying the number of views on interpretability and performance would strengthen the approach.\n\n2- The experimental setup could have greatly benefited from ablation studies that justify the architectural decisions, such as the number of mask networks or the use of shared embedding networks. These studies would help clarify the impact of each component on the model’s performance and interpretability, providing a stronger empirical basis for the architectural choices.\n\n3- The authors claim alignment between the generated views and clinical knowledge, yet this is primarily presented through visual inspection. Providing more robust, quantitative evaluations of interpretability, ideally verified with domain experts, would lend credibility to these claims.\n\n4- The results are not entirely convincing, as the proposed model fails to outperform current state-of-the-art implementations on 3 out of 4 datasets. Additionally, the ablation study in Table 1 indicates that the multi-view architecture offers only a marginal performance improvement. \n\n5- There is a lack of comparison with established explainability approaches like SHAP or LIME. Although these methods may not offer the same level of task-specific interpretability, a comparison would clarify the relative benefits of the proposed model. \n\nMinor: \n\n1- The naming conventions in Table 1 could be clearer. Terms like “SOTA” and “ablation” could be replaced with more descriptive labels that specify the method or configuration used, making it easier for readers to understand the comparison."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Abstract\n\nMethods in the Abstract do not need to be so extensive e.g., The section: “Specifically, the proposed network… to the task labels” could be omitted.\n\nMethods\n\nLine 181. “… each sample x in the time series S to be attributed to one of the N semantic states”.\nWhy should one sample be attributed to only one semantic state? Intuitively, it seems that specific parts of a physiological signal could reflect several latent semantic states.\n\nEquation (4). By applying softmax activation at each time sample, we allow the information to leak into all masks with varying amplitudes, which deviates from the original strict binary definition. In this case, how do we know whether this information is amplified or suppressed by the subsequent embedding networks – and hence attribute explainability to the high-amplitude parts?\n\n3.2.2. Considering that the semantic segmentation masks provide interpretability, why do we need weight sharing in the embedding network features?\n\n3.2.2. What’s the role of the differential embedding vector? Is there any empirical evidence that the decision network can’t exploit such relationships?\n\nResults\n\nTable 1. It would be preferable to report AUC metrics instead of accuracy, considering that accuracy will be sensitive to each model and binary threshold (was there any threshold tuning?) \n\nLine 337. “which suggests the effectiveness… from full-time series interval”. The comparison here may not be fair, considering that removing the mask network significantly reduces the number of parameters in the model, which will solely rely on one embedding network to receive input from the original signal.\n\nLine 370. “From Figure 3… clearly capture such information”. The heart rate variation is not very prominent in the figures. Maybe you could show smaller windows or wider X-axes?\n\nAppendix\n\nLine 926. “we enforce a minimum duration L”. How did you select L for each task? 
I don’t think these numbers are mentioned in the paper.\n\nGeneral\n\nPlease generate in-text citations with brackets.\n\nFrom a physiological signal interpretation perspective, how do these semantic views compare to existing post-hoc explainability methods? E.g., clustering techniques at the sample level [1].\n\nPotential Work\n\nThe assumptions behind the semantic masks (semantic states attributed to specific time samples) and the need for prior selection of the number of semantic states N may introduce limitations as a general-purpose explainability mechanism. Could the network somehow discover the optimal number of semantic states? (instead of predefining N).\n\nReferences:\n\n[1] Boubekki, A., Fadel, S.G., & Mair, S. (2024). Leveraging Activations for Superpixel Explanations. ArXiv, abs/2406.04933."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This work presents an original idea for a generalized explainable deep learning architecture that can potentially have significant implications for AI-based medicine and XAI. Specifically, it incorporates model and sample level interpretability, by introducing a prior constraint on the number of semantic views which are trainable (model level explainability), based on which each sample produces a unique segmentation mask (sample level explainability). This is contrary to post hoc explainability techniques, where the sample-level explanation can be independent, ambiguous (model approximations), and inconsistent across techniques."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The work involves an inherently explainable AI approach for clinically relevant supervised tasks based on electrocardiogram (ECG) or photoplethysmogram (PPG) inputs. The explainability is achieved by exploiting trainable masks to identify regions of ECG/PPG contributing to clinically significant information."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The proposed deep learning architecture can be relevant for signal segmentation tasks but the level of explainability is quite coarse. The method involves qualitative (visual) exploration of the learned masks that may be quite difficult for slightly more complex tasks. This is also evident from the fact that performance is only optimum for 2 or 3 masks, with the design preventing the integration of multi-dimensional concepts. Moreover, the learned views do not necessarily seem to provide unique insights into model explainability, without knowing which semantic states, and which part of the signals in each mask, contribute to the networks’ decisions (e.g., in the example of AF, the presence or absence of P waves may be more indicative for the detection of the disease, than actual QRS peaks). The selection of tasks is also quite limited in showing interpretability properties, considering that all tasks in the paper are defined as conditions related to peak-to-peak interval variability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The research topic of this paper focuses on the interpretability of medical artificial intelligence, which is a field of great concern and has significant practical importance.\n\n- The method proposed in this paper maps different parts of the input signal to different semantic state spaces, revealing hidden patterns in the input signal that are related to model decisions, thereby enhancing the model's interpretability.\n\n- This paper has been validated on multiple datasets, and the experimental results show that the model's decision focus aligns with domain knowledge, verifying the effectiveness of the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a self-explaining deep learning model architecture designed to enhance interpretability in the analysis of physiological signals, an issue often overlooked in existing deep models. The architecture employs a multi-semantic view approach, which generates multiple mask-modulated signal versions through a mask network. This process attributes model inputs to distinct semantic states, uncovering hidden patterns within the input data. The paper tests this architecture on four clinically relevant tasks involving ECG or PPG signals for classification and regression. Experimental results indicate that the multi-view approach demonstrates improved model interpretability, providing clearer insights into the model's decision-making process."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Data diversity: The dataset used in this article is limited to a single type of physiological signal, utilizing either ECG or PPG signals exclusively (although differentiated PPG signals were employed). This limitation raises questions about the effectiveness of the proposed method when applied to mixed types of physiological signal inputs, which warrants further validation.\n- Semantic state complexity: The number of semantic states in the paper is relatively small (2, 3, or 4). For complex inputs or tasks, a limited number of semantic states may not adequately reflect the model's decision-making process. The performance of the model with a higher number of semantic states requires further investigation.\n- Visualization challenge: The visualization results are discernible when the number of semantic states is small (e.g., 2). However, as the number of semantic states increases, these visualizations become difficult to recognize effectively. This can lead to a decreased understanding of the model's decision focus, thereby reducing the model's interpretability.\n- Evaluation metrics: There are concerns with the evaluation metrics used in the dataset. In classification problems, accuracy is employed as the evaluation metric, but this metric is susceptible to the impact of class imbalance. More robust metrics, such as AUC or F-score, should be considered for a more reliable assessment.\n- Temporal data representation: The method proposed in the paper classifies semantic states for individual sample time points. However, data from a single time point may not capture sufficient semantic information, especially when the signal sampling frequency is high, which could limit the representation of meaningful physiological information."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. \\[Line 16\\] Can you explain how xAI “ensures” reliability without casual inference study? \n2. \\[Line 31\\] Why do you believe validating on only 4 tasks with only 2 waveforms “displays universal usability”? \n3. \\[Line 37\\] Is highlighting relevant regions sufficient for transparency? How does it relate to clinical decision making? Is there any assumption to be made here for how clinicians interact with the model you developed? \n4. \\[Line 328\\] It seems 4-view is worse than 3-view. Is there any explanation?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper is generally easy to read, but its clarity can be enhanced by better diagrams."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an architecture for processing medical waveforms with enhanced explainability. The author claims the learned representations are task-relevant and human-interpretable."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In general, there is no significant technical innovation specially designed for clinical applications or medical waveforms. The author should explain how the proposed method differs from previous general xAI methods and compare the performances. \n2. The author claims the method generates human-interpretable features, but the embedding and decision networks are not easily interpreted (limiting the transparency significantly). \n3. There seems to be no user study with clinicians on the relevance of extracted features. \n4. Figure 2 is too brief, consider adding some sub-figures to illustrate the ideas. It’s only about half the page width now. \n5. Some equations on Page 5 seem un-necessary, and the notation can be simplified. \n6. The tasks selected are not representative in general, and the SOTA methods cited are old in general. \n7. The ablation is only limited to the number of views. \n8. The results reported in Section 5.2 have strong selection bias (correctly-classified ones are shown)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a generalized self-explaining multi-view deep learning architecture, that generates task-relevant human-interpretable representations during model inference, for stratifying health information from physiological signals."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Multiple Semantic Views For Self-explaining Physiological Signal Stratification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1d8Egv45of},\nnote={under review}\n}"
},
"abstract": {
"value": "Explainable artificial intelligence (XAI) offers enhanced transparency by revealing key features, relationships, and patterns within the input data that drive model decisions. In healthcare and clinical applications, where physiological signals serve as inputs to the models for decision making, such transparency is critical for ensuring reliability, identifying biases, and uncovering new insights. However, despite the potential to reveal clinically-relevant information used for inference, generalized solutions for explainability have remained limited in this domain. In this work, we propose a generalized self-explaining multi-view deep learning architecture, that generates task-relevant human-interpretable representations during model inference, for stratifying health information from physiological signals. Specifically, the proposed network architecture employs a mask network to produce multiple mask-modulated versions of the signal, referred to as “semantic views”, highlighting distinct regions of the signal that may be relevant to clinically significant information. These views offer complementary perspectives to enhance interpretability and feature extraction. A shared embedding network is used to extract task-related features from each semantic view, which are used to produce the model's output. Through supervised training with labels, each semantic view is updated based on the saliency information between the semantic view and the model's output, toward fitting the model's output to the task labels. Validated on 4 different clinically-relevant classification and regression tasks taking electrocardiogram (ECG) or photoplethysmogram (PPG) as input, the proposed multi-view architecture displays universal usability, achieving comparable or superior performance across all tasks, when compared to state-of-the-art methods designed for each task. 
Unlike current state-of-the-art models, which lack task-agnostic human interpretability, our model uniquely provides interpretable outputs. As it is shown, the semantic views generated by the proposed model highlight task-specific characteristic regions in the input signal, aligning closely with the domain knowledge of human experts for each task. Overall, the proposed method offers new directions for interpretable machine learning and data-driven analysis of physiological signals, envisioning self-explaining models for clinical applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Explainable artificial intelligence (XAI)",
"Interpretable machine learning",
"Interpretability",
"Deep learning",
"Time Series Analysis",
"Segmentation",
"End-to-end",
"Self-explaining models",
"Physiological signals",
"Photoplethysmogram (PPG)",
"Electrocardiogram (ECG)",
"Obstructive sleep apnea (OSA)",
"Atrial fibrillation (AF)",
"Heart rate variability (HRV)",
"Blood pressure (BP)"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6b15d200913305505c5065677bb6970308669a7f.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Learning Multiple Semantic Views For Self-explaining Physiological Signal Stratification"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1dDxMPJy4i | Nonparametric Expert DAG Learning with Accurate Edge Strengths and Realistic Knowledge Incorporation | main | Active | probabilistic inference;nonparametric method;knowledge representation | learning on graphs and other geometries & topologies | 1;3;3;5 | 4;4;3;3 | 2;2;2;3 | 1;2;2;3 | 2;3;2;3 | 3 | 3.5 | 2.25 | 2 | 2.5 | -0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(1) Could you elaborate on the purpose of this paper? For instance, why is DAG learning needed, and how does it differ from causal discovery?\n\n(2) Could you provide more evidence of the significance of your work beyond its ability to incorporate expert knowledge and quantify edge strengths?\n\n(3) Could you explain why you are confident in the accuracy of the learned edge strengths? What makes them reliable, and how would you convince others to use them in downstream tasks?\n\n(4) Could you clarify why classic causal discovery algorithms are not mentioned or compared in your paper? Additionally, why is continuous learning preferable to traditional score-based, combinatorial structure learning methods? I am not fully convinced by your statement that “In combinatorial search, local decisions about adding, removing, or reversing edges are made without clear visibility into their global impact, only revealed once the global objective is minimized,” as this issue is specifically addressed by GES."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The author provides a highly intuitive introduction to the background and existing challenges in the field, making it accessible even for readers less familiar with the topic. The writing style is clear and straightforward, which enhances comprehension. The explanations are both precise and easy to follow, contributing to a well-structured presentation of ideas. The paper presents both qualitative and quantitative experimental results that are insightful and visually intuitive, aiding in understanding the effectiveness of the proposed method. Additionally, the inclusion of the Sachs dataset as a real-world example is particularly informative, demonstrating the practical applicability of the method and adding significant value to the study."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a nonparametric method for quantifying edge strength and incorporating domain knowledge into modeling. It builds upon the well-known NOTEARS causal discovery method, which transforms the combinatorial search process into a continuous optimization problem. By leveraging nonparametric techniques such as Gaussian Processes, the NEDAG-GP method offers interpretable weights within a nonparametric modeling framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) While the paper aims to address DAG learning for modeling causal structures and complex dependencies, the purpose could be clarified. It seems that causal structures inherently involve complex dependencies, so it would help to clarify how these terms are being distinguished in the context of this work. If the intent is to use a DAG for causal reasoning, some discussion on the identifiability of the learned DAG would strengthen the contribution. Specifically, it would be helpful to know if the learned DAG represents a unique solution given the data or if it belongs to an equivalence class that includes the ground truth. Reviewing classic works on causal discovery algorithms, such as PC, GES, or PNL, could help refine the objectives and theoretical foundation of the approach.\n\n(2) The paper introduces the idea of incorporating Gaussian Processes into DAG learning, leveraging their nonparametric properties. While this is an interesting direction, the novelty may be somewhat limited, as Gaussian Processes are a known approach for handling nonparametric modeling. Given an adjacency matrix with binary indicators, there are many established methods for estimating associated parameters, so it would be valuable to see a discussion on how this approach contributes uniquely to the field.\n\n(3) Some aspects of the writing could be more clear. For instance, in the introduction, two bolded statements emphasize the importance of incorporating expert knowledge while minimizing reliance on expert-specified parameters and distributional assumptions. Since expert knowledge can encompass information on edges, parameters, and distributions, it would help to clarify the intended balance between these elements. Addressing this and similar points throughout the paper would enhance readability and help readers better understand the author’s perspective and familiarity with the field."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- it has been known GP and NNs share at least similarities, for example \"DEEP NEURAL NETWORKS AS GAUSSIAN PROCESSES\" ICLR 2018. However, the proposed approach did not fully explore and differentiate the use of GP from NNs, besides just a nonparametric approach in name. It would be good that authors can show, in some theoretical statement, where GP based dag learning can be superior. \n- Some related works on causal graphs and gaussian process are not discussed and compared. e.g., \nAglietti et al, \"Multi-task Causal Learning with Gaussian Processes\".\nWilson et al, \"Gaussian Process Regression Networks\". \n- typical distribution assumptions are needed to guarantee identifiablity. What can be guaraneted, in term of the identifiability or consistency, for the proposed method?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "the proposed approach to learn graph using GP partial derivative is new\n\nimproved performance over compared methods"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "the paper proposes a GP process based continuous DAG learning framework. The approach is based on nonlinear DAG constraint from NOTEARS-MLP , utilizing the partial derivaties. Authors show prior knowledge can be incorporated into this framework. Empirical evaluation shows the proposed approach is better than NOTEARS-MLP."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Unfortunately, the paper contain many imprecise statements (see below). \n\n- only one method is compared, also ignoring many literatures on GP based causal models.\n\n- motivation on using GP is not fully justified. \n\nOther comments:\n- \" local decisions about adding, removing,or reversing edges are made without clear visibility into their global impact\": this is not true, global consistency (and in some extend local consistency) properties of scores have been proven to show the optimality in these operations\n- L65: is it true that a single number that can reveal the full caual relationships, esp they often come with specific distribution assumptions? In addition, score-based approach produce specific distribution scores, constraint-based approaches offer test stats, which all represent edge weights.\n- L144: The knowledge on edge weights can be easily be via regularization, such as the L1 sparsity coefficient to achieve confidence in forbidden edges. The objective itself is data fitting + prior as regularizations. Topological order itself can be expressed by a set of forbidden edges. \n- Section 4.2: I don't see how these W constraints can not be expressed by existing continuos learning approaches. In addition, exppressing prior knowledge as an exact numerical value seems harder"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. A recent paper [1] also discusses incorporating prior knowledge in continuous structure learning framework. Can the authors comment on the connections with paper [1]?\n\n\n[1] Wang, Z,. Gao, X,. Liu, X,. Ru, X,. Zhang, Q,.(2024). Incorporating structural constraints into continuous optimization for causal discovery. Neurocomputing, Vol.595."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This work is well-situated in the literature and fills the gap of utilizing non-parametric methods and incorporating expert knowledge in continuous structure learning framework. The advantages of the proposed method are supported by both synthetic experiment and real-world experiment."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel method for learning DAG structure based on continuous structure learning framework. Equipped with additive Gaussian Process with RBF kernel, this method provides non-parametric estimation of edge strengths and improving the interpretability of the structure learning process. The method also incorporates several types of expert knowledge, effectively enhances its performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although I selected “good” for presentation, it would be better if the authors could include a pseudocode of their algorithm for a clearer presentation.\n\n2. How to select the parameters of the Gaussian Processes? In supplementary B.1, the authors described the objective function, and it seems that the notation $\\theta$ is unexplained. Does $\\theta$ refer to the parameters of the Gaussian Processes? Also, It is still unclear to me how the expert knowledge is incorporated. Is it formulated as constraints of the optimization problem?\n\n3. It seems that using non-parametric estimation method and incorporating expert knowledge make NEDAG-GP outperform NOTEARS-MLP. What if we compare NEDAG-GP with NOTEARS that is augmented with non-parametric estimation methods or expert knowledge incorporated? i.e. an ablation study."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I found it strange that none of the in-text or reference list citations included years.\n2. Equation 4: What is $H^1$? I could not see this set introduced anywhere.\n3. Equation 5: Why is $x_k$ bold here? Other references to $x_k$ are not bold.\n4. Section 4.1: It would help if $\\sigma$ and $\\ell$ are indexed by $j$ and $k$.\n5. Section 5: This section is not substantive enough to constitute a single section. I suggest merging Section 5 with Section 6.\n6. Table 2: How many replications are the results measured over?\n7. Figure 2: There is no explicit reference to this figure anywhere in the text.\n8. Appendix B.5: It would be helpful to provide the mathematical definitions of these metrics (or references to such). In particular, I am unfamiliar with the Balancing Scoring Function.\n9. Figure 3: Each method is evaluated on a coarse grid of three points across the $x$-axis. It would be better to use a finer grid."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "DAG learning is undoubtedly an important problem for areas such as causal inference. The nonparametric nature of NEDAG-GP makes it appealing for complex nonlinear data, which is pervasive nowadays. Moreover, the capacity to incorporate expert knowledge is attractive. I also found the discussion around different characterizations of edge strength insightful."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies directed acyclic graph (DAG) structure learning on observational data. It proposes NEDAG-GP, a new method that learns a nonparametric DAG as a Gaussian process (GP). NEDAG-GP also accommodates expert prior knowledge in the learned DAG."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The methodological innovation behind NEDAG-GP is limited. Specifically, the literature review indicates that GP-based DAG methods are already available, though NEDAG-GP sets up the weighted adjacency matrix differently. Moreover, incorporating expert knowledge is seemingly straightforward, though Section 4.2 does not actually explain how the knowledge-based constraints are enforced.\n2. The paper's primary focus is unclear as it attempts to address two distinct problems simultaneously: nonparametric DAG learning and expert knowledge incorporation. Is there any reason expert knowledge cannot be included in linear DAGs, MLP-based DAGs, or spline-based DAGs? Or is there something particular about GP-based DAGs that makes them more amenable to integrating expert knowledge?\n3. The experimental evidence in favor of NEDAG-GP (without expert knowledge) is limited. Figure 3 suggests that its good performance depends on whether the ground truth is a GP, so evaluations on a wider range of functions would be helpful. Also, DAGMA should be included as a baseline since it has superseded NOTEARS as the de facto DAG learning method in this area.\n4. The paper does not provide a discussion or results about NEDAG-GP's uncertainty quantification performance, which is odd since it uses GPs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024nonparametric,\ntitle={Nonparametric Expert {DAG} Learning with Accurate Edge Strengths and Realistic Knowledge Incorporation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1dDxMPJy4i},\nnote={under review}\n}"
},
"abstract": {
"value": "Directed Acyclic Graphs (DAGs) are crucial for modeling causal structures and complex dependencies in domains such as biology, healthcare, and finance. Effective structure learning must not only align with domain expert knowledge but also produce interpretable model decisions. Though continuous structure learning methods like NOTEARS are gaining popularity, an underexplored feature is their ability to open up the black box of decisions made by traditional combinatorial search by quantifying edge strengths in weighted adjacency matrices. Yet challenges persist in systematically integrating expert knowledge and ensuring learned weights accurately reflect true edge relationships. We present Non-parametric Expert DAG (NEDAG), a novel method that formulates accurate weight matrices using Gaussian Processes (GPs) and incorporates realistic domain knowledge into the continuous structure learning framework. Experiments on both synthetic and real-world datasets demonstrate that NEDAG not only surpasses existing methods in structure accuracy but also produces more accurate edge strengths. NEDAG thus provides a robust and interpretable solution for structure discovery in real-world applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"probabilistic inference",
"nonparametric method",
"knowledge representation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9f14a2251e051f15e8e2a0e77d391388062d79d1.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Nonparametric Expert DAG Learning with Accurate Edge Strengths and Realistic Knowledge Incorporation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1dUdNzLJRF | TICKing All the Boxes: Generated Checklists Improve LLM Evaluation and Generation | main | Active | large language models;evaluation;instruction following;self-critique | foundation or frontier models, including LLMs | 3;3;6;6 | 3;4;5;4 | 2;2;3;3 | 2;2;3;2 | 3;3;3;4 | 4.5 | 4 | 2.5 | 2.25 | 3.25 | 0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We would like to thank all reviewers for their insights on the paper and suggestions for improvement. We have taken on all actionable suggestions and updated the manuscript accordingly. We have also submitted individual comments to each reviewer with further details relevant to their specific reviews.\n\nWe thank each reviewer in advance for taking the time to read our comments engage in further discussion."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Updates to the manuscript and individual comments"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We would like to thank the reviewer for their precise and informative review. We are glad that the reviewer sees our paper as “significantly improving the scalability of automated instruction-following benchmarks” and finds it “interesting that the checklist can help LLMs refine their initial responses”.\n\nWe directly address the weaknesses raised by the reviewer in the updated manuscript, as described below.\n\n> Lexical-matching metrics should be replaced with more semantic ones [such as] BERTScore.\n\nWe thank the reviewer for suggesting this and have included BERTScore as a column in Table 2 of the updated manuscript. These results further demonstrate the similarity between LLM-generated and gold standard human-written checklists, thus strengthening this part of the paper. We acknowledge that reporting the percentage of recalled gold standard checklist items would be another informative metric, however doing so would require further costly human annotations, as judging whether items from two different checklists are precisely the same is an ambiguous task. However, we hope that the addition of BERTScore in combination with reporting the count MAE provides sufficient evidence that the checklists are similar in terms of count and content. Finally, we would also like to point the reviewer to Table 3 (a), which shows that the downstream evaluation scores from using gold standard or LLM-generated checklists are highly correlated, demonstrating that the checklists also lead to similar evaluations on aggregate.\n\n> The paper fails to discourse the details of human study.\n\nWe would again like to thank the reviewer for identifying this shortcoming. We have updated the manuscript to include further details on the training of annotators and report the inter-annotator agreement. 
This information is now at the top of Appendix H.\n\nWe are confident that these changes strengthen our paper and hope that the reviewer finds that their suggestions have been directly addressed."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Taking on suggestions"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We are pleased that the reviewer finds that the paper “conducts extensive automated and manual consistency experiments to quantify and demonstrate the advantages of the TICK evaluation method”. However, we find it a shame that the reviewer has not acknowledged Section 4 of the paper, beyond claiming that using checklists for refinement is not new, despite the fact that no prior work on using checklists for in-context self-improvement exists to the best of our knowledge. Our paper demonstrates that Self-TICK (STICK) significantly improves in-context self-refinement, an increasingly common practice for improving response quality by expending more compute at inference time [1]. Table 1 demonstrates that end-to-end checklist self-evaluations enable purely in-context self-correction in challenging code, reasoning and mathematics problems, despite the reviewer’s claims that we do not consider these task types. We are therefore led to believe that the reviewer has not considered these results in their assessment and would like to highlight their significance, especially in light of several recent works suggesting that purely in-context self-correction is yet to be demonstrated in these settings [2, 3].\n\n> Employing decomposed checklists for instruction evaluation, validation and refinement is not new, as seen in work like FollowBench, InFoBench, and Self-Contrast.\n\nFollowBench and InFoBench use expensive-to-gather, human-written checklists, which limits the use of checklist-based evaluations to their predefined prompt datasets, whereas TICK is substantially cheaper and generally applicable (as acknowledged by Reviewer sSrg), which is what enables Section 4 of the paper, where an LLM can perform checklist-based self-evaluations on-the-fly. Self-Contrast is not closely related to our paper, being a similarity-based method for alignment involving two fine-tuning phases. 
To the best of our knowledge, no prior work uses decomposed evaluation structure to enable improved iterative self-refinement and even self-correction, which recent work has suggested requires RL fine-tuning to achieve [2].\n\n> There is a lack of in-depth discussion regarding the efficiency of the proposed method.\n\nWe thank the reviewer for identifying this. Scaling inference compute by sampling more or longer generations is an increasingly common practice for improving LLM capabilities on problems that are otherwise challenging [1, 4, 5]. We therefore see the improvements of TICK and STICK as being due to an effective way of improving evaluation and refinement quality in exchange for additional inference costs. To further address this concern, we additionally compare to the most common approach to inference scaling of majority vote among K generations sampled in parallel (i.e., Maj@K) [5] in Table 4 of the updated manuscript. We do this for preference-based LLM-as-judge evaluation and direct scoring with K=32 and still using Chain-of-Thought for the evaluator in each case. The results demonstrate that this improves both LLM-as-judge preferences and direct scores, but that both still perform worse than TICK, highlighting that TICK makes more efficient use of additional tokens than majority vote.\n\n## Answering questions\n\n1. We acknowledge that checklists do not capture sequential dependencies by default and see the automatic construction of evaluation rubrics for agentic tasks as an exciting direction for future work. 
Whilst ComplexBench explicitly constructs a dataset of instructions with constraint dependencies and has human annotators write checklists that reflect this, simply prompting the LLM to “opt for ’NO’ if the generated text provides no information that could be utilised to answer the question” implicitly captures the fact that a checklist question should be answered ‘NO’ if a question higher up a dependency chain is answered ‘NO’, as is done in this work and InFoBench.\n\n2. We explicitly prompt the LLM to include “implicit criteria that are generally important for an instruction’s problem domain” in the checklist (line 174 in the manuscript), the positive effect of which can be observed in the examples of generated checklists in the appendix and in the positive STICK results on precisely the fields mentioned by the reviewer (Table 1 of the manuscript).\n\n3. We have included results for Llama-3.1-8B-Instruct in Table 2 and Table 3 (b) to address this. We see that it performs only marginally worse than larger models at both generating and answering checklist questions.\n\n[1] Snell et al, Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters, 2024\n\n[2] Kumar et al, Training Language Models to Self-Correct via Reinforcement Learning, 2024\n\n[3] J. Huang et al, Large Language Models Cannot Self-Correct Reasoning Yet, 2023\n\n[4] Madaan et al, Self-Refine: Iterative Refinement with Self-Feedback, 2023\n\n[5] X. Wang et al, Self-Consistency Improves Chain of Thought Reasoning in Language Models, 2022"
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Addressing concerns"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We thank the reviewer for their detailed review. We are pleased that the reviewer sees our paper as “offering valuable insights” and that the “overall experimental analysis is thorough”. We strongly believe that the weaknesses raised by the reviewer are addressed by the following clarifications.\n\n> [The approach] requires the LLM to first follow a complex set of instructions to generate the checklist.\n\nOur results demonstrate that current LLMs are already capable of generating checklists that are similar to gold standard human-written checklists (Table 2 and Table 3 (a)), including smaller, open-source models, such as Llama-3.1-8B for which results have been added in the updated manuscript. Additionally, checklist self-feedback (i.e., STICK) proves effective at enabling self-correction where unstructured self-feedback fails (Table 1), demonstrating that checklist generation and answering in fact *eases* the problem of answering the original instruction and thus cannot be more difficult than answering the original instruction.\n\n> It only examines three benchmarks: Internal, InFoBench, and WildBench.\n\nAs shown in Table 1, we also evaluate on LiveBench, which spans a range of task categories covering reasoning, mathematics, code, language and more. Additionally, both Internal and WildBench cover a very broad spectrum of instructions, with WildBench instructions being taken from a wide range of real-world interactions with chatbots. We believe that the four benchmarks considered cumulatively provide strong evidence for the benefits of using automatically generated checklists to structure automatic evaluation.\n\n> The existing design is computationally expensive during inference time.\n\nScaling inference compute as an alternative to scaling training compute has emerged as an exciting paradigm for further improving LLM capabilities [1, 2], with self-refinement [3] and self-correction [4, 5] becoming popular research directions. 
We convincingly demonstrate that checklist-based self-evaluations are an effective way obtaining greater benefits from increased inference compute, whether by iterative refinement (Table 1 & Figure 3), or Best-of-N selection (Table 5). As a further investigation of how TICK compares to alternative approaches to assigning more inference compute to the task of evaluation, we have added a comparison to majority vote among 32 parallel sampled evaluations (i.e., Maj@32) for preference and direct scoring in Table 4 of the updated manuscript. We see that doing so improves agreement between the subsequent preferences or scores and human evaluations, but that they remain worse than TICK.\n\n## Answering questions\n\n1. We thank the reviewer for this suggestion. We have included results using the semantic similarity metric BERTScore in Table 2 of the updated manuscript.\n\n[1] Snell et al, Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters, 2024\n\n[2] OpenAI, o1, 2024\n\n[3] Madaan et al, Self-Refine: Iterative Refinement with Self-Feedback, 2023\n\n[4] Kumar et al, Training Language Models to Self-Correct via Reinforcement Learning, 2024\n\n[5] Gou et al, CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, 2024"
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Taking on comments and providing clarifications"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We would like to thank the reviewer for their clear and focused review. We are glad that the reviewer sees the presented method as “novel and significant”, including “comprehensive experiments”. In light of this, we are surprised that the reviewer’s concerns should warrant the current score and thoroughly address each one below.\n\n> Previous work has also used decomposition techniques...\n\nWhilst it is true that prior work decomposes the evaluation task, our work is the first to take an approach to decomposition that has proven powerful in fixed datasets and fully automate the decomposition and evaluation itself using an LLM. Our work is also the first to show that such a decomposition technique enables in-context self-improvement/ self-correction in settings where unstructured self-critiques fail.\n\n> The construction details and statistics of the Internal dataset are not sufficiently explained.\n\nWe thank the reviewer for drawing attention to this and have included further details on both the annotator pool and Internal dataset construction in Appendix H of the updated manuscript. The Internal dataset and its full construction details are scheduled for public release within the next month (footnote 1 of the manuscript). \n\n> The authors use metrics like ROUGE and BLEU.\n\nDue to the expense of human annotation, we were unable to additionally acquire human annotation results for this comparison. As suggested by reviewers KpU7 and sSrg, we have included semantic similarity (BERTScore) between checklists in Table 2 of the updated manuscript, where we see that LLM-generated checklists maintain strong similarity to gold standard human-written checklists.\n\n> The preference labelling approach of annotators does not fully align with the checklist-based method.\n\nWe would like to raise two key points that we are confident address this claim. 
Firstly, as can be seen in the prompt for checklist generation and as is stated in line 174 of the manuscript, the LLM is prompted to include “implicit criteria that are generally important for an instruction’s problem domain” in the checklist. Secondly, the superior agreement with gathered human preferences achieved by TICK relative to asking an LLM-as-judge for a preference or score empirically demonstrates that it is better aligned with the preference labelling approach of annotators.\n\n> The low inter-annotator agreement for direct scoring raises concerns.\n\nTICK’s effectiveness was demonstrated by comparing to preference annotations on Internal, for which we provide the inter-annotator agreement in Appendix H of the updated manuscript (0.684 Krippendorff’s alpha). This difference reflects the fact that WildBench involves particularly long and sometimes low quality instructions, direct scoring yields lower agreement than preference labelling, and annotators are familiar with the Internal instruction set. \n\n> Evaluations with fine-tuned models or well-established frameworks could provide a fairer assessment.\n\nGiven that TICK requires no additional data or fine-tuning, we firmly disagree that comparing to fine-tuned evaluator models would be a fairer assessment. As an alternative inference scaled baseline, we have additionally provided results for a majority vote (Maj@K) [1] version of preference evaluation and direct scoring among K=32 parallel samples in Table 4 of the updated manuscript. Notably, both remain inferior to TICK. \n\n> The baseline comparison is limited to vanilla self-refinement, which is insufficient.\n\nSelf-refine [2] is itself a relatively new method, with no well-established, fine-tuning free alternatives. 
There are numerous papers indicating that purely in-context self-refinement in fact generally fails, with a prominent recent paper [3] claiming that RL fine-tuning is absolutely necessary to achieve this behaviour in self-correction settings. Yet, in Table 1 we show that STICK is able to reliably self-correct across almost all task categories in the challenging benchmark LiveBench. We believe that this is a very significant result. \n\n## Answering questions\n\n1. We thank the reviewer for identifying a potentially out-of-sequence figure caption, but are unable to identify which they mean. Could the reviewer please clarify whether they mean Table 3 (a) or Figure 3 (which has no subfigure labelled (a))?\n\n2. As shown in [3, 4] and in Table 1, in-context self-refinement is typically prone to response quality degradation, as the LLM can misidentify issues with its own response. The small performance dip in the fourth iteration on WildBench simply shows that the number of iterations STICK can sustain improvements is still limited.\n\n[1] Wang et al, Self-Consistency Improves Chain of Thought Reasoning in Language Models, 2022\n\n[2] Madaan et al, Self-Refine: Iterative Refinement with Self-Feedback, 2023\n\n[3] Kumar et al, Training Language Models to Self-Correct via Reinforcement Learning, 2024\n\n[4] Huang et al, Large Language Models Cannot Self-Correct Reasoning Yet, 2023"
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Addressing concerns"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "There is a potential risk that using STICK for harmful instructions (e.g., those involving discrimination or violence) may increase the harmfulness of LLM responses. Ethical safeguards should be considered to mitigate such issues."
},
"flag_for_ethics_review": {
"value": [
"Yes, Discrimination / bias / fairness concerns"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The caption for Figure 3(a) appears to be out of sequence or unclear. Could the authors clarify or reorder the content for better coherence?\n2. The self-refinement process using STICK results in a minor decline in the last iteration, could the authors make a further explanation?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The automatic evaluation method using LLMs as judges is novel and significant. The authors present an effective and interpretable protocol for evaluating and refining generated text.\n- Comprehensive experiments and detailed analyses are provided to support the effectiveness of the proposed methods.\n- The paper is well-written and easy to follow, making it accessible to a broad audience."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose TICK, a method that uses LLMs to decompose instructions into checklists composed of several YES/NO choices to address limitations in standard evaluation metrics like Elo rating and direct scoring. This approach provides a more interpretable evaluation by breaking down instructions into specific criteria. They further introduce STICK, which refines LLM responses using self-assessment based on these checklists, achieving substantial improvements compared to traditional refinement methods. Experiments demonstrate that using LLMs for checklist generation is feasible and reliable. Also, using checklists for evaluation aligns with human annotations. Based on TICK, STICK enhances the quality of LLM outputs beyond vanilla-refinement approaches. Additionally, the authors find that using checklists in human annotation significantly increases inter-annotator agreement, making the evaluation process more consistent and reliable."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Leveraging LLMs with simple prompts to generate checklists is a straightforward approach. Previous work has also used decomposition techniques to evaluate responses across multiple dimensions, similar to step-by-step verification of LLMs' instruction-following abilities. While this method has been applied to various evaluation metrics, to my knowledge, this is the first time it has been specifically focused on instruction-following.\n2. The construction details and statistics of the Internal dataset are not sufficiently explained, which reduces confidence in the reliability of the results when using LLMs for checklist generation.\n3. When evaluating the generated checklists against gold labels, the authors use metrics like ROUGE and BLEU. However, these metrics are less effective in knowledge-intensive contexts, suggesting a need for additional manual annotation or alternative metrics. However, the human annotation results are missed.\n4. The preference labeling approach of annotators does not fully align with the checklist-based method for evaluating instruction-following capabilities. Human annotation will consider the quality of the response while TICK only considers instruction-following ability.\n5. The low inter-annotator agreement for direct scoring raises concerns, as the authors only demonstrate TICK's effectiveness through pairwise correlation with human annotations. If the inter-annotator agreement for pairwise scoring is similarly low, it might undermine the validity of this correlation.\n6. The comparison of TICK to other evaluation methods is limited to direct scoring and an ablated version (Check-then-Score). This restricts the scope of the comparison. Evaluations with fine-tuned models or well-established frameworks could provide a fairer assessment.\n7. In self-refinement experiments, the baseline comparison is limited to vanilla self-refinement, which is insufficient. 
Incorporating additional strong baselines would provide a more comprehensive understanding of STICK's effectiveness.\n\nReference:\n\nKalpesh Krishna, Aurko Roy, and Mohit Iyyer. Hurdles to progress in long-form question answering, 2021. URL https://arxiv.org/abs/2103.06332.\n\nShashank Sonkar and Kangqi Ni and Lesa Tran Lu and Kristi Kincaid and John S. Hutchinson and Richard G. Baraniuk. Automated Long Answer Grading with RiceChem Dataset, 2024. URL https://arxiv.org/abs/2404.14316"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
        "value": "1. For Table 2, why don't you consider semantic similarity metrics, such as scores generated by natural language inference models? BLEU- and ROUGE-style metrics can sometimes be unreliable."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
        "value": "Originality: This paper analyzes the quality of checklists generated by advanced LLMs and how they can be used to improve LLM-as-judge and high-quality instruction selection. It provides experimental results for practitioners who want to use these checklists to enhance the performance of LLMs as judges, offering valuable insights.\n\nQuality: The overall experimental analysis is thorough, including validation of LLM-generated checklists against human-generated checklists. It also features corresponding analyses on the use of checklists for self-refinement and their application as references for human annotators.\n\nClarity: The paper is written clearly, making it easy to follow and understand.\n\nSignificance: The topic of LLMs as judges is highly relevant, and the findings of this study may offer significant insights for the industry."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "The paper aims to measure and enhance LLM performance in instruction-following tasks by leveraging a powerful model to generate checklists based on the given instructions. \nThe key contributions include: \n1. Proposing a prompt to generate checklists for each instruction. \n2. Validating the high similarity between checklists generated by advanced LLMs and those created by humans across several benchmarks. \n3. Showing that the judge score derived from aggregating checklists yields a pass ratio that closely aligns with human scores, highlighting the potential of using checklists to improve the performance of LLM-as-judge. \n4. Showcasing that self-refinement guided by the generated checklists leads to higher performance improvements compared to unstructured feedback. \n5. Demonstrating that allowing human annotators to reference the model-generated checklists enhances inter-annotator agreement."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "Novelty: Given multiple works on using checklists to enhance the performance of LLMs as judges, this paper’s contribution lies in enabling LLMs to generate their own checklists and validating their feasibility. The approach involves introducing a specific prompt to elicit the checklist from the LLM. However, this requires the LLM to first follow a complex set of instructions to generate the checklist, which places even higher demands on the model’s capabilities than the instruction-following task itself.\n\nExperimental Limitations: From an experimental perspective, the study could benefit from considering a wider range of larger-scale datasets. Currently, it only examines three benchmarks: Internal, InfoBench, and WildBench. \n\nExpense: The existing design is computationally expensive at inference time, since it requires a large number of tokens and multiple generations during the self-refinement stages. Distilling this ability or reducing this cost would be a promising direction."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Although checklists introduce a certain level of structure, they typically only express parallel relationships. When the content to be verified involves more complex logical relationships, such as selective, chain relationships, or their combinations (for example, tasks in ComplexBench), how can the effectiveness of checklists be ensured?\n\n2. A notable feature of instruction-following tasks is that verification points are directly reflected in the instructions (such as text style, word count limits, etc.), making it relatively easy to break down the task into different verification points and generate checklists. However, for a wider range of task types, especially in fields involving symbolic reasoning like mathematics and programming, how can the application methods and advantages of checklists be demonstrated?\n\n3. For models with different capability levels, particularly some weaker or smaller-scale language models (LLMs), how do they perform in terms of decomposing checklists and accurately scoring?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. TICK enhances the transparency and interpretability of the evaluation process by breaking down the assessment task into a series of specific YES/NO questions. This fine-grained evaluation approach helps to more accurately identify the strengths and weaknesses in the model's output.\n\n2. This paper conducts extensive automated and manual consistency experiments to quantify and demonstrate the advantages of the TICK evaluation method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "To evaluate the instruction-following capabilities of large language models (LLMs), this paper introduces a method called TICK (Targeted Instruct-evaluation with ChecKlists). TICK leverages the in-context learning abilities of LLMs to break down complex instructions into a series of yes/no questions, forming a checklist. The LLM is then used to score this checklist. Initially, the paper demonstrates the advantages of the TICK assessment method through extensive human consistency experiments. Subsequently, the effectiveness of the TICK method is validated through experiments involving self-refinement, Best-of-N selection, and assistance with human annotations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "1. The core of the proposed method in this paper lies in using in-context learning to break down instructions into a checklist for self-validation and refinement, as well as for best-of-N selection. However, employing decomposed checklists for instruction evaluation, validation, and refinement is not new, as seen in work like FollowBench, InfoBench, and Self-Contrast. The fundamental differences and substantive contributions of this work compared to existing approaches, particularly in terms of evaluation methods and self-improvement strategies, need to be more clearly defined.\n\n2. There is a lack of in-depth discussion regarding the efficiency of the proposed evaluation method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the suggestions in Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper removes the major constraint of manually constructing checklists of prior works, significantly improving the scalability of automated instruction-following benchmarks.\n2. It is interesting that the checklist can help LLMs refine their initial responses.\n3. The paper is well-written and well-organized."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores developing an automated evaluation benchmark to assess the instruction-following ability of large language models. Their study is based on the idea that asking LLMs to evaluate response qualities with a set of detailed requirements provides more reliable assessments than asking LLMs to provide a holistic evaluation directly, as proposed by InfoBench. The major finding of this paper is that LLMs can also prepare the decomposed questions (i.e., the checklist) for arbitrary user prompts, scaling up this framework to the next level of automation. Also, they find that the LLM-generated checklist could further help LLMs to provide self-refined responses."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "1. The metrics used to evaluate the similarity between the human-crafted and LLM-generated checklists can be improved. In particular, those lexical-matching metrics (i.e., BLEU and ROUGE) should be replaced with more semantic ones. For example, [1] evaluates the quality of LLM-generated rubrics versus human-crafted ones with BERTScore. Further reporting the recall of human-crafted check items and the precision of LLM-generated check items would be better. \n\n2. This paper fails to discuss the details of its human studies. In this paper, many experiments are conducted with human annotators. The authors should discuss some basic information about the annotations, such as the annotators' demographic statistics, the training procedures for the annotators, and the internal agreement among the annotators.\n\n[1] Unveiling Scoring Processes: Dissecting the Differences between LLMs and Human Graders in Automatic Scoring."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We show the advantages of using generated checklists to structure evaluation, including for response self-improvement."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ticking,\ntitle={{TICK}ing All the Boxes: Generated Checklists Improve {LLM} Evaluation and Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1dUdNzLJRF},\nnote={under review}\n}"
},
"abstract": {
"value": "Given the widespread adoption and usage of Large Language Models (LLMs), it is crucial to have flexible and interpretable evaluations of their instruction-following ability. Furthermore, as human annotation is slow and costly, LLMs are increasingly used to make these judgments, at the expense of reliability and interpretability. In this work, we propose TICK (Targeted Instruct-evaluation with ChecKlists), a fully automated, interpretable evaluation protocol that structures evaluations with LLM-generated, instruction-specific checklists. We first show that, given an instruction, LLMs can reliably produce high-quality, tailored evaluation checklists that decompose the instruction into a series of YES/NO questions. Each question asks whether a candidate response meets a specific requirement of the instruction. We demonstrate that using TICK leads to a significant increase (46.4% $\\to$ 52.2%) in the frequency of exact agreements between LLM judgements and human preferences, as compared to having an LLM directly score an output. We then show that \\textbf{STICK} (Self-TICK) can be used to improve generation quality across multiple benchmarks via self-refinement and best-of-N selection. STICK self-refinement on LiveBench reasoning tasks leads to an absolute gain of $+$7.8%, whilst best-of-N selection with STICK attains $+$6.3% absolute improvement on the real-world instruction dataset, WildBench. In light of this, structured, multi-faceted self-improvement is shown to be a promising way to further advance LLM capabilities. Finally, by providing LLM-generated checklists to human evaluators tasked with directly scoring LLM responses to WildBench instructions, we notably increase inter-annotator agreement (0.194 $\\to$ 0.256)."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language models",
"evaluation",
"instruction following",
"self-critique"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7788fffa186c55a3464ac319117ec8c89a8b819e.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "TICKing All the Boxes: Generated Checklists Improve LLM Evaluation and Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1dkL3MVBfV | Dynamic Model Editing to Rectify Unreliable Behavior in Neural Networks | main | Active | model vulnerability;model editing;feature attribution | interpretability and explainable AI | 3;5;5;6 | 5;3;3;4 | 2;2;3;3 | 2;2;3;4 | 2;2;2;4 | 4.75 | 3.75 | 2.5 | 2.75 | 2.5 | -0.622543 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
        "value": "1. What are the possible research avenues to mitigate some of the limitations highlighted above?\n2. How practical is your method for highly complex networks with very intricate computational graphs?\n3. How severe are the poisoning attacks that are considered? Are there newer and more severe attacks that could evade your method?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is quite well written. The problem that is addressed is clearly defined and seems quite relevant for this venue. The proposed method has shown compelling results against all the baselines. Overall, this is an enjoyable paper to read."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "Neural network models often underperform when faced with data shifts. Due to these models' opaque nature, addressing this issue typically involves extensive data cleaning and retraining, resulting in significant computational and manual demands. This drives the need for more efficient model correction methods. This paper introduces a rank-one model editing approach to correct unreliable model behavior on corrupted inputs, aligning it with performance on clean data. The proposed method uses an attribution-based technique to identify the primary layer contributing to the model's misbehavior, incorporating this layer localization into a dynamic model editing process. This enables adaptive adjustments during editing. The authors performed extensive experiments which show that their method effectively corrects issues related to neural Trojans and spurious correlations with as little as a single cleansed sample."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "The key limitations of this method include its reliance on identifying unreliable behaviors and the requirement that both corrupted and cleansed samples are available for effective correction. While the method has shown compelling results on almost all the benchmarks, this heavy reliance on identifying unreliabilities makes the method less practical. Unfortunately, the authors didn't provide any clues or research directions for how to mitigate this issue. Also, it seems to me that this method is mostly applicable to models with simpler computational graphs. For instance, models that involve lots of skip connections, group norm, layer norm, etc. might be quite difficult to correct. It is also not clear to me how effective the proposed method is when dealing with stronger, more \"aggressive\" poisoning attacks. Discussing how severe the considered attacks are, and how they compare with other types of attacks, would better convince the reader of the method's efficacy."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Question: The model editing process, as illustrated in Figure 3, involves clean samples and their corresponding corrupted samples. How would one edit a model trained on a dataset containing poison and Trojan horses when the original training dataset or clean samples are unavailable?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Strengths:\n1. The method's approach of retaining original samples alongside corresponding corrupted samples effectively addresses issues related to performance degradation and data volume constraints.\n2. Algorithm 1 employs a clever strategy for dynamically editing the model using attribution methods by appropriately setting thresholds for $\\delta$ and $\\epsilon$.\n3. The experimental section considers critical issues such as backdoor attacks and spurious correlations, providing an analysis of the method on the real-world ISIC dataset, which showcases the method's extensive effectiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Summary: This paper proposes a novel method for editing unreliable models, drawing inspiration from rank-one editing techniques. The authors demonstrate the effectiveness of their approach through experiments focused on backdoor attacks and spurious correlations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weaknesses:\n1. The proportion of the backdoor subset within the training set is not clearly specified. To better evaluate the robustness of the proposed method, I recommend varying the proportion of backdoor data and comparing results across different configurations.\n2. While the experiments demonstrate strong performance on image data, additional experiments in other domains, such as sequence recognition, would help establish the method's scalability and versatility, highlighting its broader applicability.\n3. The paper lacks a comparative analysis of the time complexity of the proposed method relative to existing techniques. Including this analysis would offer valuable insights into the method's efficiency and practical feasibility."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "n/a"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "What is the difference between your method and ROME, aside from (i) applying it to image classifiers, (ii) editing iteratively (or 'dynamically'), and (iii) the inclusion of corrupted samples in training (pls correct me if I am interpreting this wrong -- see next question)?\n\nDoes this mean you train on the corrupted samples? What if your model has already been trained? Also, don't the corrupted samples need to be included in training to begin with, so that the attack is successful? This part is unclear to me. \n> L207: \"Our proposed process of model editing to correct unreliable behaviors involves integrating both original samples x and their corrupted counterpart x˜ into the training procedure\"\n\nMore minor:\nDon't these sentences contradict each other? Maybe you need a name for your method to distinguish it from the standard ROME (which you say doesn't work out of the box).\n> L43: \"we formally pinpoint two key challenges when applying rank-one editing to domain adaptation, which inevitably lead to diminished model performance and necessitate labor-intensive data preparation (details in § 4.1). Next, we establish that rank-one model editing is\nwell-suited for correcting model’s unreliable behavior as it intrinsically sidesteps these challenges\" \n\nSome questions for figure 3:\n- Should it be key k* instead of key v*? (step 3 panel)\n- The yellow arrow is pointing the wrong way and should be under 'attribution flow'?\n\nWhat patterns? I think this is important -- it is a lot easier to edit out a reliance on a spurious pattern that does not vary much in its appearance (like in the Trojan case) than it is to edit real spurious features (e.g. to backgrounds). \nUpdate: I see in the appendix the patterns added are the same as the Trojans -- the only difference in the settings is that the added patterns flip the label in the Trojan case, while they do not in the spurious case. I think this makes your spurious correlations setting unrealistic. 
\n> L410: \"we pollute a proportion of samples of class y by attaching patterns to create spurious samples\"\n\nSuggestion for table 4: highlight the accuracy on the samples without the spurious correlation (is this what you mean by clean?) or the performance drop for these samples, instead of showing the performance for samples with the correlation. It reads a little cleaner to see your method improves accuracy, and better highlights the cost of relying on spurious features (i.e. performance is worse when the feature is absent and the correlation is broken)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
        "value": "The method seems to work well! ASR is dropped to nearly zero without compromising accuracy, and biases toward spurious features are removed without affecting accuracy. \n\nThis could be an important result for the mechanistic interpretability community (which is currently garnering lots of attention), as it shows editing techniques can be applied to a second modality and to alleviate existing concerns.\n\nI really liked that a realistic setting was also considered, and that the method seemed to work well in this case too."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "The authors apply model editing (very similar to ROME) for the task of removing neural trojans and mitigating reliance on spurious correlations (which, in the paper, closely resemble neural trojans) for image classifiers. About half the paper is dedicated to i. explaining ROME, ii. describing some challenges in applying editing to the desired settings, and iii. detailing their modified approach. Their method consists of identifying the best layer to edit by comparing attributions (i.e. via Integrated Gradients) for clean and poisoned samples, and then \"dynamically\" applying model editing, which I believe (but am unsure) means making repeated edits until the edited model's overall performance does not deviate much from the original model's overall performance. \n\nExperiments are conducted using a simple neural trojan and spurious feature for CIFAR10 and ImageNet classification. The proposed method successfully reduces the attack success rate of the neural trojan to nearly zero without compromising overall accuracy. It also removes any bias toward the spurious feature (measured as the increase in performance when the spurious feature is present). Baselines include fine-tuning and methods called P(or A)-ClArC, and are surpassed. Similar results are attained in a more realistic setting of skin lesion classification."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "**Clarity**: I personally found this paper hard to read. I did not find Sec 4 to be well integrated into the paper. Paragraph 207-215 feels like it should be very important, but it was not clear to me how your method changed to sidestep concerns and whether there was empirical evidence showing that this methodological change was truly responsible for improved performance. The paper would have benefitted more from taking more time to clearly explain the experiments, imo. \n\n**Novelty over ROME**: Perhaps this relates to the above point, but it is unclear to me exactly how this differs from ROME, which, as I understand it, also involves localizing and editing. The iterative nature (which the authors term 'dynamic') is perhaps new, but it seems to only marginally improve performance over static editing. Similarly, the comparison to Santurkar's editing work was a bit lackluster (is the main difference that they choose the penultimate layer by default?)\n\nI am **unsure if this method would work for more realistic spurious correlations**, which would not have a single fixed appearance, as is the case for the spurious features studied (even for the skin lesions, the spurious patches are quite consistent in their appearance). Even a simple benchmark like Waterbirds is not studied (I personally think even Waterbirds is too simple, but it is very established and having a result on it would greatly improve the paper's claim about spurious features). \n\nSummary: I am borderline on this paper. The clarity issues are somewhat significant for me, but the experimental results are strong, and showing that editing is effective for vision models would be impactful. I am curious if other reviewers also had difficulty reading the paper -- if it is just me, I'd raise my score. Novelty issues are not as big for me, but I think the paper would be strengthened if the exact differences between this and the most similar related methods are clearly and concisely articulated."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "- Equation (1): What is $f(k^*;W)$?\n- Lemma 2: What does $x^* \\rightarrow k^*$ mean? What is $\\mathcal{X}$?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
        "value": "**Promising research direction:** Domain adaptation is a serious issue in the application of machine learning to many domains. While the use of rank-1 editing has been proposed before in this setting, the reviewer believes that the suite of tools that editing offers has still not been fully exploited.\n\n**Methodical experiments and results are strong:** Overall, the experimental section is well-written. While the reviewer has some issues with the overall scope of the experiments (see Weaknesses), those that were run seem to be fairly comprehensive and the results are well-described. In the settings where the method is deployed, it performs well relative to the other baselines that are explored."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work considers the problem of spurious correlations learned by a model during training. It proposes the use of rank-1 editing as an approach to correct such model errors while preserving the model’s overall performance. This method is motivated by two challenges in the use of rank-1 editing. One is technical and involves finding rank-1 updates that do not interfere with other facets of model performance. The second is more specific to domain adaptation and involves the need for sufficient quantities of labeled data. The method first utilizes a feature attribution-based approach to locate the layer of the model where editing will yield the biggest improvement. Then it applies rank-1 editing to this layer to correct the spurious correlation in the model. The paper evaluates the approach on models to which a trojan has been injected and models that have learned spurious features related to patches (for both toy and real datasets). Experiments suggest that the approach strikes a good balance between correcting model behavior in specific instances without degrading overall accuracy too much."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**The problem that is addressed is narrow:** This work claims to explore domain adaptation, but it only looks at mitigating the presence of very obvious spurious correlations in the data (e.g., trigger patches). The challenges associated with real-world domain adaptation are far more subtle than this. It would have made the work stronger if it had investigated instances of domain adaptation where the differences between domains were more subtle. If the proposed method worked in such situations, it would be very notable. Alternatively, the paper could shift its language to focus more exclusively on spurious correlations.\n\n**The challenges that motivate the method are not very clearly described and are never shown empirically to be issues:** The work describes two challenges that are meant to motivate the proposed approach. Overall, these are not very clearly described. Indeed, Challenge 2, which involves lack of data in domain adaptation settings, could be easily described outside of the mathematical formalism, but this is not done. Challenge 1 relates to the specific approach to rank-1 editing, specifically the failure of $k^*$ to be included in the statistics matrix $C$. This is stated as one of the fundamental challenges of using rank-1 editing for domain adaptation, but it is never explained why this is specifically a problem for domain adaptation and not a general issue. Finally, it would help make these challenges more meaningful if some empirical evidence was given to support their centrality.\n\n**Repeated editing has been explored in the past:** To this reviewer’s understanding, the main contribution of the work is the use of feature attribution to locate a layer to edit, the modification of the existing rank-1 editing technique to mitigate an issue with the statistical matrix $C$, and the introduction of dynamic editing. The first and second are new to this reviewer’s knowledge (though the reviewer is not an expert in the breadth of what has been done in the editing space). The impact of repeated editing has been explored in detail in past works (e.g., [1]). It would be good to consider how the present paper fits into such studies.\n\n### Nitpicks:\n- Line 077: “Experimental evaluations highlight our method’s remarkable performance…” It is this reviewer’s opinion that the word ‘remarkable’ should be removed and that the paper should let the results speak for themselves.\n- Line 043: The first sentence says that there are significant challenges to using rank-1 editing for domain adaptation. The second sentence says that actually rank-1 editing is well-suited to domain adaptation. What changed?\n\n[1] Gupta, Akshat, Anurag Rao, and Gopala Anumanchipalli. \"Model editing at scale leads to gradual and catastrophic forgetting.\" arXiv preprint arXiv:2401.07453 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A dynamic model editing technique is proposed for correcting the model's misbehavior."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024dynamic,\ntitle={Dynamic Model Editing to Rectify Unreliable Behavior in Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1dkL3MVBfV},\nnote={under review}\n}"
},
"abstract": {
"value": "The performance of neural network models degrades with data shifts. Owing to their opaque nature, rectifying models to address this problem often necessitates arduous data cleaning and model retraining, resulting in huge computational and manual overhead. This motivates the development of efficient methods for rectifying models. In this work, we propose leveraging rank-one model editing to correct model's unreliable behavior on corrupted input samples and align it with that on cleansed samples. We introduce an attribution-based method for locating the primary layer responsible for the model's misbehavior and integrate this layer localization technique into a dynamic model editing approach, enabling dynamic adjustment of the model behavior during the editing process. Through extensive experiments, the proposed method is demonstrated to be effective in correcting model's misbehavior observed for neural Trojans and spurious correlations. Our approach demonstrates remarkable performance by achieving its editing objective with as few as a single cleansed sample, which makes it appealing for practice."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"model vulnerability",
"model editing",
"feature attribution"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b442ecfccdc948aa8c0777024bbf7342f63ebd38.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/fe309804fd027ef6f09fd7568f9d077d0e8b685d.zip"
},
"title": {
"value": "Dynamic Model Editing to Rectify Unreliable Behavior in Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1dkVCX4jlH | Uncertainty-Aware PPG-2-ECG for Enhanced Cardiovascular Diagnosis using Diffusion Models | main | Active | Inverse Problems | generative models | 5;6;6 | 5;3;4 | 2;3;4 | 3;3;3 | 3;3;2 | 5.666667 | 4 | 3 | 3 | 2.666667 | -0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Hallucination Analysis: How does the model ensure that synthetic ECG signals do not contain unrealistic artifacts?\n\nBaseline Comparisons: The comparison is limited to CardioGAN and RDDM. Have you considered including additional baseline models from the literature (e.g., ArXiv:2309.15375, 2012.04949, 2204.11795, 2101.02362) to provide a more comprehensive performance evaluation?\n\nReal vs. Synthetic Data Performance: How do the model's performance metrics change when trained on real versus synthetic PPG-ECG pairs? Can you quantify the impact of synthetic data on classification accuracy?\n\nGeneralizability Across Datasets: Have you considered applying this approach to unpaired PPG-ECG datasets or datasets from different demographic groups? Would such testing be feasible within your current framework? Have you employed techniques such as cross-validation or tested on external datasets to ensure that your model generalizes well beyond the training data?\n\nRisk of Overfitting: Given the complexity of diffusion models and the size of the datasets used, what measures have you taken to prevent overfitting?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "Originality and Significance: This paper offers a novel approach to the challenging PPG-to-ECG conversion by applying a diffusion-based, uncertainty-aware model. This method effectively addresses the inherently ill-posed nature of the task, capturing the distribution of possible ECG outputs rather than a single solution. By doing so, it meets a main need in cardiovascular diagnostics, especially where paired data is limited.\n\nMethodological Rigor: The paper demonstrates strong methodological rigor with a solid theoretical foundation, including proofs of the Expected Score Classifier's (ESC) optimality. This rigorous analysis, supported by detailed equations (e.g., Theorem 3.1), supports the model’s reliability, ensuring both empirical soundness and theoretical robustness.\n\nComprehensive Evaluation: A thorough evaluation across 11 cardiovascular conditions highlights the model's generalizability and robustness. Compared to baseline models, it shows superior performance in signal reconstruction and classification, with added metrics for uncertainty quantification. This detailed analysis strengthens the case for clinical applicability.\n\nClarity and Presentation: The paper is well-organized, balancing technical details with intuitive explanations, such as figures illustrating model performance and ECG visualization strategies, enhancing clarity. The emphasis on interpretability supports practical use in clinical settings.\n\nSignificance for the Field: The integration of uncertainty-aware diffusion models for physiological signal conversion represents a meaningful enhancement in machine learning for healthcare. The interdisciplinary approach bridges machine learning and biomedical engineering with the potential to drive future innovations in cardiovascular diagnostics and medical device development."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents \"Uncertainty-Aware PPG-2-ECG (UA-P2E),\" a novel framework that employs diffusion models to convert photoplethysmography (PPG) signals into electrocardiography (ECG) signals for improved cardiovascular disease classification. Recognizing the ill-posed and inherently ambiguous nature of the PPG-to-ECG conversion—stemming from the loss of certain physiological information in PPG measurements—the authors propose a multi-solution approach to address this challenge. By leveraging diffusion models, UA-P2E captures the full distribution of possible ECG signals corresponding to a given PPG input, effectively modeling the uncertainty inherent in this inverse problem. This allows the framework to generate robust ECG signals that account for the variability and ambiguity of the conversion process. The authors validate their approach through experiments across multiple cardiovascular conditions, demonstrating state-of-the-art classification performance. They provide empirical evaluations, including comparisons with two baseline models, to substantiate the effectiveness of UA-P2E in both signal reconstruction and cardiovascular classification tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Dependence on Synthetic Data: The evaluation heavily relies on synthetic PPG data, especially for augmenting the CinC dataset (Section 5.2). This dependence raises concerns about potential biases, as synthetic data may not fully capture the variability and complexities of real-world PPG signals. Consequently, the model's generalizability to real clinical settings might be limited, potentially affecting its practical applicability.\n\nLimited Baseline Comparisons: The paper compares UA-P2E primarily with two baseline models: CardioGAN and RDDM (Table 1). While these comparisons provide some insight, the limited scope restricts a comprehensive understanding of the model's performance relative to the broader range of existing methodologies. Including additional, especially more recent baseline models, such as those in the referenced ArXiv papers, would strengthen the evaluation and better position UA-P2E within the current state-of-the-art.\n\nPotential for Synthetic Artifacts: The possibility of generating hallucinations or artifacts in the synthetic ECG signals produced by the diffusion models is not thoroughly examined. Since diffusion-based models can introduce unrealistic features, a lack of analysis on this front may raise concerns about the reliability and clinical validity of the generated signals. Addressing this risk through quantitative assessments would enhance the credibility of the proposed approach.\n\nLimited Dataset Diversity: The study focuses on paired PPG-ECG datasets from CinC and MIMIC-III. This narrow dataset selection may not adequately demonstrate the model's flexibility or adaptability to other data sources. Expanding the evaluation to include larger or unpaired datasets would provide a more robust validation of the model's generalizability and its potential utility across diverse cardiovascular data."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Given the stochastic nature of the diffusion model, are 3 seeds enough to capture the full range of ECG variability, or to provide significant statistical measures in your work?\n2. Although the paper reports the classification of the mean and random ECG diffusion-based solutions, you do not mention the performance of CardioGAN and RDDM directly, which were initially used to benchmark the ECG generation performance. If the end goal is to improve classification with ECG-generated signals, don't you think that it would be appropriate to use the exact implementations of CardioGAN and RDDM to compare the performance?\n3. Did you ensure that there are no segments from the same recording in both the training and test datasets? I believe this is not mentioned. If the same recording appears in both datasets, the generalization ability of the approach might be not properly evaluated.\n\nAlso some missing information:\n- The exact database(s) of CinC that was(were) used, as there are many databases from this source available on PhysioNet\n- A data summary with the number of samples used to train and test the PPG conversion and classification models (and the corresponding class distributions)\n- Only the AURC is used to assess the classification performance, but since the pathological labels often suffer from class imbalance, more insightful metrics such as sensitivity, specificity and F1-Score could be reported to improve the clinical relevance of the approach and facilitate comparison with other works."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This work uses a diffusion model to propose multiple generated ECG waves for a single PPG signal, instead of a single solution as in previous works.\n2. The paper provides the mathematical proof of the optimality of the proposed classifier, which reassures the credibility of the approach. Also, multiple performance metrics are used to evaluate the model.\n3. In general the paper is well structured, with relevant tables and figures for comparison, and explanations are thorough.\n4. The use of ECG-generated waves from PPG and its improved classification accuracy could be extended to the current PPG-based widespread solutions for cardiovascular monitoring."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a methodology to convert PPG signals into ECG ones by accounting for the uncertainty of the conversion process using a diffusion-based model. The methodology is novel arguing that by using a multi-solution approach - where multiple ECG signals can generate the same PPG wave - the combined classification of the generated ECGs is more accurate in comparison to using only the PPG or using a single generated ECG from state-of-the-art methods. The authors use two datasets, one with pairs of unlabeled ECG and PPG signals and another with only labeled ECG signals. A reverse ECG-2-PPG model is used to generate synthetic PPG for classification."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Despite the comprehensive methodology and appendix, some points require some clarification:\n1. The authors use 3 random seeds to generate the ECG signals from the PPG ones and report the mean and standard deviation for the chosen metrics. Given the stochastic nature of the diffusion model, are 3 seeds enough to capture the full range of ECG variability, or to provide significant statistical measures?\n2. Although the motivation for the use of PPG-2-ECG relies on the widespread of wearable devices, no database with signals acquired in wearable settings was used. For the diffusion model, the MIMIC-III data comes from hospital facilities (to my knowledge), and for classification, this is unclear (unless the CinC2017 database for AFib was used, for example). This should be stated as a limitation of the work, or as a pointer for future work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. As Figure 3 shows, the performance of the PPG-derived ECG significantly trails that of the original ECG. How do you explain this discrepancy? Is it possible that the required information for detecting certain arrhythmias is inherently absent in the PPG signals?\n2. Appendix G focuses on Atrial Fibrillation (AF), which is relatively easier to detect. It would be beneficial to include results for other types of diseases to provide a broader evaluation.\n3. How does pacing rhythm manifest in PPG signals? Is it feasible to accurately generate ECG signals with pacing rhythm characteristics from PPG?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The theoretical foundation validating the transition from PPG to ECG and then to classification is robust. The paper demonstrates that generating multiple candidate ECG sample sets can mitigate the uncertainty of ECG conversion and eliminate errors due to mismatch.\n2. Quantification of uncertainties during the conversion process enhances the interpretability of the results.\n3. The methodological design is well-justified through practical experiments showing the reliability of PPG-derived ECGs, surpassing state-of-the-art methods. Ablation studies further substantiate the validity of the proposed approach.\n4. The paper is well-written with a clear structure, making it easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel methodology for synthesizing Electrocardiogram (ECG) signals from Photoplethysmogram (PPG) signals, aimed at enhancing the reliability of cardiovascular disease diagnostics while avoiding the difficulties associated with ECG acquisition. By measuring the uncertainties involved in generating the ECG from PPG signals and in classifying from the generated signals, the authors convincingly demonstrate the feasibility and superiority of their proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper should specify which particular database the CinC dataset is derived from. The rationale behind choosing these 11 types of anomalies should be clarified.\n2. Although the authors provide a thorough theoretical foundation and extensive analysis, the task of generating ECG from PPG lacks inherent rationale. It is unclear whether the generated ECG can reliably reflect arrhythmias, making this approach seem like an uncertain application of analytical techniques with limited practical value.\n3. The authors should further compare their classification results with state-of-the-art models, as the current performance appears suboptimal for ECG classification tasks."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We developed a SOTA PPG-to-ECG conversion model and an optimal classification approach that accounts for conversion uncertainty, supported by rigorous mathematical justification."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024uncertaintyaware,\ntitle={Uncertainty-Aware {PPG}-2-{ECG} for Enhanced Cardiovascular Diagnosis using Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1dkVCX4jlH},\nnote={under review}\n}"
},
"abstract": {
"value": "Analyzing the cardiovascular system condition via Electrocardiography (ECG) is a common and highly effective approach, and it has been practiced and perfected over many decades. ECG sensing is non-invasive and relatively easy to acquire, and yet it is still cumbersome for holter monitoring tests that may span over hours and even days. A possible alternative in this context is Photoplethysmography (PPG): An optically-based signal that measures blood volume fluctuations, as typically sensed by conventional ``wearable devices''. While PPG presents clear advantages in acquisition, convenience, and cost-effectiveness, ECG provides more comprehensive information, allowing for a more precise detection of heart conditions. This implies that a conversion from PPG to ECG, as recently discussed in the literature, inherently involves an unavoidable level of uncertainty. In this paper we introduce a novel methodology for addressing the PPG-2-ECG conversion, and offer an enhanced classification of cardiovascular conditions using the given PPG, all while taking into account the uncertainties arising from the conversion process. We provide a mathematical justification for our proposed computational approach, and present empirical studies demonstrating its superior performance compared to state-of-the-art baseline methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Inverse Problems"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/03df1ff3ccf1c090d8fbb7d9df71ce8a3316de2c.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Uncertainty-Aware PPG-2-ECG for Enhanced Cardiovascular Diagnosis using Diffusion Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1durmugh3I | Towards Fast, Specialized Machine Learning Force Fields: Distilling Foundation Models via Energy Hessians | main | Active | machine learning force fields;graph neural networks;knowledge distillation | applications to physical sciences (physics, chemistry, biology, etc.) | 5;5;6;6 | 5;3;4;4 | 3;2;3;3 | 3;2;2;3 | 3;3;3;3 | 5.5 | 4 | 2.75 | 2.5 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "● Some models use conservative forces and some don't. Do you have a sense of how much that impacts when you distill Hessians from a non-conservative model instead or if the student model is conservative? Or do you expect that not to impact performance as it seems like hessian quality doesn't?\n\n● Why are the rows sampled so different for GemNet across training set and the same for PaiNN? What would be the general suggestion to set this hyperparameter? Or should everyone be iterating on this for every dataset, model architecture, etc?\n\n● Why don't you compare the results with Kelvinius et al. 2023 on OC20-2M and COLL on which the original work was performed?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "● The distilled MLFFs are up to 20x faster than their foundation model counterparts, enabling more efficient simulations.\n\n● The distilled MLFFs achieve comparable or even better force prediction accuracy than the original FMs and demonstrate improved MD stability results.\n\n●The Hessian distillation method is model architecture agnostic.\n\n● Subsampling hessian rows significantly reduces computational costs without sacrificing performance. Its also interesting that subsampling quality doesn't impact the performance much."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a new method for improving the efficiency of Machine Learning Force Fields (MLFFs) by utilizing a knowledge distillation technique. The method distills knowledge from large, general-purpose foundation MLFF models (FMs) into smaller, faster MLFFs specialized for specific regions of chemical space. This is accomplished by aligning the energy Hessians, which are the second derivatives of the energy with respect to atomic positions, between the teacher FM and the student MLFF. By strategically subsampling rows of the Hessian, the authors significantly reduce the computational cost of the distillation process. The authors demonstrate that their approach can achieve speedups of up to 20 times compared to the original FMs while retaining, and in some cases exceeding, the accuracy of the foundation models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "● Training with hessian distillation increases the computational cost compared to undistilled training.\n \n● An anonymous link to the code is not available."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No ethics concerns."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See “Weaknesses” for detailed questions and suggestions.\n\nPotential typos:\n* Line 420: “disilling” should be “distilling.”"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The approach is well-motivated, leveraging knowledge from large MLFF foundation models and adapting it to specific chemical space regions using knowledge distillation. The method also achieves promising results across organic and material molecular systems.\n* The paper is well-organized and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method for transferring general-purpose representations from large ML force field (MLFF) foundation models to smaller, faster MLFFs specialized for specific regions of chemical space, with the aim of improving inference speed. The approach is formulated as a knowledge distillation (KD) process, where the smaller “student” MLFF learns to match the Hessians of energy predictions made by the “teacher” foundation model. By selectively subsampling rows of the Hessian corresponding to individual atomic coordinates, the “student” MLFF achieves a training process that is computationally efficient relative to foundation models and demonstrates improved force prediction on downstream datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Although the method shows promising results, the rationale for using Hessian information as a distillation signal is unclear, which may impact the perceived technical contribution. Additional theoretical or intuitive insights on this choice would clarify the method’s grounding.\n* Foundation models are often trained with energy and force supervision, possibly derived from various electronic structure methods, making the physical reliability of Hessians from pre-trained foundation models questionable.\n\t* Notably, the authors mention that some student models outperform foundation models in specialized chemical spaces even before distillation, suggesting that foundation models may not fully converge in certain cases. This raises questions about the significance and reliability of using Hessians from foundation models as distillation targets.\n\t* Accurate Hessians are crucial for tasks like geometry optimization (as referenced in Figure 1). It remains unclear how potentially inaccurate Hessians from foundation models could affect the student model's performance in such applications.\n* It is uncertain whether the proposed method can be extended to MLFF architectures designed with energy-conserving forces or high-order equivariance, which are often crucial factors for stable and transferable ML force fields. Discussing the impact of these inductive biases on the Hessian-driven KD approach would strengthen the work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
        "value": "* The tables in the paper show only force results, without energy, so I'm curious about the energy results after distillation.\n* A major concern is that the primary use of the distilled MLFF model is for molecular dynamics simulations, where conservation properties are crucial for scientists in physics, chemistry, biology, and materials science. I understand the authors avoided second-order derivatives to calculate the Hessian by directly predicting forces, using JVP calculations. However, a pretrained model might predict forces directly to save computation due to its large size, but the student model should compute forces using autograd, similar to the SFT in JMP [1], which makes more sense. Although Fig. 2 shows stable molecular dynamics in the NVT ensemble, following [2], energy will not be conserved in the NVE ensemble.\n* The paper claims that the student model outperforms the teacher model, which is confusing. I suspect this is because the energy and force labels used in training come from the dataset itself. While the inclusion of Hessian loss is shown to be better than using only energy and force loss, this highlights the importance of derivatives. Since the Hessian matrix introduces force derivatives, could training a traditional MLFF from scratch, with forces computed via autograd, achieve similar or better results? Additionally, the statement in Fig. 3b about \"speculating that s may play a similar role as the batch size\" is akin to conclusions from traditional MLFF training, suggesting that direct autograd training without Hessian distillation might yield similar outcomes. The authors could compare such models to illustrate the Hessian's impact.\n* Regarding the appendix experiment using force for distillation, what is the specific loss function? If I'm correct in understanding that the Hessian term in Eq. 3 is replaced by the force term, could the poor results from force distillation be due to the inherent force label loss in the data, where the teacher model's force predictions contradict the data labels? This implies that the force distillation setup might be flawed.\n\n[1] Shoghi N, Kolluru A, Kitchin J R, et al. From Molecules to Materials: Pre-training Large Generalizable Models for Atomic Property Prediction[C]//The Twelfth International Conference on Learning Representations.\n\n[2] Fu X, Wu Z, Wang W, et al. Forces are not Enough: Benchmark and Critical Evaluation for Machine Learning Force Fields with Molecular Simulations[J]. Transactions on Machine Learning Research."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* This paper proposes a new method to distill MLFF foundation models into smaller, faster MLFFs with Hessians, which is highly beneficial for simulating realistic systems.\n* The paper is written in a clear and concise manner, facilitating effortless understanding."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper introduces a method to distill MLFF foundation models into smaller, faster MLFFs with energy Hessians, achieving significant speed improvements while maintaining performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Related concerns are discussed in the questions section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
        "value": "* Why is there a tank in Figure 1?\n* While the Hessian has a nice physical interpretation, as the authors point out, do higher derivatives improve transfer further? One way to improve runtime in such cases would be to use Forward on Backward differentiation."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Clarity and Readability: The paper is very well-written and accessible, making the proposed method easy to follow.\n2. Practicality: The approach is straightforward to implement and is cost-effective relative to similar knowledge distillation methods.\n3. Experimental Validation: MD simulations performed validate the method, underscoring the practical benefits of the proposed technique.\n4. Implementation Insights: The paper also offers practical implementation guidance for practitioners, which is particularly useful for real-world applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel technique for knowledge distillation from foundation model force fields to smaller, faster, and more specialized force fields. The core idea is to align the Hessians of the energies with respect to atomic positions between the teacher (foundation model) and student models, facilitating efficient knowledge transfer. Experiments demonstrate that this approach improves stability and accuracy in the derived force fields compared to simpler distillation methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The baseline methods used for comparison seem to perform notably poorly, raising questions about fairness. This might be genuine, but the absence of hyperparameter tuning for the baselines undermines this. The authors specifically tuned their method by adjusting the Hessian distillation loss term, \"we reduce the weight of the Hessian distillation loss term, λKD, by a factor of 4 during training once the student’s validation loss on the original objective, LEF (φ), becomes lower than that of the frozen FM, LEF (ψ)\" (l.207-209). Further clarification on the role of this schedule, as well as the sensitivity to λKD, would be helpful. Would similar tuning or scheduling benefit the alternative approaches as well?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We distill large machine learning force field foundation models into small, specialized models using knowledge distillation from energy Hessians."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Fast, Specialized Machine Learning Force Fields: Distilling Foundation Models via Energy Hessians},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1durmugh3I},\nnote={under review}\n}"
},
"abstract": {
        "value": "Foundation models trained on large datasets have transformed machine learning by leveraging general-purpose representations to solve many downstream tasks. A similar paradigm is arising in Machine Learning Force Fields (MLFFs), a powerful tool in computational chemistry for a variety of atomistic modeling tasks. Recent MLFF foundation models are enabled by a combination of increasing ab-initio data availability and larger model sizes. Although MLFF foundation models have begun to close the accuracy gap relative to first-principles methods, there is still a strong need for faster inference speed. Additionally, while model development is increasingly focused on general-purpose models which transfer across chemical space, practitioners typically only study a small subset of systems at a given time. This underscores the need for fast, specialized MLFFs relevant to specific downstream applications. In this work, we introduce a method to transfer general-purpose representations from MLFF foundation models to smaller, faster MLFFs specialized to specific regions of chemical space. We formulate our approach as a knowledge distillation procedure, where the smaller \"student\" MLFF is trained to match the Hessians of the energy predictions of the \"teacher\" foundation model. By selectively subsampling rows of the Hessian corresponding to individual atomic coordinates, we significantly reduce the number of required backward passes. This ensures that distillation incurs a small computational cost relative to training the original foundation model. We demonstrate our approach across multiple recent foundation models, large-scale datasets, and chemical subsets. Our results demonstrate that our specialized MLFFs can be up to 20 $\\times$ faster than the original foundation model, while retaining, and in some cases exceeding, its performance. More broadly, our work suggests a new paradigm for MLFF development, in which foundation models are released along with smaller, specialized simulation \"engines\" for common chemical subsets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"machine learning force fields",
"graph neural networks",
"knowledge distillation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5ffd1a42a7d6f083746aa49c305ec342845a3076.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Towards Fast, Specialized Machine Learning Force Fields: Distilling Foundation Models via Energy Hessians"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1e5fX6X44w | Mean-field Continuous Sequence Predictors | main | Active | Mean-field graphon games;Mean-field games as continuous sequence prediction;Mean-field Neural SDEs | learning on time series and dynamical systems | 6;6;6 | 2;2;2 | 3;3;3 | 3;3;3 | 2;2;2 | 6 | 2 | 3 | 3 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- What are some limitations of the method?\n- How does the runtime compare to other baseline models that you compare to?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The proposed model seems theoretically well-motivated.\n- The empirical evaluation shows a superior performance on different benchmarks compared to existing models.\n- The paper describes two ablation studies that analyze the model's robustness to noise where it performs superior to the Mamba baseline and that analyze performance as the number of base predictor models increases, which leads to improved performance as predicted by the presented theory."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a new model for predicting continuous sequences and discusses its theoretical underpinnings as well as its empirical evaluation on different benchmark datasets and in comparison to a range of baseline models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "- This paper is very technical and builds on many rather sophisticated mathematical concepts that I would assume many readers not to be familiar with (and this in itself is of course not a weakness). I believe the presentation of the content could be improved so that the paper and the main concepts become more understandable, e.g. I believe a more high level introduction to the model or some of the key concepts (e.g. graphons) would make it easier to follow the paper. Secondly, I sometimes was wondering about the notation of particular equations which were only clarified much later in the text. Examples for this are: $\mathcal{\nu}$, $\mathbb{W}$ or <> in definition 2.1, or the (subscript) E in $\mathbb{E} $$[||\mathbb{E}X_{u_\infty}^{α^*} (t) − y||^2_E]$ in the main text. \n\n- I appreciate the overview figure 1 and can see that a lot of work went into that. However, I think the caption could be improved, e.g. there are three subfigures but the caption only mentions \"left\" and \"right\". Also what is the difference between \"real observations\" and the (observed?) values of u? And what is y in the legend?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "- I am not very familiar with the mean field game literature; could the authors point to some references and include them earlier in the paper (before starting their own problem formulation)?\n- Could the authors explain/illustrate how long it takes for their method to converge/train as compared to baselines?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Seems to be the first paper to propose such a method to learn time series by casting as mean field game, which has been applied successfully in other areas of control and generative modeling/prediction.\n- Paper relatively well organized (see weakness)\n- Author provide numerical illustration on three datasets"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "The authors cast the time-series prediction problem into a mean-field game, where they treat the time-series as arising from a controlled stochastic differential equation (mean-field graphon dynamic), which is given in terms of a continuum of mean-field predictors. The authors thus cast the problem as finding an optimal control policy for the dynamics by solving the associated Bellman equation. The authors discuss how to find such a policy by gradient descent in their mean-field game setting. The authors illustrate their method on a number of real-world data sets and perform two ablation studies."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "- The way things are introduced is a bit confusing in Section 2. There is little motivation for definition 2.1, which has limited reference to related literature. Many terms are not defined/explained until much later, e.g. the function $b$ is never explicitly defined or explained anywhere in the manuscript, and there are multiple overlapping uses of the variable $W$ with different meanings.\n- The authors cite little related literature for their work, e.g., the connection between this work and (Liu et al. 2022) was not entirely clear to me.\n- Empirical evaluation doesn’t include runtime results, e.g., the convergence rate of the solution to the scheme presented in 3.2 (w.r.t. the training of other competing models) is not specified.\n\nErrata:\nLine 92: Why is initial condition $y_u \sim p(u,y)$ measure dependent on $y$?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "1. In definition 2.1, define $\\psi$ so that in general the reader can know what it is, or maybe move definition 2.2? I was confused about $\\psi$ (although I can guess what it is); it will be easier for other readers to understand the equation if the two definitions are written together. \n2. Since the graphon is modeled by a neural network, is the linear assumption, where the mean-field drift and the Ito drift are linearly added, still necessary in Eq (1)? If not, is it possible to extend this framework to the McKean-Vlasov SDE to be more general than just the mean field SDE? (see [1], [2])\n3. Notation discrepancy in Eq (3)? The eigenfunction was defined as $\\phi_l$ but written as $\\varphi_l$.\n4. Why do the authors assume the form of the graphon when $W_{\\alpha} (u,v)$ are already modeled as neural networks? What is the implication of completely assuming the graphon to be a neural network without assuming its form? Can one still recover the temporal decay and cyclic properties when no such form is assumed?\n5. Related to the above comment, [1] introduced an implicit measure architecture that learns the mean field through a change of measure from the space of neural network weights to the observation space. Could this be implemented under this system without explicit assumption on the form of the graphon? \n6. Is there a way to combine the temporal decay and the cyclic properties into one form of graphon?\n7. Can the number of samples be incorporated in the cost function such that the most cost-effective number of samples can be obtained?\n\n[1]: Yang, Haoming, et al. \"Neural McKean-Vlasov Processes: Distributional Dependence in Diffusion Processes.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2024. \n[2]: Sharrock, Louis, et al. \"Online parameter estimation for the McKean–Vlasov stochastic differential equation.\" Stochastic Processes and their Applications 162 (2023): 481-546."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Generally the idea is novel and nicely motivated. \n2. Although it may seem incremental to add just the mean-field SDE based on graphon, the theoretical analysis and the related algorithms are non-trivial. \n3. Demonstrated strong empirical improvement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors described a new method of modeling continuous time series through mean-field SDE where the mean-field interaction is computed over a graphon. It can be essentially be seen as an ensemble method (over a graphon) to model timeseries, where the graphon is semi-parametric with predetermined (temporal decay and cyclic) form. A stochastic controller is then applied to control the values of the parameters of the graphon. The authors also developed a gradient-based optimization algorithm to optimize the neural network used for stochastic control. Overall the paper has sound theoretical motivation and good empirical results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "1. Confusion in notations. Throughout reading the manuscript, I would constantly be lost in new notations. Please make sure the notations are consistent. \n2. Lack of limitations discussion, as there is no discussion of the limitations of the proposed methods. \n3. Lack of details in experiments and implementation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024meanfield,\ntitle={Mean-field Continuous Sequence Predictors},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1e5fX6X44w},\nnote={under review}\n}"
},
"abstract": {
"value": "We propose a novel class of neural differential equation models called mean-field continuous sequence predictors (MFPs) for efficiently generating continuous sequences with potentially infinite-order complexity. To address complex inductive biases in time-series data, we employ mean-field dynamics structured through carefully designed graphons. By reframing time-series prediction as mean-field games, we utilize a fictitious play strategy integrated with gradient-descent techniques. This approach exploits the stochastic maximum principle to determine the Nash equilibrium of the system. Both empirical evidence and theoretical analysis underscore the unique advantages of our MFPs, where a collective of continuous predictors achieves highly accurate predictions and consistently outperforms benchmark prior works."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Mean-field graphon games",
"Mean-field games as continuous sequence prediction",
"Mean-field Neural SDEs"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e9c4b2df8528d3d798413c65666e6cfe7aae07e7.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Mean-field Continuous Sequence Predictors"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1eI236MqEA | LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models | main | Active | Multi-Concept Customization;LoRA Integration;Training-Free | generative models | 3;3;6;6 | 5;4;4;3 | 3;1;3;3 | 2;1;4;3 | 3;2;3;3 | 4.5 | 4 | 2.5 | 2.5 | 2.75 | -0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N.A."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "How to address the issue of adding restrictions on features that significantly reduce the authenticity of generated results?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The ideas are clearly presented."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "The paper presents a modified LoRA-based multiple-concept generation model. By introducing three loss functions, the phenomena of concept vanishing and confusion are somewhat suppressed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "The paper was prepared carelessly. First, the paper exceeds the length limit. Second, the authors claim that the proposed module is training-free; however, the main contribution is three loss functions. Third, the visualization results are poor. For example, in Figure 5, the persons appear pasted onto the background, and the results look unreal. I think the results of the Mix-of-Show method are far better."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What would be the performance of LoRA-Composer when applied to datasets that exhibit more complex interactions among subjects?\n2. Would the fine-tuning of layers beyond the U-Net architecture lead to further enhancements in the preservation of concepts?\n3. If the background inherently includes elements of the foreground, would this affect the effectiveness?\n4. Would the presence of overlapping layout boxes influence the outcome?\n5. Are there any errors in Fig.3a and Fig.3b? It seems that m1-v1 and m2-v2 do not match.\n6. For two similar foreground concepts, such as people who look very similar, is there a possibility of concept confusion?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper's technical approach appears well-founded. The concept isolation and injection constraints effectively reduce concept vanishing and confusion, supporting the paper's claims of improved performance in multi-concept generation. The latent re-initialization technique also adds rigor, ensuring spatially accurate representation of concepts.\n2. The paper is clear and logically structured, guiding readers through the model's design, methodology, and experimental evaluation. Visual examples illustrate improvements over other models.\n3. The proposed LoRA Composer is an innovative solution, and the selected baseline should be the latest. In comparison, the model performance of this paper is outstanding, and there are abundant comparative and ablation experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents LoRA-Composer, a training-free framework designed to manage multi-concept image customization using Low-Rank Adaptations (LoRAs) with layout and textual prompts. LoRA-Composer addresses two key challenges in multi-concept customization: concept vanishing (loss of intended concepts) and concept confusion (misattribution of characteristics between subjects). Key features include concept injection constraints, concept isolation constraints, and latent re-initialization for spatial focus. Experimental results show LoRA-Composer outperforms existing methods in qualitative and quantitative metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Despite being training-free, the model’s architecture (especially concept isolation and injection constraints) is relatively complex and might limit ease of implementation.\n2. Evaluation Scope: The method is tested on select datasets, including COCO, FFHQ, and CelebA-HQ, featuring anime and realistic styles. Testing on broader datasets could enhance its robustness claims.\n3. A discussion should be added on whether this method is easy to extend, whether it is applicable to various variants of stable diffusion, and it is not yet clear which version of Stable Diffusion is used in this paper.\n4. There seem to be some defects in the figure drawing in the article, such as the arrow pointing to the text encoder in Fig. 2, and there is also a lack of explanation for the data flow related to Fig. 2.\n\nAlthough the paper has some shortcomings, its overall innovation and the integrity of the experiments are good."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The authors claim that concept injection constraints effectively avoid concept missing (at line 264), but Figure 6(d) still has that issue. So, does the concept isolation constraints (CI) also contribute to the mitigation of concept missing?\n\n- How does the value of k in topk(.) reduce function in the loss components (equation (3) and (7)) affect the results? For example, using larger k might lead to larger generated concepts?\n\n- In scenarios with overlapping box layouts, such as “A [v1] person hugs a [v2] dog,” how effectively does LoRA-Composer perform? It appears that the calculations in these situations may result in many artifacts in the outputs.\n\n- There's a minor analysis point that I think should be clarified. In my view, the Gradient Fusion optimization combined with ED-LoRA introduced in Mix-of-Show [1] is not the primary factor reducing concept identities when generating multi-concept images (e.g., prompts containing multiple concept tokens like “A [v1] man and [v2] woman”). Rather, it's more closely tied to the \"incorrect behavior\" in the cross-attention and self-attention modules that you are aiming to address. This suggests that the LoRA-Composer method could also be applied to Mix-of-Show or other methods using Gradient Fusion.\n\n[1] Gu, Yuchao, et al. \"Mix-of-show: Decentralized low-rank adaptation for multi-concept customization of diffusion models.\" Advances in Neural Information Processing Systems 36 (2024)."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper identifies and tackles significant challenges in the multi-concept customization task, which are concept vanishing and concept confusion, by examining the cross-attention and self-attention layers within the U-Net of Stable Diffusion.\n- The motivations for the contributions are explained well with informative figures. \n- Extensive experiments and ablation studies are conducted to showcase the capability of the proposed method.\n- LoRA-Composer can produce visual stunning multi-concept outputs in a training-free manner and does not require the image-based conditions like canny edge or pose estimations. It could potentially have wide applicability across several applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a training-free model for integrating multiple LoRAs called LoRA-Composer. From given box layouts, a global prompt and local prompts, the proposed method addresses the concept vanishing and confusion issues in multi-concept customization by proposing Concept Injection Constraints and Concept Isolation Constraints, respectively. Concept Injection Constraints modify the cross-attention layers in the U-Net to perform Region-Aware LoRA Injection and Concept Enhancement Constraint, which refine cross-attention maps using Gaussian weighting and adopt a strategy to obtain box-spread attention values. Meanwhile, Concept Isolation Constraints focus on self-attention layers to limit the interaction between queries within a specific concept region and those in other concept regions.\nThe authors also propose latent re-initialization to obtain better prior latent values for the generation process. LoRA-Composer achieves a notable enhancement compared to standard baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The novelty of the proposed method is not enough for ICLR:\n + Some contributions should be clarified as either \"inspired by existing work to develop\" or simply \"adopted,\" in order to emphasize the novelty of the paper:\n . Region-Aware LoRA Injection: Similar to Regionally Controllable Sampling in Mix-of-Show [1]\n . Gaussian weighting in Concept Enhancement Constraints: Similar to the method proposed in BoxDiff [2], with Gaussian weighting from Attend-and-Excite [3].\n + For Region Perceptual Restriction, the idea of minimizing interaction between queries of the foreground and background areas in self-attention is quite popular in existing work related to attention manipulation, such as Attention Refocusing [4].\n- The writing in some parts is quite ambiguous:\n + Region-Aware LoRA Injection at line 200: After obtaining h_i in equation (2) at line 215, what do we do next?\n + L_c loss in equation (3) at line 240: What is it? It suddenly appears there without any explanation.\n + Concept Region Mask in Line 270: What do we use it for?\n- The prompts used for qualitative evaluation should be mentioned (Figure 5, Figure 6)\n\n[1] Gu, Yuchao, et al. \"Mix-of-show: Decentralized low-rank adaptation for multi-concept customization of diffusion models.\" NIPS 2024\n[2] Xie, Jinheng, et al. \"Boxdiff: Text-to-image synthesis with training-free box-constrained diffusion.\" ICCV 2023\n[3] Chefer, Hila, et al. \"Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models.\" ACM Transactions on Graphics (TOG) 2023\n[4] Phung, Quynh, Songwei Ge, and Jia-Bin Huang. \"Grounded text-to-image synthesis with attention refocusing.\" CVPR 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. More explanation of $\\mathcal{L}_c$ in Eq. 3 is needed. What is the meaning of creating $\\mathcal{L}_c$? Its form requires the weight within the concept mask to become larger, but why increasing the weight can restrain the activation for the edge region is not clear. Additionally, why can Gaussian weight restrain the activation in the edge region? Can performing a low-pass filter such as blurring get the same results?\n2. Sec. 3.4 for latent re-initialization is hard to follow. What is replacing the layout area $z_t[M_i]$ with the latent area? What is the latent area, and how can we obtain it? Missing the latent area can make the paragraph hard to follow.\n3. It is observed that the layout for each concept discussed in the paper does not overlap. Is this necessary for the approach to work? What would the outcomes be if some of the boxes overlapped?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors propose useful constraints, including concept enhancement and concept isolation, which is an interesting design for the community and can be seen as the plug-and-play objective for future applications.\n2. User study is conducted to bridge the gap between human preference and machine metrics. Their results have provided a huge gap under conditions without further image conditions.\n3. The model and approach design illustrations are clear and easy to follow. Also, the visualization for different approaches and designs are well-structured."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a new LoRA-based approach for customizable generation for diffusion. Their main contributions include proposing a training-free approach and designing new strategies to inject and isolate the concept to ensure the targeted objects are generated without interference. Their approach has been shown to surpass existing SOTA (e.g., Mix-to-Show, Paint-by-Example) with a higher CLIP score on image preservation and text alignment, as well as the mIoU score with the layout."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors state that their approach can deal with the concept vanishing issue in Sec. 3.1 but no quantitative comparison to support this statement. For instance, the author can provide a metric that counts how many predicted boxes are obtained with GroundingDINO and compare it with the GT layout. Otherwise, only visualization cannot provide any useful information on how powerfully the proposed approach can deal with the vanishing issue.\n2. The authors propose a new dataset but do not provide the results for the existing one proposed in Mix-of-Show. Further discussion or explanation is needed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024loracomposer,\ntitle={Lo{RA}-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1eI236MqEA},\nnote={under review}\n}"
},
"abstract": {
"value": "Customization generation techniques have significantly advanced the synthesis of specific concepts across varied contexts. Multi-concept customization emerges as the challenging task within this domain. Existing approaches often rely on training a fusion matrix of multiple Low-Rank Adaptations (LoRAs) to merge various concepts into a single image. However, we identify this straightforward method faces two major challenges: 1) concept confusion, where the model struggles to preserve distinct individual characteristics, and 2) concept vanishing, where the model fails to generate the intended subjects. To address these issues, we introduce LoRA-Composer, a training-free framework designed for seamlessly integrating multiple LoRAs, thereby enhancing the harmony among different concepts within generated images.\nLoRA-Composer addresses concept vanishing through concept injection constraints, enhancing concept visibility via an expanded cross-attention mechanism. To combat concept confusion, concept isolation constraints are introduced, refining the self-attention computation. Furthermore, latent re-initialization is proposed to effectively stimulate concept-specific latent within designated regions. Our extensive testing showcases a notable enhancement in LoRA-Composer's performance compared to standard baselines, especially when eliminating the image-based conditions like canny edge or pose estimations."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multi-Concept Customization",
"LoRA Integration",
"Training-Free"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/96fd9d298f7bbb76cd9721ed8caa1b242f469601.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1eMbYu0841 | A Gradient Descent Optimizer with auto-controlled large Learning Rates, dynamic Batch Sizes and without Momentum | main | Active | Machine Learning;ICRL;Optimization | optimization | 3;3;5 | 3;4;4 | 1;2;3 | 2;2;3 | 1;3;3 | 3.666667 | 3.666667 | 2 | 2.333333 | 2.333333 | 0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This paper can be followed easily and most heuristics are intuitive. The computation of the step size is cheap.\n\n- The experiments on the 2-dimensional example are interesting and show under certain settings (e.g., rotation) the proposed method can significantly outperform other adaptive step sizes such as Adam and RMSprop."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper derives a new step size schedule based on a quadratic model. The key observation is that \nthe optimal step size happens when current and previous gradients are orthogonal. Thus, their inner products play an important role in controlling the magnitude of the step size. Besides this, it introduces several heuristics to stabilize the training and improve the overall performance. Noticeably, it considers damping the learning rate increase when the function value rises (when compared against previous iterations); and gradually increasing the batch size when some criteria based on function values are met (to reduce the random noise when a local minimum is approached). It demonstrates the effectiveness of the proposed method mostly using vision experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed step size lacks theoretical guarantees even in the convex settings (or even convex quadratics?). I am not sure where the technical challenges are.\n\n- I don’t think the current method is compared fairly with other methods given all the heuristics added on top of the learning rate. It is difficult to tell where the gain (if there is any) comes from? Is it because of the step size schedule, or batch size increase, or iterates averaging (named as mean value boosting in the paper)? If iterates averaging were applied to other baselines, would the results change?\n\n- Another major concern is that there are many hyperparameters associated with the proposed method, which raises questions regarding its practical usability. How expensive are the tunings of these hyperparameters?\n\n- The experiments on language modelling are inconclusive."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "no further questions, see comments above"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed algorithm introduces not much extra computation cost while tuning the learning rate, as the tuning mechanism only depends on the norms of, and inner products between, past two gradient vectors.\n\nThere is less need in tuning the initial learning rate, compared with pure SGD. The algorithm is invariant under coordinate rotation, which could be an advantage over Adam and other Ada-family algorithms."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an optimization algorithm that dynamically tunes the learning rate and dynamics, based on its last two gradients. The algorithm is evaluated on neural networks with several standard benchmark datasets, and compared with SGD and Adam."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1: The design of the proposed algorithm is based on strong assumptions/conjectures which may not be true in practice. Specifically, it assumes that the loss function is a parabola in the line that goes through any two consecutive iterates. Moreover, it requires that the minimizer of the parabola happens at $x=0$. These conditions usually do not hold in modern deep learning where loss function is considered as highly non-convex, and is far from quadratic.\n\nIn addition, the paper did not verify these assumptions/conjectures in experiments.\n\n2: The proposed algorithm does not seem to improve empirical performance (according to the experiments of the paper). Without those additional techniques (FT, WD), ELRA actually performs quite worse than SGD. The successful run includes too many other techniques (boosting, FT, weight decay, gradient decay etc), it is not clear whether it is ELRA or those accessories that lead to a relatively good performance.\n\n3: [about the empirical term $\\kappa$]. The algorithm introduced the empirical term $\\kappa$. \nThere is no theoretical justification of introducing it\nThere is no explanation of the formula of $\\kappa$ (Line 151). Is it an empirical choice?\nThere is no (theoretical) justification of the claim “($\\kappa$) neutralizes random noise effects in neural networks”. It is hard to believe a single scalar can neutralize random noise. There must be some theory to support the claim. \n\n4: In line 123, the paper says “we expect the optimal $\\alpha_t$ for $x_t$ does not vary too much from the optimal $\\alpha_{t-1}$ for $x_{t-1}$. I could not see why this should be true. The algorithm makes discrete steps (some steps may be quite big), hence $x_t$ is not necessarily close to $x_{t-1}$. At least, the paper needs to experimentally verify the relation between $\\alpha_t$ and $\\alpha_{t-1}$.\n\n5: As for saddle points, the paper only looked at a special type $f(x)=x_1^2-x_2^2$. 
However, geometry near saddle points can be much more complicated than this special case, and the analysis of the special case may not generalize."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please answer the issues and questions in the Weakness and point out my potential misunderstandings. I am happy to discuss and enhance my rate."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "* The authors have fused several accelerating tricks in the field of optimization into their optimizer, it seems to surprisingly work well.\n* The authors provide interesting derivations for their design of the optimizer"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work presents a new fast gradient based momentum-free optimizer algorithm with dynamic learning rate and dynamic batch size. They evaluated their algorithm on several benchmarks and made some basic empirical achievement."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* It needs to clarify that, since this optimizer introduces adaptive learning rate for each parameter, then momentum-free shouldn’t be an advantage of this work at efficiency, the cache of learning rates is equivalent to cache momentum practically.\n* In Section 2, the definition of the local minimum of $\\alpha$ is unclear, as the authors assume that $f$ is convex, whereas it does not straightly lead that $f(x_t)$ is convex for $\\alpha$, I am not sure about the effectiveness of devoting “local minimum” for faster optimization theoretically.\n* The key derivation in Section 2 first appeared in the delta-bar-delta method in [1], in which it starts a series of following works, please at least review these works in the paper, and compare how the key idea of ELRA demonstrates advancement. In my opinion, the reasons why the old trick doesn’t last long in the application of optimization may be various. However, this work lacks a considerable review to the prior works.\n* The efficiency claim in line 257 to line 258 sounds unprofessional: This gives ELRA a roughly 10% computation advantage over Adam and Lion, when used with the same batch size. The authors should at least provide some experimental analysis to prove it.\n* It’s not clear what problem or challenges the study mainly aims to address, I checked 7 subsections in Section 3, and found no necessary or professional reason for introducing such or that trick in this optimizer. It seems like an addition of several existing works, but with poor writing.\n* Showcasing “fast” needs comprehensive experiments, evaluations on some toy datasets seem far away from the word “enough”. 
Could the authors add some speed analysis on those real-world benchmarks compared with baseline optimizers like Adam, Lion, SGDm?\n* Typo: line 479: Our experiments suggest that ELRA shares this behaviour with SDG.\n* I didn’t find out the definition of “ELRA+FT”, please define it somewhere conspicuous, since it looks like this line performs best in the results but I could not find what it is.\n* In conclusion, this paper has a limitation in its presentation, some words seem unprofessional. I suggest the authors to re-organize the whole writing to tell a better story and show a better result.\n\nReferences:\n[1]: Bernard Widrow, Marcian E Hoff, et al. Adaptive switching circuits. In IRE WESCON convention record, volume 4, pages 96–104. New York, 1960."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a novel fast optimizer with self-adjusted learning rates and batch sizes, without momentum."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A Gradient Descent Optimizer with auto-controlled large Learning Rates, dynamic Batch Sizes and without Momentum},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1eMbYu0841},\nnote={under review}\n}"
},
"abstract": {
"value": "We present a novel, fast gradient based momentum-free optimizer algorithm with dynamic learning rate and dynamic batch size. The main ideas are to exponentially adapt the learning rate $ \\alpha $ by situational awareness, mainly striving for orthogonal neighboring gradients, and to increase the batch size when the gradients become too noisy, leading to random walks rather than gradient descent. The method has a high success and fast convergence rate and relies only on few hyper-parameters, providing greater universality. It scales only linearly (of order $O(n)$) with dimension and is rotation invariant, thereby overcoming known limitations. The optimization method is termed ELRA (Exponential Learning Rate Adaption). The impressive performance of ELRA is demonstrated by experiments on several benchmark data-sets (ranging from MNIST to ImageNet) against common optimizers such as Adam, Lion and SGD."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Machine Learning",
"ICRL",
"Optimization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b5beef05b3fe5db4d444ecf7e2021014a410953c.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "A Gradient Descent Optimizer with auto-controlled large Learning Rates, dynamic Batch Sizes and without Momentum"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1eQT9OzfNQ | Long Context Compression with Activation Beacon | main | Active | Context Compression;Long Context LLMs;LLM Memory | foundation or frontier models, including LLMs | 5;5;6;8 | 4;5;3;3 | 3;2;3;3 | 3;3;3;3 | 2;2;3;3 | 6 | 3.75 | 2.75 | 3 | 2.5 | -0.738549 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How are rotary embeddings managed for the beacon tokens? Although the LLM processes a fixed chunk at a time, the relative positions of the beacon tokens vary across chunks. How are positional embeddings applied in these cases?\n- Additional parameters are added and fine-tuned for self-attention projections specific to the beacon tokens. What is the impact of these added parameters on VRAM usage and latency? If the cost is significant, could LoRA fine-tuning be effective for the proposed activation beacons approach?\n- What portion of time is allocated to prefilling and decoding? While the proposed method reduces some recomputation, it may require customized attention masks or iterative context processing, which could lack efficient kernel implementation or result in extra kernel calls. Please provide a latency breakdown of prefilling and decoding for specific workloads (e.g., 32/128k context, 128 decoded tokens) and compare it with the flash attention full-context baseline.\n- How does the proposed approach affect fine-tuning throughput? Please compare its performance with Full-FT.\n\nI am open to adjusting my ratings if all concerns and questions are adequately addressed."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Compressing by chunks at each layer avoids the need for recomputation and addresses gradient back-propagation challenges present in some prior baselines that rely on recursive dependencies from final-layer outputs. This design enhances both training and inference efficiency.\n- The chunking approach and the interleaved insertion of beacon tokens are straightforward and intuitive.\n- Evaluations on various benchmarks indicate that the proposed approach generally outperforms the KV cache compression and “soft-prompt” compression baselines, achieving notable reductions in both inference time and memory usage.\n- Training with randomly sampled compression ratios enables flexible compression ratios during testing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces “Activation Beacon,” a plug-in module to conduct long-context compression for LLMs. The proposed approach progressively compresses the activations at each layer and can be trained in the conventional auto-regressive way of language modeling. The authors demonstrate the benefits of this approach through evaluations on various long-context tasks for compression quality and inference efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In addition to LongBench and NIAH, it is essential to evaluate the proposed approach on newer, more challenging benchmarks, such as RULER [1].\n- Some recent context compression baselines, including CEPE [2] and LLoCO [3], are not discussed in the paper and should be included for a more comprehensive discussion or comparison.\n\n[1] Hsieh et al. RULER: What's the Real Context Size of Your Long-Context Language Models? COLM 2024. \n[2] Yen et al. Long-Context Language Modeling with Parallel Context Encoding. ACL 2024. \n[3] Tan et al. LLoCO: Learning Long Contexts Offline. EMNLP 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Activation Beacon reduces inference time by 2x and KV cache memory costs by 8x compared to the uncompressed baseline.\n\n2. The method supports adaptive compression ratios, allowing flexibility for different tasks and contexts.\n\n3. The proposed model maintains short-context capabilities, preserving the performance of the original LLM."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces \"Activation Beacon\", a compression method designed to enhance long-context processing efficiency in LLMs. The approach compresses the activations of keys and values in transformer layers, avoiding bottlenecks associated with traditional soft prompt methods. Additionally, a progressive compression workflow compresses each context unit in chunks, allowing the model to handle longer contexts than the original LLM's window. Experimental results show Activation Beacon achieves significant memory and computation savings, with minimal loss in performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The performance of this method may vary with model size. Current evaluations focus on medium-sized models, lacking validation on larger-scale models, leaving its effectiveness and applicability in very large models underexplored.\n\n2. The added complexity of managing beacon tokens and compression ratios increases implementation overhead for end-users, particularly when adapting to different tasks. In addition to actual inference latency, specific memory usage data across implementations would help clarify practical resource requirements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "overall, this paper is novel and idea is well presented. please add more techniques for comparison so that users can choose different method."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper presents an efficient method to compress long contexts, reducing memory usage by up to 8x and speeding up inference by 2x.\n- Its progressive, fine-grained compression approach maintains high compression quality, allowing the model to handle longer inputs than its built-in context window.\n-It supports flexible compression ratios, preserving model performance across various long-context tasks without degrading short-context capabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper compresses activations (keys and values) rather than using soft prompts, facilitating a progressive, fine-grained compression process. Specifically, it first partition input into small chunks, interleaving special beacon tokens that accumulate contextual activations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Lack of Comparison with KIVI: The paper does not provide a direct comparison with KIVI, a relevant compression method that could offer insights into the performance trade-offs.\n- GPU Time Omission: The paper does not report GPU training or inference time, leaving uncertainty around the practical computational cost and efficiency of the proposed method.\n- Scalability Concerns: The method requires 8 A800 GPUs to train a 7B parameter model, raising concerns about its scalability to larger models like 70B, where computational demands could become prohibitive.\n- Limited Comparative Analysis: The paper would benefit from including more baseline methods, particularly compression-based approaches like KIVI, KV-layer shared compression methods such as CacheGen, and relative-position encoding strategies like LM-Infinite.\nAdditional References Needed: Incorporating comparisons with relevant works, such as LM-Infinite [1] for dynamic context management, CacheGen [2] for efficient context loading, and KIVI [3] for asymmetric quantization of KV caches, would strengthen the evaluation and highlight the advantages and limitations of the proposed approach.\n\n[1] LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models\n[2] CacheGen: Fast Context Loading for Language Model Applications\n[3] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My main question is in the \"regarding differences with previous works\" above. I want to understand if the results are improved mainly from decreasing chunk size, or if there's another difference between soft tokens and beacon tokens that explains the difference.\n\nAlso, what window size do you use? From Table 1, your model has a context length of 32k. I'm guessing you use this window size, but I don't see it explicitly stated, and line 184 suggests that 1024 would be a common window size, so I'm not sure. Since LongBench has only a few examples above 32k, I'm guessing the window logic isn't really used much (unlike for Needle In a Haystack)"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper focuses on an impactful area (long-context efficiency for LLMs).\n- The paper provides a relatively simple idea that is well-explained. I view simplicity as a plus - if a simple idea can give strong accuracy improvements, it's far better than an unnecessarily complicated idea.\n- The paper demonstrates strong results. Table 2 demonstrates strong accuracy at good latency on standard benchmarks for long context. Their method is competitive with full fine-tuning and better than baselines. Table 1 provides strong accuracy as well (though latency is missing).\n- The figures do a good job of explaining what's going on. Figure 1 and Figure 2 give nice overviews of the method.\n- The method is computationally efficient compared to fine-tuning. Their \"pretraining\" (starting from an already-pretrained model) only requires 1B tokens which is very few.\n- The paper ablates design choices (Table 4).\n- The paper is generally well written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a method called Activation Beacon for efficient long-context processing. The method adds learned \"beacon\" tokens at regular intervals in the input query. These tokens are expected to learn \"summaries\" of the text. At inference time, when processing long contexts, the beacon tokens are retained and the other context tokens are discarded. Thus, the beacon tokens essentially provide a summary of the context. The authors evaluate their method in comparison with a few other recent methods for efficient long context processing. Their method significantly improves results on LongBench and Multi-Needle-in-a-Haystack. The authors also provide ablations for various design choices."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In Table 1, it's not obvious whether the latencies are comparable. The compression ratio isn't mentioned.\n- line 368: why do you use adaptive compression for llama-2 and uniform compression for qwen?\n\nMy main perceived weaknesses are regarding differences with previous works, and understanding why this method is performing so well:\n- line 135: \"ICAE and AutoCompressor... segment the long context into chunks and compress each chunk. However, both of them compress the context into soft tokens\" <- how are these soft tokens different than beacon tokens? (similarly, on line 373-374, you mention soft tokens being a drawback)\n- line 137: \"Their compression workflow also lacks fine-grained handling of the chunked inputs, resulting in inferior compression quality\" <- it seems like all they would need to do to allow \"fine-grained handling of the chunked inputs\" is just choose a smaller chunk size, so that the soft tokens appear more frequently. Is that right?\n- - If this is true, it seems like your main contribution is the insight that soft tokens should be distributed evenly through the context. Would doing this massively improve the accuracy of ICAE and AutoCompressor? It seems like this is the main discovery, but I'm left wondering if I'm missing some more fundamental difference.\n\n[Minor]:\nline 47: \"it it\" -> \"it\"\nline 53: \"alternamtive\" -> alternative\nline 371: \"highligh\" -> \"highlight\"\nline 483: \"scope\" -> score\nTable 2: give units of \"latency\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024long,\ntitle={Long Context Compression with Activation Beacon},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1eQT9OzfNQ},\nnote={under review}\n}"
},
"abstract": {
"value": "Long context compression is a critical research problem due to its significance in reducing the high computational and memory costs associated with LLMs. In this paper, we propose Activation Beacon, a plug-in module for transformer-based LLMs that targets effective, efficient, and flexible compression of long contexts. To achieve this, our method introduces the following technical designs. \n1) We directly compress the activations (i.e. keys and values at every layer), rather than leveraging soft prompts to relay information (which constitute a major bottleneck to encapsulate the complex information within long contexts).\n2) We tailor the compression workflow, where each fine-grained input unit is progressively compressed, enabling high-quality compression and efficient computation during both training and inference. \n3) We train the model through compression-based auto-regression, making full use of plain texts and instructional data to optimize the model's compression performance.\n4) During training, we randomly sample a compression ratio at each step, teaching the model to support a wide range of compression configurations. \n\nExtensive evaluations are conducted on various long-context tasks whose lengths (e.g., 128K) may far exceed the maximum training length (20K), such as document understanding, few-shot learning, and Needle-in-a-Haystack. Whilst existing methods struggle to handle these challenging tasks, Activation Beacon maintains a comparable performance to the uncompressed baseline across various scenarios, \nachieving a 2x acceleration in inference time and an 8x reduction of memory costs for KV cache."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Context Compression",
"Long Context LLMs",
"LLM Memory"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5b3521f89b796b894da27de96764376fddf1fc25.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Long Context Compression with Activation Beacon"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ebgtm7P10 | Fixing Data Augmentations for Out-of-distribution Detection | main | Withdraw | OOD Detection; Data Augmentation | alignment, fairness, safety, privacy, and societal considerations | Haipeng Xiong;Kai Xu;Angela Yao | ~Haipeng_Xiong1;~Kai_Xu7;~Angela_Yao1 | 3;3;5;6 | 4;4;4;3 | 2;2;2;3 | 2;2;3;3 | 2;2;2;3 | 4.25 | 3.75 | 2.25 | 2.5 | 2.25 | -0.777778 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Could the authors clarify the computational cost of AugRevise and RegMixup?\n- The v2 ResNet50 yields an ImageNet-1k accuracy of 80.92%. The accuracies in Table 4&5 for AugRevise are significantly lower and v2 models are omitted from the Table. Are there more setups where training with AugRevise leads to a significant drop in accuracy compared to the v2 models?\n\n\nRegarding clarity:\n- Section 5.2, especially lines 360-377 introduces the central part of AugRevise, but it is very short, which is in contrast to the previous Sections where the effects of Mixup/LS were explained thoroughly with several ablations. Section 5.2, in particular the introduction of the loss function requires, more explanation and justification\n- Figures 1 and 2 are hard to read on paper. Larger markers and more distinguishable colours would help. It is not always clear which text belongs to which dot.\n- Line 475: Should it be ImageNet-1k instead of ImageNet200?\n- Throughout the paper (e.g. Figure 3 and 4, but also in most other places): Specifying for each Table and Figure which dataset (ID and OOD), which score and which model is reported would make the Tables and Figures more self-contained\n- Regarding OpenOODv1.5 reporting: Which numbers are usually reported? In OpenOOD there is commonly a split between near and far OOD (as reported in the Appendix), but in the main paper, there is only one number. Is it the average?\n- Throughout the paper: The larger Tables are hard to digest, as there are many numbers but little structure. I suggest making the best methods per model bold and grouping the same models, perhaps also adding row colours.\n- In some Tables some methods are missing that appear in others, e.g. MDS in Table 19, ASH in Table 5 for AugRevise"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is straightforward: the authors identify a problem (reduced OOD detection performance models from a certain training script), provide an explanation (mixup and label smoothing) and a fix for it\n- The experiments identifying label smoothing and mixup as the problems are convincing and thorough\n- Especially AugDelete, while being a simple method, shows consistent and believable improvements\n- Investigations on the effects of training on OOD detection are often overlooked and a relevant subject to study"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors observe a drop in OOD detection performance of torchvision-v2 models compared to their v1 counterparts despite a gain in ID classification accuracy. They identify mixup and label smoothing as the root cause for the decrease in OOD detection performance, especially on logit-based detection methods via theoretical and experimental analysis. They devise two strategies to mitigate the problem: AugDelete finetunes the linear layer of pretrained models without the problematic augmentation strategies, and AugRevise adds a loss term regularizing the effect of the max-logit of samples with and without mixup for training from scratch. Experiments on the OpenOOD1.5 benchmark are provided."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the paper tells a consistent and mostly believable story, its scope is somewhat limited. In particular, it focuses on models from the torchvision v2 hub that were trained from scratch on the respective datasets. How the findings translate to models from other codebases (e.g. timm) with more diverse pretraining settings (e.g. ImageNet21k, Laion, CLIP, distillation, …,) or zero-shot models is unclear. As recent studies [1,2] have shown, SOTA results are often achieved for bigger models with large pretraining schemes especially with feature-based methods, so those setups would be interesting to look at. \n- It is unclear if there is additional computational cost associated with AugRevise and RegMixup. Since for those methods both the mixed and ‘clean’ sample are propagated through the network, this in principle doubles the batchsize and the computational cost (if the batch-size is fixed w.r.t. ‘Clean’ data, which is not explained in the paper). This would give AugRevise and RegMixup an unfair advantage over other baseline methods that only forward the mixed or only the clean samples. \n- The claim that “feature-based methods are likely similarly compromised” is not backed by the provided experiments. For instance, the auroc differences in Table 7/8/9 between v1 and v2 for KNN are marginal, for CIFAR10 even the best-performing model is a v2 model with KNN. For AugRevise, additional feature-based methods like Mahalanobis distance, relative Mahalanobis Distance, Vim, … are omitted in the experiments\n- AugRevise changes the from-scratch training compared to the torchvison v2 training script, but eventually still applies AugDelete, which sometimes leads to significantly lower ID accuracy (e.g. RN50 on IN-1k). It is unclear if other training methods, e.g. autoaugment or 3-Augment[4] or RSB [5] or others would not achieve similar results (potentially when combined with AugDelete). 
Also, to my understanding, only one model per dataset is investigated with AugRevise (ResNet-18 and ResNet-50).\n- The authors claim that their “empirical results challenge the conventional understanding of ID and OOD performance correlation”, but similar observations have already been made in previous work, e.g. in [3]\n- Proposition 4.2 relies on the assumption that the cosine similarity between ID samples is smaller than between ID and OOD samples. This is a somewhat strong assumption: If this were satisfied for most samples, it would allow to design of a good OOD detector based on cosine similarity. I would appreciate a discussion on the limitations of this assumption and how well it is justified.\n- There are several issues regarding the presentation and the clarity of the paper (details below in Questions)\n\n[1] Julian Bitterwolf, Maximilian Müller, and Matthias Hein. In or out? Fixing ImageNet out-of-\ndistribution detection evaluation. In ICML, 2023.\n\n[2] Galil, I., Dabbah, M., and El-Yaniv, R. A framework for benchmarking class-out-of-distribution detection and its application to imagenet. In The Eleventh International Conference on Learning Representations, 2023\n\n[3] Maximilian Müller, Matthias Hein. How to train your ViT for OOD detection, ICLR 2024 R2FM workshop\n\n[4] Touvron, H., Cord, M., and Jegou, H. Deit iii: Revenge of the vit. ECCV, 2022.\n\n[5] R. Wightman, H. Touvron, and H. Jégou, “ResNet strikes ack: An improved training procedure in timm,” arXiv preprint arXiv: 2110.00476, 2021"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Figure 2, it seems that \"all augs (v2)\" does not only reduce the OOD detection performance, but also reduce the ID accuracy. Please explain this apparent reduction in both OOD detection performance and ID accuracy. \nIn addition, could the authors consider grouping RE and TA together, and mixup and LS together, then adding these two new data points to Figure 2? This might provide additional insights into the combined effects of these augmentation strategies on both OOD detection and ID accuracy.\n\n2. In Equations (3) and (7), the standard cross-entropy (CE) loss function is missing the \"negative\" sign. While optimization can still proceed with a positive formulation, it’s important to clarify this deviation from the standard notation to avoid confusion."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The motivation is clear and strong. The observation that using data-based data augmentation degrades the OOD detection of the model is new. The paper is to show its solutions.\n\n2. The authors conduct extensive experiments across multiple architectures and benchmark datasets to support their claims.\n\n3. This paper also provides theoretical analysis for the proposed approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the impact of certain data augmentations on out-of-distribution (OOD) detection performance. The authors observe that two popular data augmentation techniques—label smoothing and mixup—although effective at improving in-distribution (ID) accuracy, degrade OOD detection performance. They provide empirical evidence and theoretical insights to explain this issue, highlighting that these techniques reduce the separability between ID and OOD samples in logit space, which is critical for effective OOD detection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper provides valuable insights into the effects of label-based augmentations (label smoothing and Mixup) on OOD detection. However, it would benefit from a broader exploration of other popular augmentation strategies, such as CutMix, to examine if these alternatives yield similar or contrasting impacts on OOD performance. Could you clarify the rationale behind selecting these specific four augmentation methods? It would be helpful to explain whether and how these choices align with the evolution of torchvision (from v1 to v2) and whether the findings could generalize to other augmentations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors observe an interesting phenomenon: the torchvision v2 models perform poorly in OOD detection compared to the torchvision v1 models. They find that this is due to the improved training techniques used in the v2 models, such as Label Smoothing and Mixup, which reduces the the maximal logits and then reduces the OOD detection performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the impact of data augmentation techniques on OOD detection, focusing primarily on Label Smoothing and Mixup. The authors find that while these methods improve in-distribution accuracy, they lead to a decline in OOD detection performance. The authors attribute this phenomenon to the fact that both Label Smoothing and Mixup decrease the maximal logits, with this reduction being more pronounced in ID data. To address this issue, the authors propose two methods to mitigate the performance degradation caused by Label Smoothing and Mixup."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The using Label Smoothing and Mixup reduces the maximal logit is obvious, I am more concerned with the authors' statement that “this reduction is more pronounced for in-distribution (ID) samples than for out-of-distribution (OOD) samples”. The authors try to prove this in Proposition 4.2. However, the authors make so many strong assumptions without stating why these assumptions hold, so that the logic of the proof is like \"assume that A is correct, therefore A is correct\" (lines 816 to 822). It would be beneficial if the authors could provide a clearer proof.\n\n* From Table1 I observe that compared to v1 (trained with vanilla cross-entropy loss), the proposed v1+mixup-AugRevise and v1+LS-AugRevise only improve by 0.72 and 0.17, respectively, which is not exciting considering the additional computational cost and hyperparameters\n\n* The proposed fixing method requires retraining, making the method less favorable. I think the contribution of this paper could be greatly enhanced if a post-hoc method could be used for fixing.\n\nTypo: \"Proposition 4.1\" in line 255-256 should be \"Proposition 4.2\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(Copied from Weaknesses) \n1. There are many studies showing that models trained with self-supervised learning are effective for OOD detection. Why must the classifier perform OOD detection simultaneously, rather than using experts for each task of classification and OOD detection?\n2. Can this method be applied to models trained with self-supervised learning?\n3. In the training recipe for achieving the best-performing classifier, are label smoothing or mixup essential components?\n4. Does this research have significance from a transfer learning perspective?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed method is lightweight and easy to implement.\n2. The proposed method outperforms baselines in the tested settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper examines why mixup and label smoothing can enhance the performance of image classifiers but, unlike RandAugment, Style Augment, and AugMix, simultaneously lead to lower out-of-distribution (OOD) detection performance. It also proposes AugDelete and AugRevise—methods that maintain the classification performance of classifiers trained with label smoothing or mixup while improving their OOD detection performance. AugDelete is lightweight, as it fine-tunes only the penultimate layer of pre-trained classifiers, whereas AugRevise achieves even better performance than AugDelete. The authors validate their claims using ResNets as well as the CIFAR and ImageNet datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There are many studies showing that models trained with self-supervised learning are effective for OOD detection [1,2,3,4]. Why must the classifier perform OOD detection simultaneously, rather than using experts for each task of classification and OOD detection?\n2. Can this method be applied to models trained with self-supervised learning?\n3. In the training recipe for achieving the best-performing classifier, are label smoothing or mixup essential components?\n4. Does this research have significance from a transfer learning perspective?\n\n[1] Tack et al., \"CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances\" NeurIPS 2020 \n[2] Ming et al., \"Delving into out-of-distribution detection with vision-language representations\" NeurIPS 2022 \n[3] Jiang et al., \"Negative Label Guided OOD Detection with Pretrained Vision-Language Models\" ICLR 2024 \n[4] Lee et al., \"Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection\" NeurIPS 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nxiong2024fixing,\ntitle={Fixing Data Augmentations for Out-of-distribution Detection},\nauthor={Haipeng Xiong and Kai Xu and Angela Yao},\nyear={2024},\nurl={https://openreview.net/forum?id=1ebgtm7P10}\n}"
},
"abstract": {
"value": "Out-of-distribution (OOD) detection methods, especially post-hoc methods, rely on off-the-shelf pre-trained models. Existing literature shows how OOD and ID performance are correlated, i.e. stronger models with better ID performance tend to perform better in OOD detection. However, significant performance discrepancies exist between model versions, sometimes exceeding the impact of the OOD detection methods themselves. In this study, we systematically investigated this issue and identified two main factors—label smoothing and mixup—that, while improving in-distribution accuracy, lead to a decline in OOD detection performance. We provide empirical and theoretical explanations for this phenomenon and propose a solution that enhances OOD Detection while maintaining strong in-distribution performance. Code will be released upon acceptance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Haipeng_Xiong1",
"~Kai_Xu7",
"~Angela_Yao1"
]
},
"authors": {
"value": [
"Haipeng Xiong",
"Kai Xu",
"Angela Yao"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"OOD Detection; Data Augmentation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "xiong|fixing_data_augmentations_for_outofdistribution_detection"
},
"pdf": {
"value": "/pdf/f0cdb36e453047e49d40ccb5a58606d49aac7173.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Fixing Data Augmentations for Out-of-distribution Detection"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
1epaSm9QRs | Complex Numerical Computation with Numerical Semantic Pre-training Framework | main | Active | Numerical Reasoning;Complex Query Answering;Knowledge Graph | learning on graphs and other geometries & topologies | 3;3;3;8 | 4;5;4;3 | 2;1;2;4 | 3;1;2;4 | 1;1;3;3 | 4.25 | 4 | 2.25 | 2.5 | 2 | -0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Context in Weaknesses Point 1: Can the framework solve absurd queries of type \"Rope $X$ is $1$mm, Y is $1.61$Meters how long does rope $Z$ has to be to be 10^3 time longer than squared average of $X$ and $Y$\"? \n2. Context in Weaknesses Point 2: Does the method suffer from calibration issues (ranges of probabilities are not homogenous, do not interact) within separate intermediate answers similar to CQD, meaning that each intermediate top-k answer can fall within varying ranges of probability and be omitted/filtered out during product (the t-norm chosen) aggregation?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The research includes many merits, from the novel approach to tackle numeric reasoning within the KGs that is able to achieve ~40% increase compared to prior benchmarks, to the introduction of more robust testing/evaluation query types (2b, 3b ... etc). The study is supported by straightforward benchmarks and comprehensive evaluations, showing that the method is particularly well suited for numeric reasoning and is computationally advantageous as it does not require explicit training on complex queries (trained only on atomic queries), yet generalises well to complex reasoning structures. The use of Multi-ComplEx is shown to be essential for embedding the numeric information within the KG, while the use of fuzzy sets allows robust reasoning when dealing with direct numeric operations, comparisons and assessments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce a novel method for numerical reasoning over numeric knowledge graphs containing both real-valued continuous attributes and entities and relations. This is important as it allows for more robust reasoning over and modelling of real-world natural and common queries within KGs. One novelty of the method lies in the capability to encode the numeric and entity attributes separately, allowing for the use of binary operators to obtain numeric values outside of the designated knowledge graph. A more comprehensive evaluation suite is proposed for this type of OOD (meaning numbers/ents not in the graph) setup. The method allows the joint encoding (separately within a joint training process) of entity-numeric relationships that capture the semantics and abstractions of the relation. This is achieved using Multi-ComplEx, an extension of the link prediction method ComplEx (a strong method) by combining a separate set of numeric-value and numeric-entity encodings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the method works well with a margin of error and allows the obtaining numeric values outside of the KG, it is still limited in terms of the numeric continuous values that it predicts and the precision of such numbers through the limited amount of used binary operators and initially present Numeric values. Can the framework solve absurd queries of type \"Rope $X$ is $1$mm, Y is $1.61$Meters how long does rope $Z$ has to be to be 10^3 time longer than squared average of $X$ and $Y$\"? \n\n2. As the framework shares many similarities with CQD, a natural question arises if the intermediate answers obtained during query answering are calibrated to interact with each other (intermediate probability ranges are similar), which was a problem in CQD outlined in CQD-A. This is particularly important as the fuzzy aggregation method (the T-norm) that was chosen is the product norm, which suffers from this discrepancy."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) Why LitCQD is mentioned but not compared? \n2) Why equation 9 is used, does it satisfy commutative, associative and distributive laws, and many others?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1) The first work considering complex queries involving binary numeric operations.\n2) Experimental improvements seem significant."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a CQA method on KGs with numeric values and binary operations. This approach can effectively handles more than 100 types of complex numerical reasoning queries. On three public datasets, the proposed method CNR-NST demonstrates SOTA performance in complex numerical queries, achieving an average improvement of over 40% compared to existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) A comparision with some naive baselines could significantly improve the perception of the experimental results. Especially for the query types with binary operations. The MRR numbers are very small and there is no baseline, and hence it is very hard to judge whether the results are good or not. For example, one could use some simple numeric rules mined from the training graph to derive answers. \n2) It seems that the techniqical contributions are two-fold: 1) Multi-ComplEx, which is a direct extension of ComplEx used in CQD to deal with numerical information; 2) The numerical computation framework. However, it is unclear whether the numerical computation is a reasonable or not. Does it satisfy some laws like commutative, associative and distributive Laws? I see no discussion about this but I think this is the key which influences the generalization capability of the reasoning. \n3) The test queries are generated as \"hard queries\" in the sense that as least one missing link is in the test graph. However, it is unclear for a multi-hop query, how much percent of the links are seen in the training graph. Note that this is important, as if most of the links in a multi-hop queries are seen. Then the problem can be reduced to a link prediction problem."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- How does the proposed approach compare to LitCQD?\n- Section 3.1: It seems the loss function in Equation (4) can be negative (the second part). How would one train with a negative loss? Are there any settings that prevent the loss from becoming negative?\n- Lines 464-465: \"Instead of ranking based on the exact match of numerical nodes, we compute the RANK using the probability ranking of numerical nodes whose relative error compared to the correct answer is below a specified threshold (typically set at 0.001)\"\nThis evaluation metric discards some numerical nodes, which might not reveal the actual performance of a model. As an example, assume that $n$ nodes were ranked higher than the target node in the original metric (MRR). Also assume that these $n$ nodes are now removed in your new metric because they do not fulfil the 0.001 criterion. Then, the actual target node will now be ranked 1st, which does not really reveal the performance of your model. Any ideas on how to improve this metric? Why don't you use Mean Absolute Error (MAE) or Mean Squared Error (MSE) as LitCQD does?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The paper tackles a highly relevant problem. Numeric data is wide-spread in real-world knowledge graphs.\n- The experimental results show that the new approach outperforms the NRN baseline in most cases."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an approach (CNR-NST) for complex query answering on incomplete knowledge graphs. In contrast to many previous works (e.g., CQD, GQE, ...), the paper supports knowledge graphs with numeric attributes. The proposed approach is compared to one of the existing works addressing the same problem (NRN)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Lack of novelty: The previous approach LitCQD is mostly ignored leading to wrong claims about the contribution (see details)\n- Insufficient experiments: LitCQD is missing as a baseline\n- Poor technical and presentation quality (see details below and questions)\n\n### Details \n\n- Figure 1: Q3: \"What is the total population of Schleswig-Holstein and Dakar?\" The paper claims that previous approaches could not answer this query (\"cannot compute or infer new numerical answers from multiple values (like Q3).\"). LitCQD supports to answer such queries (see Equation 13 in the LitCQD paper).\n\n- \"Numerical Binary Operation Operator.\" This operator can handle \"queries that involve numerical answers\" and this operator is presented as a novel contribution. However, the Section 4.2 \"Multihop Queries with Literals and Literal Answers\" in the LitCQD paper deals with exactly this issue.\n\n- Section 4.3 : \"For the first time, we extend numerical reasoning in knowledge graphs to the real number domain, whereas previous methods were confined to the discrete numerical domain within the KG.\" This sentence is wrong (c.f. LitCQD)\n\n- Section 4.4: \"Previous approaches used the same evaluation metrics for these queries as for entities, but this method has limitations.\" This sentence is wrong. LitCQD used Mean Absolute Error (MAE) and Mean Squared Error (MSE) instead of Mean Reciprocal Rank (MRR).\n\n- Abstract: A sentence seems to be repeated in the abstract: \"The proposed frame-work is the first to enable binary…\" and \"The CNR-NST framework can perform binary…\". Apart from the fact that this sentence is wrong (see LitCQD), the two sentences should be merged.\n\n- Preliminaries: After the period, there should always be a space. For example: \"relations R.Each triplet\" --> \"relations R. Each triplet\" (lines 150-151, to show only a few. 
The problem occurs more often throughout the paper.)\n\n- Preliminaries \"Knowledge Graph $G = (V, R, \\epsilon)$ contains the set of all entities $\\epsilon$ and the set of all relations $R$.\" What is $V$ in this definition? How is it different from $\\epsilon$? This definition seems wrong: a knowledge graph is not only defined by its set of entities and relations. Triples define a knowledge graph, too.\n\n- Preliminaries: Lines 173-174: \"In the above equation, the variable $E$ represents a subset of entity $\\epsilon$...\" How is a variable a subset? It might be better to talk about variable bindings.\n\n- Preliminaries: The functions $r_i$ and $a_j$ are mentioned in the paragraph on lines 173-178 but never defined\n\n- Section 3.1: Confusing notations are used, e.g., it is not clear whether $\\mathbb{R}$ is the usual real numbers, see Equation (4). Also see line 309 where $\\mathcal{R}$ (instead of $\\mathbb{R}$) is defined as the real number domain. Moreover, in line 152 $\\mathcal{R}$ is defined as set of relations.\n\n- Section 3.1: There seem to be many inconsistencies in the Methodology section. First it is written that $f(h,t,r) \\in \\mathbb{R}$ then $(h,t,r) \\in \\mathbb{R}\\cup \\mathbb{A} \\cup \\mathbb{F}$ (Equation 4). Both f(h, t, r) and (h, r, t) are in $\\mathbb{R}$ (with and without the $f$)? Moreover, the notation $(h,r,t)$ is not consistent across the paper. Sometimes a triple is denoted as $(h,t,r)$. \n\n- Section 3.2: Confusing terms are used on page 6: \"relation edge\" and \"entity node\". I believe one should either use \"relation\" or \"edge\" or \"entity\" or \"node\" but not two of the words.\n\n- Section 3.2: $V^*(X = x)$ is defined as a truth value (see line 273). The transposition operation is also applied to it afterwards, see Equation (9). This makes the equation look incorrect. 
Important details on how fuzzy numbers and fuzzy sets are used in the proposed approach seem to be missing.\n\n- Section 3.2: Lines 270-271: It is written that $\\mathcal{U}(Q)$ denotes a probability, but it is actually defined as a set in Equation (8)\n\n- Section 3.2: Lines 274-275: If $x$ is an entity or numerical value, what is $|x|$?\n\n- Equation 10: It should be $\\phi (V_{i1}, V_{i2}, \\ldots, V_{im})$, you used $V_{i1}$ twice in the enumeration."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. L30 in what sense is the new MRR metric validated by your results?\n2. L105 what are the inherent fuzzy relationships within numerical data?\n3. L214 why is this referred to pre-training? Is it not just \"training\"?\n4. L236 what does it mean that \"X represents various relationships\"?\n5. L247 what does \"original distribution\" refer to?\n6. L264 what is an \"anchored\" entity?\n7. L267 is N here referring to the set of all possible numbers in the system? This will then be a huge vector\n8. L308 what are these \"membership functions\"?\n9. L329 what do Avg_All and the other columns denote?\n10. L408 are these values only eliminated post-training?\n11. L465 if you’re using a continuous range, is this not the only way this can be done?\n12. L524 how does your work differ from that of LitCQD?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The improvements on the baseline are very significant, with strong results reported across 3 datasets and a large variety of query types.\n\n2. The ablation study demonstrates well how different parts of their proposed architecture contribute towards its success.\n\n3. The related work is thorough."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper aims to solve the task of multi-hop queries over knowledge graphs containing both entities and numerical values. They adapted ComplEx to devise a series of encoders which are trained and then brought together to form a system that, using fuzzy logic, can answer queries on such KGs (including queries that have answers from the real numbers). They evaluate the model across 3 benchmarks datasets against an existing method and show a significant improvement on previous results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major:\n1. The soundness of the mathematical presentation is very poor: this is my most significant issue with the paper. Many terms and symbols are used without being defined, notation switches arbitrarily, some sections do not make logical sense, and symbols do not match up with standard mathematical practice. As such, it was impossible for me to ascertain precisely how the proposed model operates. More specific details on this can be found below.\n\n2. The paper claims to be evaluating against 3 different numerical reasoning models, but they all come from the same paper and are variants of one another. As a result, I find the evaluation to be lacking, and suggest that the authors evaluate against some other baselines, such as the ones mentioned in L508 - L515, and L524 - L527. Furthermore, the comparison on L494 between training and testing times is not relevant, since evaluation speed is most pertinent when a model is being applied.\n\n3. Sections of the paper are full of clear typos, making it difficult to read. Some of the formatting and placement of various bits of information could also use some restructuring. More details in the minor comments.\n\nSpecific concerns with the mathematics:\n1. L150 If epsilon is entities, then what is V? And if R is all relations, where are the facts specified in the KG?\n2. L154 notation for N mismatches with the one used earlier, has not been properly defined, and usually picks out the natural numbers\n3. L172 - L178 \"bf\" is bad notation, since two variables are being used for one concept.\n4. L177 the definition of bf does not parse. Furthermore, it was already defined on L172, so has been defined in two different ways\n5. L172 N is not defined\n6. L220 - L221 what do all of these arguments refer to?\n7. L226 beta is not defind\n8. L232 R', A', F' are not defined\n9. L238 which normalisation function?\n10. L251 (())\n11. L249 - L255 I don't see how this defines the above matrices\n12. L259 which matrix M?\n13. 
L272 - L287 this section is very confusing, and needs more clarity as to what it is actually describing\n14. L295 what is |x| here?\n15. L302 what are Vim, u_m and n_m?\n16. L309 - L310 wrong notation for cross product and real numbers. And what is F?\n\nAlso, what is the signature of the KG?\n\nOther minor concerns:\n1. L38 could use a citation\n2. L41 \"query\"\n3. L58 \"are\"\n4. L61 \"attribute\"\n5. L108 would be nice to try cut some content to bring this line onto the previous page\n6. L140 - L145 is not a contribution, bur rather a result, and does not belong in this section\n7. L150, L151, L157, L159, L166, L238, L259, L278 - no space after full stop or comma\n8. L153 extract bracket\n9. L156 - L160 list not formatted properly, and can be defined more concisely\n10. L186 \"scored\"\n11. L188 \"t- connorm\"\n12. L173 \"entity epsilon\"\n13. L239 - 241 this paragraph does not contribute to this section\n14. L256 errant full stop\n15. L326 \"method\" instead of AVG_T would convey this better\n16. L404 Hits@K is never used in the paper\n17. L420 reference the ablation study here and show it it supports your argument\n18. L450 an example of one of the queries in the main text would be nice\n19. L535 just one metric is defined, not multiple"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a Complex Numerical Reasoning with Numerical Semantic Pre-Training Framework, which can perform binary operations on numerical attributes within numerical knowledge graphs and supports complex numerical reasoning tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024complex,\ntitle={Complex Numerical Computation with Numerical Semantic Pre-training Framework},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1epaSm9QRs},\nnote={under review}\n}"
},
"abstract": {
"value": "Multi-hop complex reasoning over incomplete knowledge graphs has been extensively studied, but research on numerical knowledge graphs remains relatively limited. Recent approaches focus on separately encoding entities and numerical values, using neural networks to process query encodings for reasoning. However, in complex multi-hop reasoning tasks, numerical values are not merely symbols; they carry specific semantics and logical relationships that must be accurately represented. Directly encoding numerical values often leads to the loss of such semantic information. In this work, we propose a Complex Numerical Reasoning with Numerical Semantic Pre-Training Framework (CNR-NST). We designed a joint link predictor that incorporates the relationships between numerical values and entities into the learning process of numerical semantics. The proposed framework is the first to enable binary operations on numerical attributes in numerical knowledge graphs, allowing new numerical attributes to be inferred from existing knowledge. The CNR-NST framework can perform binary operations on numerical attributes in numerical knowledge graphs, enabling it to infer new numerical attributes from existing knowledge. Our approach effectively handles up to 102 types of complex numerical reasoning queries. On three public datasets, CNR-NST demonstrates state-of-the-art performance in complex numerical queries, achieving an average improvement of over 40% compared to existing methods. Notably, this work expands the range of query types for complex multi-hop numerical reasoning and introduces a new evaluation metric for numerical answers, which has been validated through comprehensive experiments."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Numerical Reasoning",
"Complex Query Answering",
"Knowledge Graph"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bdafdaf627703f3572fd4e686641c2715c0b07b3.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/6f562a3227dc9ec2774d579b998e954af480f6ea.zip"
},
"title": {
"value": "Complex Numerical Computation with Numerical Semantic Pre-training Framework"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1fC4ytCAgb | Self-Conditioned Diffusion Model for Consistent Human Image and Video Synthesis | main | Active | Diffusion model;human image generation | applications to computer vision, audio, language, and other modalities | 3;3;5;5;5 | 4;4;5;4;3 | 2;2;3;2;3 | 2;2;2;2;2 | 3;3;3;2;3 | 4.2 | 4 | 2.4 | 2 | 2.8 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper leverages the outpainting ability of the foundation model to complete the generation under the spatial condition of referencing human images through the inpainting, with a novel perspective.\n2. The spatial conditions are applied in an inpainting manner, which makes sense."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a self-conditioned diffusion (SCD) model for consistent human-centric image and video synthesis, focusing on maintaining consistency with the reference subject while generating new contents like poses and garments. SCD frames the task as a spatially conditioned inpainting problem, where the reference image as a spatial condition guiding the generation. Besides, the authors introduce a causal feature interaction mechanism to enhances the flexibility and effectiveness. Experimentally, SCD outperforms existing methods in both image and video quality metrics on 10 TikTok-style videos."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the insight of inpainting manner is reasonable, it heavily relies on the capabilities of the foundation model. Since this work takes SD1.5 as the base, which isn’t fully perfect for generating humans, there doesn’t appear to be a mechanism to address the situation when the base model lacks such an ability. This raises reasonable doubt that the effectiveness of the results is largely due to fine-tuning the base model with the dataset rather than overcoming inherent issues. Additionally, base models indeed perform outpainting, but this does not mean their results are consist, there is still a gap.\n2. The method primarily focuses on the spatial aspect, with no special treatment on the temporal dimension for video generation—just following the AnimateDiff. So, how to improve the consistency in temporal?\n3. The observed phenomenon (line247-267) , whether it is too model-specific (SD) or architecture-specific (UNet-base), this phenomenon may not be universally present. If so, please provide more observation results of models and architectures (DiT), like SD3.5.\n4. Dedicating too much of the introduction to detailing previous methods, making it difficult to quickly grasp the main contributions of this paper. It is recommended that the authors reorganize this section, using concise language to summarize the primary limitations of prior work and clearly present contributions of this work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have some questions that need further clarification:\n\n1. I noticed that the reported scores in Table 2, such as PSNR, are not consistent with those reported in other works like Champ and MagicAnyone. Could you please clarify this?\n\n2. I am also interested in the experimental settings for using SMPL information as controllable signals within the proposed framework."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "$\\textbf{Originality, Significance}$\n\n1. It is commendable to study controllable human animation generation using a single, unified SD network. This approach may streamline the training process in several areas, such as optimizing GPU resource usage and tuning hyperparameters.\n\n2. The causal feature interaction strategy represents a novel contribution to the single-network paradigm for this task.\n\n3. The method achieves higher scores on the TikTok and UBCFashion datasets compared to previous works.\n\n$\\textbf{Clarity}$\n\n1. The paper is easy to follow, and the ideas are well presented.\n\n2. The spatial conditioning and causal feature interaction strategies are validated and discussed in the ablation study section, which is commendable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores controllable human animation generation. Unlike common frameworks that use separate networks for extracting reference appearance features and generating target visuals, this study approaches the task as an inpainting problem. In this framework, the target human image is inpainted based on the spatially conditioned reference image, allowing for the use of a single, unified SD network. Additionally, the paper introduces a causal feature interaction strategy, wherein reference features can only query information from themselves, while target features can access appearance data from both the reference and target images."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern is that the technical contributions of this paper appear to be incremental.\n\nIn the context of controllable human animation generation, I am only knowledgeable about several widely studied works, such as Animate Anyone and Champ. To me, the approach of directly concatenating reference latents and target noise latents spatially for a unified SD diffusion process is new. However, the \"inpainting motivation\" and spatial conditioning strategy are common in image-to-video generation [1] and multi-view 3D generation tasks [2, 3, 4]. Given that the quantitative improvements are minimal and there are insufficient qualitative comparisons—since the supplemental videos are solely produced by this paper—it is challenging to draw definitive conclusions about the effectiveness of the proposed method.\n\nRegarding the causal feature interaction strategy, it provides only slight improvements, as shown in Table 2 (PSNR: 18.64 vs. 18.59). Based on Figure 6, it seems that the causal feature interaction strategy may not be effective. In fact, it appears that the full model introduces artifacts in the connection region of the shoulder and neck compared to the model that does not utilize the causal feature interaction strategy.\n\n\n[1] CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer.\n[3] One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion. CVPR'24\n[4] InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models\n[5] CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model. ECCV'24"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Explain more about the intuition from inpainting work.\n2. For the performance side, show more results to demonstrate that causal feature interaction does help."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Using the same denoising network for both reference feature extraction and target image generation reduces the training burden and ensures that the target and reference images reside in a consistent feature space.\n2. The quantitative results for video synthesis appear promising, demonstrating the SCD-V's effectiveness in maintaining appearance consistency across poses. More video results are preferred if possible."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a human image and video synthesis approach that frames the task as a spatially-conditioned inpainting problem, allowing reference appearance features to guide pose-compliant target generation within a unified denoising network. By using a shared single network with a causal feature interaction framework, the method effectively mitigates domain gaps, enhancing appearance consistency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The logic behind why inpainting is advantageous (Ln221-223) is unclear and requires further clarification. Simply framing the task as inpainting does not inherently address how it enhances appearance consistency.\n2. The proposed \"causal feature interaction\" lacks novelty. It is intuitive that target features should query information from the reference, while reference features should query only from themselves; this approach feels too trivial to be considered a novel contribution.\n3. The description of the method in Ln238-287 is overly redundant, especially regarding the use of self-attention in diffusion to achieve content consistency. This observation has already been well-documented in previous video generation research.\n4. There are performance concerns. In Table 1, the FID score is significantly higher than other methods, suggesting suboptimal quality. Furthermore, in Table 2, a straightforward spatial conditioning approach without causal feature interaction achieves a lower FID and FID-VID, along with a higher SSIM, which suggests that the main claimed contribution—\"causal feature interaction\"—does not improve results. In fact, pure spatial conditioning seems sufficient for content consistency. Additionally, Figure 6 shows that results \"without causal interaction\" are visually closer to the ground truth. Could the authors provide more video-format visual results to clarify?\n5. The paper has instances of careless writing (e.g., Ln261) and inconsistencies between titles and tables (e.g., Table 2), which detract from readability and clarity."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and easy to understand. Figure 3 demonstrates the motivation, and Figure 4 clearly explains the details of the proposed causal spatial condition. The proposed pipeline for human motion transfer can be easily extended to virtual try on human image editing, which further shows the effectiveness of the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a spatial conditioning strategy for human video animation and motion retargeting, building upon the self-attention mechanism proposed in the series of works with Reference-Net. The proposed strategy is efficient and lightweight compared to Reference-Net, and the causal feature interaction mechanism enhances the identity-preserving ability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. As far as I understand, the proposed method should be quite efficient compared to previous works since there isn't any copied UNet structure. Is there any discussion or comparison of the efficiency, e.g., trainable parameters and inference time for a single batch?\n\n2. What's the difference between the proposed strategy and a \"trainable version\" of Reference-Only ControlNet, from [here](https://github.com/Mikubill/sd-webui-controlnet/discussions/1236)? I believe Reference-Only ControlNet also proposed a similar share-weight structure for appearance control. Any detailed discussion on the architecture design?\n\n3. Metric for video generation evaluation. I understand the authors follow previous works and adopt FVD as the video evaluation metric. However, this metric has recently been widely criticized by the community because of its inaccuracy in reflecting the overall quality. I wonder what the performance comparison would be if debiased FVD is used for evaluation. From [here](https://content-debiased-fvd.github.io/)\n\n4. Are there any side-by-side video visualization comparisons between this work and recent baselines? E.g. MagicPose, Champ? It would be better to judge the temporal consistency of the video quality.\n\n5. How does the model generalize to out-of-domain real human identities? E.g. Old people?\n\n6. The denoising network has been fine-tuned on real human datasets and 3500 self-collected dance videos only, but the identity preservation for cartoon-style images in Figure 11 and the supplementary video is quite good. Is there any explanation for this? Do the self-collected videos contain any cartoon characters?\n\nI'm more than **willing** to **raise** my score if my concerns are addressed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The paper's qualitative results are inadequate and the image quality is poor. visual examples are insufficient to properly demonstrate the method's effectiveness. I believe there are some issues with the \"pose injection\" method. The authors should provide more experimental details and show additional generated results."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The techniques sound reasonable and the proposed method can enhance content consistency between the generated and reference images.\n- The graph is clear and the writing is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a Self-Conditioned Diffusion (SCD) model designed to enhance consistency in human-centric image and video synthesis. By formulating the task as a spatially conditioned inpainting problem, the model employs a unified denoising network that minimizes domain gaps between reference and target images. The key innovations lie in two aspects: a causal feature interaction mechanism that maintains appearance consistency, and a two-stage generation process that separates reference appearance extraction from conditioned target generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **No technical contribution.** The technical novelty is limited. The significance of this paper is not expounded sufficiently. The author needs to highlight this paper’s innovative contributions to prior-guided I2I/ I2V generation.\n2. **Overclaim and SOTA.** The experimental comparisons are outdated, comparing against older methods while claiming \"state-of-the-art\" status. The work overlooks recent 2024 publications and lacks quantitative evaluations against recent work. Authors should add necessary discussions and comparisons about some of the following: \"Controlnext\", \"MimicMotion\", \"Cinemo\", \"PoseCrafter(ECCV'24)\", \"Mimo\", \"X-portrait(SIGGRAPH'24)\", \"PoseAnimate(IJCAI'24)\", \"DynamiCrafter(ECCV'24)\", \"SparseCtrl(ECCV'24)\", \"LATENTMAN(CVPR'24)\" and \"Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis (CVPR'24)\"......"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024selfconditioned,\ntitle={Self-Conditioned Diffusion Model for Consistent Human Image and Video Synthesis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1fC4ytCAgb},\nnote={under review}\n}"
},
"abstract": {
"value": "Consistent human-centric image and video synthesis aims to generate images or videos with new poses while preserving appearance consistency with a given reference image, which is crucial for low-cost visual content creation. Recent advancements based on diffusion models typically rely on separate networks for reference appearance feature extraction and target visual generation, leading to inconsistent domain gaps between references and targets. In this paper, we frame the task as a spatially-conditioned inpainting problem, where the target image is inpainted to maintain appearance consistency with the reference. This approach enables the reference features to guide the generation of pose-compliant targets within a unified denoising network, thereby mitigating domain gaps. Additionally, to better maintain the reference appearance information, we impose a causal feature interaction framework, in which reference features can only query from themselves, while target features can query appearance information from both the reference and the target.\nTo further enhance computational efficiency and flexibility, in practical implementation, we decompose the spatially-conditioned generation process into two stages: reference appearance extraction and conditioned target generation. Both stages share a single denoising network, with interactions restricted to self-attention layers. This proposed method ensures flexible control over the appearance of generated human images and videos. By fine-tuning existing base diffusion models on human video data, our method demonstrates strong generalization to unseen human identities and poses without requiring additional per-instance fine-tuning. Experimental results validate the effectiveness of our approach, showing competitive performance compared to existing methods for consistent human image and video synthesis."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Diffusion model",
"human image generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/248c4810ff2309ba9d7bd6f5d11065c68f90a899.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/0317bfa9d4e89dcb76ebf57ae1bca3b47fa00d87.zip"
},
"title": {
"value": "Self-Conditioned Diffusion Model for Consistent Human Image and Video Synthesis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1fwZJzGdKj | Multi-Agent Collaborative Data Selection for Efficient Language Model Pretraining | main | Active | Language Model Pretraining; Data-efficient Training; Data Selection | foundation or frontier models, including LLMs | 3;5;6;8 | 4;3;3;4 | 2;2;3;3 | 2;2;3;4 | 3;2;3;4 | 5.5 | 3.5 | 2.5 | 2.75 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Is it possible to not employ the agent or multi-agent metaphor to formulate the proposal? What is the truth power of the proposal? Is it a compossible and multi-step paradigm of stochastic optimization based on 3 hand-crafted dimensions? \n2. Ine line 161, what is the loss function l? Also please explain more regarding the definition of the reward function. How is it different o related to the loss function of LLM auto-regression loss or other kinds of losses (if any other).\n3. In Algorithm 1, please clarify whether the sampling distribution of the data are different from iteration to iteration (line 3). \n4. In line 275, 373M model is used in ablation study. It is pretty small a model. The conclusions drawn from it might not be transferrable to LLMs of billions of parameters. Please justify the study."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "• A good analysis of how different aspects influence the performance of LLM training.\n• The experiment seems to demonstrate this kind of scoring of data points can help to improve the data deficiency and performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to a mechanism to select data into the training process based on 3 main measure (quality, domain and topic) of the data. The 3 measures are adjusted dynamically and aggregating together to determine whether data point can be selected during the training process. A RL paradigm is employed to realize the proposal. Good performance is demonstrated in the provided experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "• The connection to agent or multi-agent paradigm seems weak to me. It might not be necessary to formulate the problem and the solution via \"agent\" concept. A direct stochastic optimization formulation might provide more direct description and help audience better mastering what is the proposal."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. How does the proposed framework perform when scaling up to larger models (e.g., 10B+ parameters) and datasets (e.g., trillions of tokens)?\n2. Can you provide a more detailed analysis of the computational overhead introduced by the multi-agent system compared to baseline methods? \n3. What guidelines can be provided for selecting or designing agents for other data selection criteria? Is the framework flexible enough to incorporate new agents easily? How sensitive is the method to the choice of number and types of agents?\n4. How sensitive is the performance to the choice of reference tasks used for calculating rewards in the influence functions?\n5. Can you elaborate on how the dynamic adjustment of agent weights impacts the learning process over time? Are there scenarios where this adjustment could lead to suboptimal data selection?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Introducing a multi-agent framework to collaboratively select pretraining data is a novel idea that addresses inherent conflicts among existing methods.\n2. The empirical evaluation is extensive, comparing the proposed method against a wide range of baselines and demonstrating significant improvements.\n3. The paper clearly articulates the motivation, methodology, and findings, making it accessible to readers.\n4. Improving data efficiency in LLM pretraining is a critical challenge, and the proposed method offers a practical solution with demonstrable benefits."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel multi-agent collaborative data selection mechanism aimed at enhancing data efficiency during the pretraining of large language models (LLMs). Recognizing that existing data selection methods often operate independently and may conflict with one another, the authors propose a framework where each data selection method functions as an independent agent. An agent console dynamically integrates the information from all agents throughout the training process. The agents adjust their weights based on reward signals derived from the model's performance on reference tasks. The framework is designed to flexibly and robustly combine various data selection strategies, such as data quality scoring, topic diversity, and domain information. Extensive experiments demonstrate that this multi-agent approach significantly accelerates convergence in LLM training and achieves an average performance gain of up to 10.5% across multiple benchmarks compared to state-of-the-art methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the experiments show promising results on models up to 1.3 billion parameters, it is unclear how the approach scales to larger models commonly used in practice.\n2. The choice of agents (quality, domain, topic) seems somewhat ad-hoc. A discussion on how to generalize the selection of agents or include other data selection criteria would strengthen the paper.\n3. While ablation studies are included, more detailed analysis on how each agent contributes to different types of tasks could provide deeper insights."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "-Figure 2 could be more clear to show that each agent has its own memory. Consider focusing only on the Domain Agent’s flow of work to make it easier to follow.\n\n-In Table 1, it would be best to bold the top results so it’s easier to see that your approach is indeed among the best.\n\n-Should this really be considered a multi-agent system or is this really a multi-step process? I don’t see any use of reasoning or decision making here. It seems at each step, each agent is systemically called/updated and each data point is labeled with a combination of scores from the “agents”. What is “agentic” about this? The approach is still valuable, just questioning whether it falls under “agents”.\n\n-Does your approach add significant additional latency to the pretraining stage? \n\n-Is your approach particularly valuable for small models?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-Mixing data quality and data selection techniques is a challenging problem: they show in their case study that typical data curation techniques can conflict and naively combining them is not sufficient.\n\n-Unlike many off-the-shelf multi-agent systems, they are proposing optimization of each agent’s weights (stored in memory) based on reward signals from the model undergoing pretraining.\n\n-Results show that their multi-agent data selection produces the best performance. Ablations show the three agents in collaboration outperform all other permutations of the agents (with and without collaboration) and strongly outperform the setting with no agents at all. \n\nOriginality:\nI am not aware of another multi-agent approach for data selection in pretraining - so this appears to be a novel application of multi-agent systems.\n\nQuality:\nAll key components covered - clear literature review, motivation, experiment design, results. It would have been better if they made their contribution differentiations clear in the literature review - for ex, confirming if they are indeed the first multi-agent approach for data selection. And if not, how are they different.\n\nClarity: \nThe paper was generally well written and easy to read.\n\nSignificance:\nPretraining is the most critical and expensive operation for LLMs. To make the best use of your pretraining, optimizing the data is key."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a multi-agent approach to strategically select data for pretraining LLMs. The paper motivates the need for a multi-agent design by sharing a case study on the SlimPajama pretraining dataset. They illustrate that the most common data set considerations (and their corresponding metrics), including data quality, topic diversity, data impact, and data domain are not straightforward to jointly optimize. Therefore, they propose a multi-agent system, where each data selection method/metric is represented as an agent. Through this multi-agent approach, these methods can be mixed via multi-agent collaboration, forming a highly adaptive approach in data selection. They show that their multi-agent approach is effective: the data curated by their method leads to faster convergence in training and improved benchmark performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-Figures can be improved, see my comments below.\n\n-Experiments only on 1B model. It would be interesting to see the impacts across more model sizes (smaller and larger) and model architectures to see which range of models really benefit from this. \n\n-This shows a lot of potential already, but the point would be very strongly proven if they could show comparison to other 1B model performances (DeepSeek, TinyLlama, etc.), showing that their approach yields superior models in general."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* How does the definition of the agent used in this paper relates to the term agent used in other fields of AI? Such as multi-agent RL, or the type of work usually published in AAMAS? \n* The term \"console\" is usually applied to a component that is used by a human operator. Is the term agent console in the paper related to it?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* A case study validates the initial claim of the paper, that the quality, diversity and influence scores of data are not strongly correlated with each other.\n* Relatively extensive experiments were conducted with the training of a 1.3B LLAMA LLM."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors argue that training data selection is an important component in LLM training. The various techniques that had been proposed might be conflicting in their recommendations. The authors are proposing a technique in which the different data selection algorithms are considered independent agents, with an \"agent console\" integrating the recommendations. The approach enable the dynamic adjustment of contributions of the agents during the training process of the LLM. The SlimPajama dataset is used as the working example."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The fact that the diversity of topics, quality of material, and influence on the trained LLMs are different metrics is not a surprising observation - these are obviously very different things. A strong correlation between them would be the surprise. \n* The term \"agent\" has a relatively clear definition in the AI literature, as an autonomous entity, that takes actions in an environment, in the pursuit of a goal. The fact that the authors have to introduce a definition for the term \"agent\" and \"agent console\" and define them in terms of \"data selection method\" makes the paper difficult to follow. It doesn't seem that these \"agents\" are taking any actions, or have any autonomy. \n* In practice, the proposed approach appears to be a way to make a decision about what training data to include based on weighting three pre-existing metrics about the data sources. This decision process could have been easily written without introducing agent language. In fact, practitioners very likely are already making such judgements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024multiagent,\ntitle={Multi-Agent Collaborative Data Selection for Efficient Language Model Pretraining},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1fwZJzGdKj},\nnote={under review}\n}"
},
"abstract": {
"value": "Efficient data selection is crucial to accelerate the pretraining of large language models (LLMs). While various methods have been proposed to enhance data efficiency, limited research has addressed the inherent conflicts between these approaches to achieve optimal data selection for LLM pretraining. To tackle this problem, we propose a novel multi-agent collaborative data selection mechanism. In this framework, each data selection method serves as an independent agent, and an agent console is designed to dynamically integrate the information from all agents throughout the LLM training process. We conduct extensive empirical studies to evaluate our multi-agent framework. The experimental results demonstrate that our approach significantly improves data efficiency, accelerates convergence in LLM training, and achieves an average performance gain of 10.5% across multiple language model benchmarks compared to the state-of-the-art methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Language Model Pretraining; Data-efficient Training; Data Selection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0d707d791ba04c09351304d540e4ca38c482f4e1.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/4980270b11c6591913f450dff29957221288f46a.zip"
},
"title": {
"value": "Multi-Agent Collaborative Data Selection for Efficient Language Model Pretraining"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1g4s7ME93g | Super Robot View Transformer | main | Active | robotic manipulation;multi-task learning;robot view transformer | applications to robotics, autonomy, planning | 3;3;5;6;6 | 5;4;4;3;4 | 2;4;3;3;3 | 2;1;2;3;3 | 1;3;3;2;2 | 4.6 | 4 | 3 | 2.2 | 2.2 | -0.699379 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. **Down View Clarification**: Could the authors clarify the colorless nature of the down view and its virtual camera position? Including a figure illustrating the virtual camera position could make this aspect clearer. Also can you explain why the down view looks colorless compare with top view?\n2. **Effect of S-PR on Generation**: The design of S-PR is unclear. Could the authors include a figure comparing the rendered images before and after applying S-PR? This would help illustrate S-PR’s impact on generation.\n3. **Focused Experimentation in Table 2**: The improvements mainly impact tasks needing precise top-down alignment (e.g., Insert Peg, Sort Shape). Given that most differences in Table 2 are within 1%, an experiment disabling both S-PR and down view could clarify other designs’ contributions. Additionally, statistical tests would help determine if small differences are meaningful. It will also be great if you can discuss the speed-accuracy trade-off applying these designs.\n4. **Baseline and Failure Analysis**: Real-world experiments would benefit from a baseline comparison. A typical baseline would be an RVT/RVT2 without designs introduced in the paper. It would also help if the authors could conduct a failure analysis to highlight potential improvement areas. If challenges arose in implementing real-world baselines, an explanation would be valuable.\n5. **Left View Performance Drop**: The results in Table 2 indicate a counterintuitive performance drop when incorporating a left view. A straightforward experiment replacing the right view with the left view (without adding extra views) could help isolate the cause. 
Based on this experiment, the authors can explore whether this drop is due to:\n - the presence of both left and right views introducing redundancy or conflicting information,\n - the left view alone, as opposed to the right view, negatively impacting performance,\n - or simply having an excess of views, which may complicate heatmap prediction?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. **Performance Improvement**: The proposed changes significantly enhance performance by mitigating occlusions from specific viewpoints, improving view flexibility.\n2. **Robust Experimentation**: The paper includes multiple experiments to verify the efficacy of each introduced method, which supports the validity of the approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an improvement over the RVT/RVT2 method by introducing enhancements such as S-PR, S-MVT, and HSP. The updates effectively address the problem of view occlusion in RVT from certain angles, particularly the top view. Extensive experiments demonstrate impressive performance gains with these adjustments, validating their effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Lack of Visual Illustration**: The concept of the down view is not entirely clear, especially concerning why it lacks color. A visual illustration showing the virtual camera position could enhance understanding.\n2. **S-PR Explanation**: It’s challenging to grasp exactly how S-PR contributes to generation. A comparative figure demonstrating results before and after applying S-PR would clarify this aspect.\n3. **Lack Real World Baseline**: The baseline experiment in real world is missing.\n4. **Unclear Contribution of Different Designs**: Most of the performance in Table 2 (S-RVT2) are within 1 point, which makes it unclear whether many of them are still useful."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- It would be nice if the authors could have some discussions on the data efficiency of the proposed framework. As discussed in the Related Work section, both transformer deployment and imitation learning with high precision require substantial training data. In this paper, the sim experiments use 100 demonstrations per task, and the real experiments use 15-20 demonstrations per task. How does it compare to other methods?\n- Another very relevant question: For the real experiments, the paper states that \"the number of demonstrations for each task is determined by its complexity and the degree of variability in task configurations.\" How much will this affect the performances? Specifically, for the two tasks \"stack blocks\" and \"plug charger\", how would the model perform if there are only 15 demonstrations, as in the other two tasks?\n- I am curious, in Table 1 task Sweep to Dustpan, why is S-RVT success rate lower than RVT? Are there any specific features of this task that make it different from the others, or is it just a normal fluctuation in the measurement? (This is really just my curiosity. The overall experimental results look good to me, and a 10% success rate drop out of 25 tests here is acceptable.)"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The authors have conducted abundant experiments and ablation studies in both sim and real.\n- The experimental results look good. The proposed framework brings consistent improvements to RVT and RVT2 across different scenarios.\n- The paper is well-written. The concepts and intuitions are explained together with concrete examples, making it very easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the limitations in previous virtual view-based methods, focusing on the occlusion problems and the resolution constraints. To resolve these problems, it proposes the Super Robot View Transformer (S-RVT) comprising of three modules: the Super Point Renderer (S-PR) that enhances the rendering process to mitigate occlusion artifacts, the Super-resolution Multi-View Transformer (S-MVT) that integrates superresolution\nto the output heatmaps, and the Hierarchical Sampling Policy (HSP) that samples multi-view heatmaps in 3D space to obtain accurate 3D poses. Experiments show that the proposed framework improves the performances of RVT and RVT2 in various setups."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- To my understanding, the paper is mainly addressing the uncertainty problems (in RVT or RVT-like robot learning frameworks): the aleatoric uncertainty is addressed by the virtual-view pointcloud rendering, and the epistemic uncertainty is addressed by the feature map superresolution. On one side, I like the intuitions discussed in the paper, on the other side, simply looking at the framework, the ways of resolving these problems look very straightforward, with a sequence of concrete engineering efforts. It would be good to have more concrete discussions, based on the method, on how RVT didn't address these uncertainties well and how the framework resolves these issues -- this can show better linkage between the high-level intuitions of the paper and the concrete steps in the method.\n- A concrete question following the previous question is about the pointcloud rendering: It is simply done by a 2D projection, but what is the quality of the projected virtual views? Does it have any requirements on the placement of the (real) camera? Specifically, in the ablation study, it shows that going from 4 virtual views to 5 views decreases performance. I think this implies that the virtual views are not all of good quality which helps the algorithm to figure out better policies."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How is S-PR implemented in real-world scenarios?\n- What will happen if the resolution of input images is raised?\n- What's the performance of MVT in the real-world experiments?\n\nI will consider raising scores if my concerns are addressed in the rebuttal period."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- This paper is well-written and easy to follow.\n- The proposed modifications on RVT are intuitive, and the RLbench experiments verify the effectiveness of S-MVT in simulator settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents Super Robot View Transformer (S-RVT) -- a series of techniques to improve Robot View Transformer. It consists of 3 modules: the Super Point Renderer that mitigates occlusion artifacts, the Super-resolution Multi-View Transformer that performs superresolution to the output heatmaps, and the Hierarchical Sampling Policy that efficiently samples multi-view heatmaps in 3D space. The experiments suggest S-RVT obtains a consistent performance boost against RVT on the RLBench benchmark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I doubt the fundamental rationality of the Super Point Renderer (S-PR) and Super-resolution Multi-View Transformer (S-MVT) modules.\n - According to my understanding, S-PR renders the objects occluded by the robot. It would definitely be effective in the simulator, but how would that be made possible in the real world?\n - Meanwhile, S-MVT aims to perform super-resolution to the multi-view images. However, why not just enhance the resolution of RGB-D images in the beginning? D515 could capture depth photos in a resolution of up to 1024x768, but the RGB-D images used in the paper only have a resolution of 128x128.\n- While S-MVT is compared with MVT in the simulator setting, it is not compared with MVT in the real-world setting. I am particularly confused about how S-PR is implemented in real-world scenarios."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. In the paper: \"The 3D points are projected onto the 2D image plane by converting them into image coordinates using GPU accelerated matrix operations.\" Could the author give more details on the implementation and visualization of this?\n\n1. Why the MVT module can address the epistemic uncertainty? Please provide detailed information.\n\n2. In the HSP section, can the author provide some details about how HSP can solve the GPU memory overflow problem?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper conducts sufficient experiments to evaluate the proposed framework on various robotic manipulation tasks with different baseline methods.\n2. The pictures in this paper are presented clearly.\n3. Related works are comprehensively reviewed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a framework for multi-task imitation learning with key-state-based methods for robotic manipulation learning, especially for high-precision tasks. The author claims their model addresses the epistemic uncertainty of the proposed framework. The framework, SRVT, consists of three modules: the Super Point Renderer (S-PR), the Super-resolution Multi-View Transformer (S-MVT), and the Hierarchical Sampling Policy (HSP). This paper shows both simulation and real-world experiment results to evaluate the framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The overall writing and flow of the paper need considerable improvement. The abstract, introduction, and related works sections are repetitive and convey similar concepts. Moreover, a paper should streamline these sections to provide a progressive understanding.\n\n2. This paper's primary claim that the proposed MVT module can advance epistemic uncertainty is not validated. To address such uncertainty, the paper must provide theoretical proofs, uncertainty analysis, and ablation studies. Some common methods, like conditional variable at risk, can be used for this."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* In line 257, the statement *\"S-MVT generates heatmaps with $sr$ times higher resolution\"* is unclear, as $sr$ is not introduced in the preceding paragraphs.\n* In line 260, should this result in $16^2$ patches instead of $16$ patches?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The approach of addressing limitations in prior work, such as handling occlusion issues in the rendering process and overcoming resolution constraints in pose prediction, is valuable.\n* The authors perform extensive experiments, including comparisons with baseline models and ablation studies, to demonstrate the effectiveness of the proposed components.\n* They also conduct several real-world experiments and provide the video evidence."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Building on prior work, RVT and RVT-2, this study introduces S-RVT to address limitations like occlusion issues and resolution constraints. Specifically, it presents S-PR (Super Point Render) to enhance rendering and reduce occlusion artifacts, S-MVT (Super-resolution Multi-View Transformer) to integrate super-resolution to output heatmaps, and HSP (Hierarchical Sampling Policy) for accurate 3D pose estimation through a coarse-to-fine sampling approach. Experimental results show that S-RVT outperforms previous methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* It would be beneficial to clearly specify the differences between the proposed method and prior work, indicating which contributions are adopted from previous studies and which are newly introduced in this paper. For example, in Section 3.2, RVT-2 appears to have already implemented z-ordering and screen-space splatting techniques, yet this is not clarified here. Additionally, in Section 3.3, lines 255 to 263 (nearly half the paragraph) contain content similar to RVT and RVT-2, it would be helpful to focus more on the novel methods introduced in this work. Clearly distinguishing between techniques inherited from previous work and unique innovations would improve understanding and highlight the contributions of this study.\n* In Section 3.3, additional details on the upsampling process would be helpful for clarity. Could the authors expand on how the upsampling is implemented?\n* The authors are encouraged to provide further analysis of the experimental results:\n - In Table 1, S-RVT performs worse than RVT in tasks such as 'put in safe' and 'sweep to dustpan,' and S-RVT2 performs worse than RVT-2 in 'slide block' and 'sweep to dustpan.' Although these lower scores are acceptable for specific tasks, more in-depth analysis of why the proposed method underperforms in these cases would strengthen the findings.\n - In the ablation study, the impact of each component varies between S-RVT and S-RVT2. For instance, SPR is more critical for S-RVT2, whereas HSP has a greater influence on S-RVT. Additional analysis on why these components affect the models differently would offer valuable insights.\n* The authors are also encouraged to discuss failure cases of the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024super,\ntitle={Super Robot View Transformer},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1g4s7ME93g},\nnote={under review}\n}"
},
"abstract": {
"value": "Learning a single model for multiple robotic manipulation tasks, particularly high-precision tasks, has been a long-standing challenge in robotics research due to uncertainties inherent in both the model and the data. These uncertainties, namely epistemic uncertainty arising from model limitations and aleatoric uncertainty stemming from data variability, hinder precise control.\nWhile the Robot View Transformer (RVT) improves performance by re-rendering point clouds from fixed viewpoints and processing structured 2D virtual images, it still suffers from occlusion artifacts in rendering and limited action precision due to resolution constraints.\nTo address these limitations, we propose the Super Robot View Transformer (S-RVT) framework, which integrates three novel components: the Super Point Renderer (S-PR), the Super-resolution Multi-View Transformer (S-MVT), and the Hierarchical Sampling Policy (HSP). The S-PR enhances the rendering process to mitigate occlusion artifacts, while the S-MVT integrates super-resolution to the output heatmaps, enabling finer-grained manipulation. The HSP efficiently samples multi-view heatmaps in 3D space to obtain accurate 3D poses.\nThese innovations collaboratively mitigate the challenges of occlusion and precision in manipulation tasks. Our experimental results demonstrate that S-RVT achieves a success rate of 87.8 \\% across 18 manipulation tasks, surpassing the state-of-the-art of 81.4 \\%. Notably, for high-precision manipulation tasks, S-RVT exhibits nearly a two-fold improvement over existing methods, underscoring its effectiveness in precise control scenarios. Our code and trained models will be released to support further research."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"robotic manipulation",
"multi-task learning",
"robot view transformer"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/908210164cdd5f9b606d980de275c53864bd50a1.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/8b7a8883967f466a090a760f04b80492ebbede93.zip"
},
"title": {
"value": "Super Robot View Transformer"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1gqR7yEqnP | Pan for gold | main | Withdraw | Generalization;Overparameterized Network;functional analysis;Domain Adaptation | unsupervised, self-supervised, semi-supervised, and supervised representation learning | Junhoo Lee;Kyomin Hwang;Dongkwan Lee;Han Sangbum;Min Kyu KIM;Nojun Kwak | ~Junhoo_Lee2;~Kyomin_Hwang1;~Dongkwan_Lee1;~Han_Sangbum1;~Min_Kyu_KIM2;~Nojun_Kwak1 | 1;1;3;3;3 | 4;4;3;4;4 | 1;1;2;2;1 | 1;1;2;2;2 | 1;2;2;3;2 | 2.2 | 3.8 | 1.4 | 1.6 | 2 | -0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- can authors discuss the work's limitations and potential impact on future work?\n- can authors discuss potential overlap with prior work notably [1] (see above)?\n- can authors adjust figure 1 with axis labels and explain why the number of samples (if I understand correctly) varies between epoch 1 and 5?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- clear and well-written: the manuscript is well-written and easy to follow\n- relevant topic: the authors tackle an interesting finding (i.e., training with random labels leads to substantial performance increase) that is relevant to the community and connected to multiple popular topics like self-supervised representation learning as well as the emerging idea of a universal representation \n\n[1] Bojanowski, Piotr, and Armand Joulin. \"Unsupervised learning by predicting noise.\" International Conference on Machine Learning. PMLR, 2017.\n[2] Reizinger, Patrik, et al. \"Cross-Entropy Is All You Need To Invert the Data Generating Process.\" arXiv preprint arXiv:2410.21869 (2024).\n[3] Huh, Minyoung, et al. \"The platonic representation hypothesis.\" arXiv preprint arXiv:2405.07987 (2024)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the finding that training neural networks with random labels leads to substantial performance improvements in comparison to randomly initialised neural networks. Authors claim that these experimental findings (supported by empirical evidence & overlapping with prior work -- see below) highlight that the process of learning from data occurs independently of human-imposed structure and inform a novel perspective on the way neural networks work and do not discuss the work's limitations. Authors go on to proposing the use of random labels to fine-tuned pre-trained backbones to improve downstream generalisation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- strong overlap with non-cited work/lack of novelty: the authors centered their work around the observation that random labels offers substantial performance improvement which they claim is a novel finding (\"We completely removed the structure from the learning process by randomizing the class labels, and found that the model actually was able to learn from data despite the complete randomization and even performed better from a generalization perspective.\"). In fact, this observation has been presented and discussed in several works in the past, including [1], which is not cited by the authors.\n- soundness of claims: the paper makes bold claims about \"how neural networks learn\" and what drives this process (\"we present as provocative claim that the process of learning from data happens independently of human-imposed structures. To support this, we introduce the bold alternative hypothesis called the “Pan for Gold”. \"). These claims remain conjectures and hypothesis which are only supported by empirical evidence that the network learns from random labels which does not prove the author's \"pan for gold\" hypothesis. Additionally, authors further justify the relevance of their work by relying on GradCam visualisation, a method proven to be unreliable -- as also mentioned by authors.\n- confidence in empirical findings: while the paper is well-written and clear, there is a lack of polishing of figures and of empirical results which impedes clarity and well as confidence in empirical results (e.g., missing axis labels, randomly masked out portions of curves, single seed experiments, core findings in section one are conducted on two small scale datasets and a single architecture type).\n- missing sections: the authors omit important sections to their work including a related work section and a discussion of the paper's limitations.\n\n\n[1] Bojanowski, Piotr, and Armand Joulin. 
\"Unsupervised learning by predicting noise.\" International Conference on Machine Learning. PMLR, 2017."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. **Theoretical Justification:** Can the authors provide a rigorous theoretical framework to support the \"Pan for Gold\" hypothesis? Specifically, how does SGD with unstructured labels in overparameterized models lead to meaningful generalization, and what are the underlying mechanisms?\n2. **Experimental Validation on Larger Datasets:** Have the authors considered testing the PUL algorithm on larger and more diverse datasets to validate the generality of their claims? Small-scale datasets may not capture the complexities of modern deep-learning tasks.\n3. **Comparative Analysis with Baselines:** How does the PUL algorithm perform compared to existing state-of-the-art methods in unsupervised domain adaptation and object discovery?\n4. **Clarity on Pan for Gold Hypothesis:** The hypothesis seems to conflate the effects of noise and regularization in SGD with meaningful learning from unstructured data. Can the authors clarify how their hypothesis differs from existing theories on overparameterization and implicit regularization?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "* **Challenging Conventional Wisdom:** The paper attempts to question the traditional beliefs about the necessity of structured labels and loss functions in deep learning, which is an interesting and bold endeavor.\n\n* **Novel Hypothesis Introduction:** The \"Pan for Gold\" hypothesis is a creative metaphor that could inspire new ways of thinking about generalization in deep learning.\n\n* **Exploration of Unstructured Labels:** Investigating the effects of training with unstructured labels is an intresting approach that could uncover overlooked aspects of model training dynamics."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the \"Pan for Gold\" hypothesis, which challenges the traditional view that structured labels and well-defined loss functions are essential for deep learning models to learn meaningful representations and generalize well. The authors propose that generalization emerges naturally through the stochasticity inherent in SGD when training overparameterized models, even with unstructured (randomized) labels. They suggest that SGD acts like panning for gold, where valuable features are naturally sifted out from noise without relying on human-imposed structures.\nTo support this hypothesis, the authors conduct experiments where models are trained on datasets with completely randomized labels. Surprisingly, these models still learn meaningful features, as evidenced by improved performance over random initialization. They analyze this phenomenon using the NTK framework and observe a \"swing phenomenon,\" where model outputs fluctuate significantly during early training stages.\nBased on these observations, they introduce the PUL algorithm. They demonstrate PUL's effectiveness in tasks like unsupervised domain adaptation and object discovery. Additionally, they suggest that PUL mitigates issues like massive activation in vision transformers, aiding in model quantization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* **Lack of Theoretical Rigor:** The paper makes strong claims without providing a solid theoretical foundation. The mathematical analysis is superficial and does not rigorously justify the \"Pan for Gold\" hypothesis or explain why unstructured labels should lead to better generalization.\n* **Insufficient Empirical Evidence:** The experimental evaluation is limited and inadequate to support the bold claims made. Experiments are conducted on small datasets like MNIST, CIFAR-10 and SVHN, which are not representative of modern large-scale tasks. The performance improvements reported are marginal and could be due to experimental noise.\n* **No Comparison with Baselines:** The paper fails to compare the proposed PUL algorithm with established baselines or state-of-the-art methods in the respective tasks. Without such comparisons, it's impossible to assess the significance of the results or attribute improvements to the proposed method.\n* **Overgeneralization of Findings:** The authors make sweeping generalizations about deep learning based on limited and specific experiments. The claim that generalization emerges naturally through SGD in overparameterized models trained with unstructured labels is not convincingly demonstrated.\nMethodological Issues: Key details about the experimental setup are missing or unclear, hindering reproducibility. For example, the process of assigning unstructured labels, hyperparameter settings, and specifics of the PUL algorithm are not adequately described.\n\nWeak Analysis of Results: The paper lacks a thorough analysis of the results. It does not explore alternative explanations for the observed phenomena or consider confounding factors. The interpretations often rely on anecdotal observations rather than rigorous investigation.\nAmbiguous Writing and Clarity Issues: The paper is difficult to follow in several sections due to ambiguous explanations and poor organization. 
Key concepts are not clearly defined, and the narrative lacks coherence, making it challenging to understand the proposed ideas fully."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Are the unstructured labels fixed during training or regenerated each epoch? Have you also experimented with changing labels at each epoch? This would be an important experiment as it could lead to completely different learning dynamics since the network cannot memorize stable image-label pairs.\n\n2. In Sections 4.1 and 4.2, how does the performance change with longer training periods? The paper only shows results with 2-3 epochs, but longer training analysis is necessary to understand the stability and effectiveness of the method.\n\n3. How was the optimal number of random classes determined in the PUL algorithm? Did you perform any experiments with different numbers of classes?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper presents a novel and interesting perspective on deep learning generalization by proposing a new hypothesis about the role of stochasticity in learning meaningful features\n2. The proposed methodology is remarkably simple yet demonstrates effectiveness, requiring only random labels and a few additional training steps\n3. The theoretical analysis through Neural Tangent Kernel provides mathematical insights into the learning dynamics and supports the main hypothesis\n4. The experimental results show significant performance improvements across various applications including domain adaptation and object discovery tasks, demonstrating the practical utility of the proposed method"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new hypothesis about generalization in deep learning, suggesting it's not about learning structured patterns in data but rather a \"Pan for Gold\" process where SGD naturally filters useful features, and proposes the PUL algorithm utilizing random labels, demonstrating performance improvements in domain adaptation and object discovery tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper critically lacks essential experimental details needed for reproduction, including the specific method of generating unstructured labels, exact model architecture, hyperparameter settings, and detailed training procedures, making it difficult to validate the claims independently.\n\n2. The paper lacks clear explanation about whether unstructured labels remain fixed during training. Based on the paper's content, it appears that labels are fixed, in which case neural networks would inevitably learn visual features in the process of memorizing image-label pairs, as they need to recognize some visual patterns to distinguish between images even with random labels. This suggests that learning meaningful features might be a natural consequence of the memorization process rather than the proposed \"Pan for Gold\" hypothesis.\n\n3. The theoretical analysis is insufficient as the paper lacks in-depth discussion on why the \"Pan for Gold\" process leads to good generalization, focusing merely on describing phenomena without explaining the underlying mechanisms\n\n4. The experimental validation is limited, lacking analysis of performance with longer training epochs in Sections 4.1 and 4.2, missing ablation studies on the number of unstructured labels, and failing to provide sensitivity analysis for various hyperparameters."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could you provide the definition of structure in this paper?\n2. How do you distinguish the impact of stochasticity with the impact of unsupervised/semi-supervised representation learning/model-inductive-bias/compression?\n3. In Table one, \"we applied transfer learning to the frozen encoder.\" Could you elaborate how is transfer learning carried out?\n\nMinor:\nIn Fig 2, \"red\" should be \"black\""
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is well-written and presents an interesting analysis of generalization through stochasticity. I like the idea of decoupling the impact of human-imposed labels in generalization.\n\nThe paper carries out principled analysis on neural tangent kernel that demonstrates the swing phonemenon in the learning process. The visualizations of learning process through gradCAM and analysis on saliency maps are interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the generalization of deep learning methods when faced with random labels. It argues that generalization is not about learning the structure of data (X, Y), but rather follows a stochastic process that initially fluctuates before converging to a stable function space, akin to \"panning for gold.\" Experiments on unsupervised domain adaptation show that using random labels with KL regularization outperforms the source-only baseline, which does not apply any adaptation. Exploratory analysis also shows that the proposed random labels reduce outliers in the attention map, resulting in a more balanced distribution that is more suitable for quantization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although I liked the exploration of stochasticity and its role in learning, I think the paper places too much emphasis on supervised data and stochasticity while neglecting other important factors in unsupervised and semi-supervised learning. The experiments are relatively weak because they rely on a supervised model trained with ground-truth data and KL regularization, which does not fully demonstrate the impact of stochasticity.\n\n**1. Definition of structure, the role of supervised labels, and considerations for unsupervised/semi-supervised learning**\n\nThe paper frequently mentions \"structure\" but does not provide a clear definition. It appears to me that \"structure\" refers to supervised class labels assigned by humans, and the paper argues that the \"structure\" itself is not the essence of the 'gold' result and that learning also happens with random labels.\n\nI disagree with the notion that \"the goal of deep learning is to learn from data according to structures defined by humans.\" I think the supervised signal is only one source of information that models use to learn. Other sources of information include data itself as used in unsupervised or self-supervised learning, information from a model decision boundary manifested through unlabeled data as in transductive and semi-supervised learning setups, and assumptions about the world such as convolution for image processing. The paper solely focuses on the structure from supervisory signals and ignores other sources of information, which also plays a crucial role in learning that could be attributed to stochasticity in this paper.\n\n**2. Experiments**\n\nThe experiments focus on domain adaptation where a supervised model has been trained on ground-truth labels in the source domain, and the goal is to adapt the model to a closely related target domain. 
This setup weakens the empirical results because the model is initialized with a learned representation from ground-truth labels and does not fully demonstrate the impact of stochasticity in a learning-from-scratch setup. Moreover, the model only performs small adjustments due to the constraint of KL regularization that penalizes the model for deviating from the source model. It is well-known that, in semi-supervised learning, a model could outperform the source-only baseline without any target labels. It is unclear if the improves is from stochasticity or semi-supervised learning (clustering assumption, KL, transduction through batchnorm)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The paper is horribly written and throws at the face of the reader, all throughout, ill-defined terms high level philosophical terms that I do not know what to make of."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed.",
"Yes, Other reasons (please specify below)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "see above"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper presents the observation that training on data with random labels as an unsupervised per-training scheme can learn useful “features”, though the details of how are left very vague.\nI am not familiar enough with this area of literature to know how novel this claim is. To me it does not seem surprising that training on random labels leads to a better model initialisation than the no pertaining.\nUnfortunately the written quality of the paper is very low and thus it is difficult to determine if the paper really offers any strengths."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents the observation that training on data with random labels can learn useful “features”, and the author suggest doing this as an unsupervised per-training scheme, though the details of how are left very vague. Unfortunately the written quality of the paper is very low, it is difficult to determine much more than this."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I apologise if English is not the first language of the authors but the written quality of the paper is sadly unacceptable for ICLR. At the moment the paper is almost impossible to follow. While there are a lot of machine learning terms mention the sentences are so unspecific and vague, lacking in concrete definitions and jumping from place to place it makes assessing the ideas of the authors impossible for me. I do not know if LLM’s have been used in the writing of the paper, or in translating from another language but the final quality is just not good enough at the moment.\n\nThe issues with the write are as follows:\n\n1. Use of vague terms, which are not defined. For example “structure”, “function speed”, “Noisy features”, “Gold features”\n\n2. Vague handy wavey claims not back up with references: “This aligns with physical intuition, where relaxation of tension leads to the stabilization of the space, and high-energy regions, like artifacts, are naturally eliminated.”\n\n3. Reference to a “Panning through Unstructured Label (PUL) algorithm” which is never really defined, you have to try and glean what the algorithm is from passing comments.\n\n4. The details of the experiment are extremely vague. For example. The full text exampling an experiment is: “Table 5 presents the object discovery performance on various trained models on ResNet50, including ImageNet pretrained, DINO (Caron et al., 2021a), and ImageNet pretrained weights further trained using our method. As can be seen from the results, even with just a three epochs of training using unstructured labels, performance can be easily improved. ”\n\n5. 
Baselines for the experiments are not explained, the experiments jump from place to place with very little intuition and justification of why important choices were made We would like to recommend the paper is rewritten with a greater leave of specificity, intuition and detail before being resubmitted.\n\n\nOther.\n\nThe experiments are extremely limited, typically only considering a single model, data set with no mention of repeats or the training procedure\nBaselines for the experiments are extremely limited a single baseline is used, with little detail of what this baseline was and why it should lead to a fair comparison\nIn the conclusion the authors claim they introduce a “bold alternative hypothesis called the “Pan for Gold”,” but after reading the paper I’m am left wondering what this current hypothesis that “Pan for Gold” is mean to be an alternative too? This is never explained in any level of detail."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper provides new insight on generalization through functional analysis and algorithms. The algorithm is applicable to wide area and sota in some areas"
},
"_bibtex": {
"value": "@misc{\nlee2024pan,\ntitle={Pan for gold},\nauthor={Junhoo Lee and Kyomin Hwang and Dongkwan Lee and Han Sangbum and Min Kyu KIM and Nojun Kwak},\nyear={2024},\nurl={https://openreview.net/forum?id=1gqR7yEqnP}\n}"
},
"abstract": {
"value": "Training a deep model is fundamentally about reducing loss, and we often believe that a ''good model'' is one that trained with a ''good loss.'' This paper investigates that belief. We show that even when learning with unstructured, randomized labels, models can still discover generalized features. We propose that generalization in deep learning is not about learning the structure of data through a well-structured loss, but rather a process akin to ''pan for gold,'' where gradient descent shakes through the function space, naturally stabilizing useful features. To support this, we present quantitative and qualitative experimental evidence, and introduce the Panning through Unstructured Label (PUL) algorithm. We demonstrate its effectiveness across various fields, showing improvements in unsupervised domain adaptation, state-of-the-art performance in object discovery, and its ability to mitigate massive attention issues. Finally, we offer a new interpretation of existing deep learning assumptions, challenging the conventional beliefs in the field."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Junhoo_Lee2",
"~Kyomin_Hwang1",
"~Dongkwan_Lee1",
"~Han_Sangbum1",
"~Min_Kyu_KIM2",
"~Nojun_Kwak1"
]
},
"authors": {
"value": [
"Junhoo Lee",
"Kyomin Hwang",
"Dongkwan Lee",
"Han Sangbum",
"Min Kyu KIM",
"Nojun Kwak"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Generalization",
"Overparameterized Network",
"functional analysis",
"Domain Adaptation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "lee|pan_for_gold"
},
"pdf": {
"value": "/pdf/0319d8978ae3a05fcc93747d94b335e773d1b564.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Pan for gold"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
1hQKHHUsMx | What Kind of Pretraining Data Do Large Language Models Rely on When Doing Reasoning? | main | Active | large language model; LLM; reasoning; pretraining data; influence functions; mathematical reasoning | foundation or frontier models, including LLMs | 5;6;8;8 | 4;2;3;3 | 3;2;4;3 | 2;3;3;3 | 3;2;3;3 | 6.75 | 3 | 3 | 2.75 | 2.75 | -0.272166 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could you further explain why calculating document gradients with the base model and the query gradients with the fine-tuned model? Could this discrepancy cause any potential problems?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper tries to tackle an intellectually significant question: how do LLMs generalize reasoning abilities from pretraining data to solve completion questions? This exploration into the mechanics of LLM reasoning generalization is both timely and meaningful, given the increasing focus on interpretability and robustness in AI. \n2. The findings provide intuitive insights, showing that LLMs draw on a broad range of abstractly related documents when solving reasoning questions, as opposed to the more targeted document reliance seen in factual questions. This highlights the importance of procedural knowledge and coding data for reasoning tasks, an observation that aligns with broader intuitions about reasoning and learning in LLMs.\n3. A key technical strength lies in the revision and adaptation of EK-FAC influence functions. The authors refine this method to assess the influence on model accuracy, which is essential for examining how specific documents impact LLM performance in reasoning versus factual tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the role of pretraining data in shaping large language models' (LLMs) abilities in reasoning tasks compared to factual question-answering. By analyzing two models of different sizes (7B and 35B parameters) across reasoning and factual queries, the authors aim to understand how LLMs generalize when tackling reasoning tasks and whether they rely on specific retrieval of information or broader procedural knowledge. The study applies influence functions to rank the most impactful pretraining documents for different queries, examining if reasoning draws from procedural patterns rather than specific facts.\n\nEmpirically, the study finds that reasoning tasks rely on a more distributed set of documents, often containing procedural content like code snippets or mathematical explanations, while factual questions frequently rely on specific documents containing direct answers. Code-based documents, in particular, emerge as influential for reasoning, likely due to their structured, step-by-step nature. Additionally, reasoning tasks across similar queries show correlated influence scores, suggesting a reliance on shared procedural knowledge. The larger 35B model also shows less variation in influence across documents, hinting at improved data efficiency. Together, these findings imply that LLMs approach reasoning by aggregating procedural knowledge rather than retrieving isolated factual data, shedding light on different generalization strategies in LLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The overall style resembles a blog post, presenting intriguing observations over a cohesive scientific narrative. For example, the conclusion/discussion section takes more than 1 page to explain everything again. The paper could either prioritize the revised EK-FAC function or convert the observations into some actionable strategies to improve LLMs. Additionally, reorganizing the paper to integrate findings more succinctly could create a more cohesive narrative.\n2. Although the paper acknowledges computational constraints, the scale of data and task complexity could be expanded to strengthen the conclusions. The study’s focus on basic arithmetic and simple mathematical queries limits its generalizability to broader reasoning tasks that are common in real-world applications. Also, the study examines only a subset (5 million documents) of the total pretraining data, which may exclude influential documents crucial to understanding the LLMs’ full generalization strategy.\n3. The paper predominantly examines positively influential documents, yet negatively influential documents could offer essential insights into reasoning limitations and biases. Understanding negative influences would allow the authors to identify pretraining data that hinders reasoning or introduces procedural noise, shedding light on inherent biases that might restrict generalization. Only focusing on the positively influential documents might bias our judgements towards cherry-picking conclusions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could you elaborate more on how you define \"procedural knowledge\" in the context of your findings? How does this relate to the concept of learning algorithms or routines within the training data?\n- Given the high influence of code documents, how might this skew the model's reasoning capabilities, especially in non-coding contexts?\n- With these insights, what are the potential adjustments or enhancements in training strategies for LLMs to improve their reasoning generalization?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper provides an important insight of LLMs, namely how models generalize beyond their training data, which is crucial for advancing reasoning capabilities of LLMs.\n- The use of influence functions to study generalization in LLMs offers a good perspective on how models might learn to reason.\n- The experiments are well-executed, and the analysis and explanation for drawing the findings are reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates the generalization strategies employed by LLMs when performing reasoning tasks compared to factual recall. The authors examine the influence of pretraining data on two LLMs of different sizes (7B and 35B parameters) by using influence functions to rank documents based on their impact on the likelihood of model outputs for reasoning and factual questions. They find that for reasoning tasks, LLMs do not rely heavily on direct retrieval of answers from pretraining data but instead use a broader set of documents that contain procedural knowledge relevant to the task. This suggests that LLMs generalize by learning how to perform reasoning steps rather than memorizing specific solutions. In contrast, for factual questions, the influential documents often directly contain the answers. The authors also note the overrepresentation of code in influential documents for reasoning, indicating its importance in teaching procedural knowledge to the models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The study only looks at a subset of the pretraining data, which might not capture less frequent but highly influential documents. \n- Findings are based on two models from the same organization, potentially limiting the generalizability across different architectures or training regimes.\n- There's no cross-validation with other methods of understanding model behavior which could corroborate the findings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have no serious questions for the authors, but if they have time:\n1. Can this methodology be applied to model false-positives? It would be interesting to explore how pretraining documents may relate to hallucinations in generative responses, given prior research which points to cases of memorization."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. This paper presents a series of interesting and novel investigations into the influence of documents from pretraining in model responses. Most research in model interpretability is done by examining or modulating model parameters and activations, since it is usually computationally intractable to trace model responses back to pretraining samples; this is frontier research, and I was excited to read it.\n\n2. The paper presents insights into which documents are used to answer mathematical reasoning questions, and crucially provides comparisons between two models within the same family, and also to a secondary task in factual question answering. The latter comparison was especially useful and cleanly conveyed the points made: specifically, that factual responses often rely on a specific document, but evidence is shown that reasoning responses may draw on a breadth of documents, possibly aggregating heterogeneous information into one response.\n\n3. The experiments were extremely narrowly defined, but the authors caveat this early and often throughout the paper. Additionally, even in this narrowly scoped setting approximations must be made in order to be computationally tractable, and the authors honestly qualify discussions with reasonable alternate hypotheses and give sub-experiments to explore what is the most likely hypothesis. This kind of writing is very thoughtful and I appreciated that the authors made reasonable decisions and honestly qualified the claims, which were well supported."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper applies the EK-FAC influence function to LLMs in an investigation of which documents, from a representative sample of LLM pretraining data, are used by a given model to answer basic mathematical reasoning questions. The EK-FAC influence function is used to produce a score for a given triple (prompt, completion, document) and these scores provide a basis to rank documents as more or less useful in generating the completion for the given prompt. Due to intense computational requirements, this technique is applied on a small sample of 80 prompt/completion pairs, but in great detail, examining several hundred documents at the top of the ranking for each pair. Several key findings emerge, including that models employ documents for reasoning responses in a different manner than for factual responses, and that such mathematical reasoning responses often rely on documents describing verbal procedures or code."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. As mentioned above, the experiments were very narrowly scoped. Only 80 questions were analyzed in total, and this 80 was further broken down into smaller sub-groups. Moreover, the questions were very simple mathematical problems using small numbers, requiring only short reasoning hops, and not resulting in fractional-valued answers. The experiments were performed only on two models within one model family, and one model is not available publicly. The authors do note all of these things, and some (not all) of these decisions seem to be made due to computational constraints, which is understandable. However, it would have been nice if these experiments were at least reproduced on fully public models such as Llama.\n\n2. The description of EK-FAC was brief and not as clearly described as the later experiments and results, which were very clear. It would be nice to have a little more motivation about the individual components in the given formulas, since this methodology underlies all of the later experiments. Further, the discussion section at the end of the paper (sec 5) was very dense and a bit confusing. Maybe this could be restructured? The alternating hypotheses in the paragraph starting on L490 were particularly hard to follow.\n\n3. (This is a minor point) Some of the mystique surrounding \"reasoning\" in LLMs may be because as a field we have conflated many types of problems into one, in the fervor of \"AGI\". Though this paper often discusses general reasoning, it looks specifically at mathematical reasoning, and it could be made more clear that these studies are distinct from linguistic reasoning, logical reasoning, spatial, etc etc. Analyzing linguistic reasoning provenance would be fascinating using this method, but would require different experiments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why were these two specific LLMs chosen, instead of more widely used and capable models?\n2. Using both fine-tuned and base models in the same experiment could lead to unreliable results due to differences in parameter initialization, potentially affecting influence calculations.\n3. Since LLMs rely on embedded representations, even if keyword matching fails to find an answer, does it conclusively mean the document is not similar to the answer?\n4. Could examples of retrieved documents for reasoning tasks be provided to offer insights into how they influence the model's approach to reasoning?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method is straightforward, well-explained, and includes sufficient detail, making it easily reproducible.\n2. This research addresses a crucial question: identifying what training data impacts LLM reasoning abilities, an area closely tied to model generalization and interpretability. It contributes to our understanding of LLMs.\n3. The paper presents intriguing findings, highlighting distinctions in how LLMs handle factual versus reasoning tasks. For instance, factual questions frequently retrieve specific information, while reasoning tasks benefit from procedural knowledge."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the influence of specific pretraining data on the reasoning abilities of large language models (LLMs), focusing on how models rely on different types of documents when responding to reasoning versus factual queries. The paper applies influence functions to identify pretraining documents that impact performance on simple reasoning tasks. Results show that factual questions often depend on a smaller set of documents containing the answer, whereas reasoning questions are more influenced by documents with procedural knowledge."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experimental setup is limited, potentially compromising the reliability of conclusions. Specifically: (1) only 80 queries were used for analysis, (2) the study included only three types of reasoning tasks, potentially limiting representation to other reasoning tasks, (3) there was no exploration of how different prompt formulations of the same query affect results, and (4) keyword-based methods for determining whether documents contain answers may be insufficiently accurate.\n2. The analysis may lack granularity, as it considers only each document’s influence on the overall completion without examining its impact on individual reasoning steps. This might affect the conclusions.\n3. While Appendix A.1 reports that influence scores are higher for certain documents, their similarity to random selections raises questions about whether influence functions reliably indicate actual influence."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We identify the data from a subset of pretraining data that is influential for downstream reasoning and factual questions for two LLMs of different sizes, and find evidence for a reasoning strategy that is unlike retrieval."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024what,\ntitle={What Kind of Pretraining Data Do Large Language Models Rely on When Doing Reasoning?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1hQKHHUsMx},\nnote={under review}\n}"
},
"abstract": {
"value": "The capabilities and limitations of Large Language Models (LLMs) have been sketched out in great detail in recent years, providing an intriguing yet conflicting picture. On the one hand, LLMs demonstrate a general ability to solve problems. On the other hand, they show surprising reasoning gaps when compared to humans, casting doubt on the robustness of their generalisation strategies. The sheer volume of data used in the design of LLMs has precluded us from applying the method traditionally used to measure generalisation; train-test set separation. In this work, we study what kind of generalisation strategies LLMs employ when performing reasoning tasks by investigating the pretraining data they rely on. For two models of different sizes (7B and 35B) and 2.5B of their pretraining tokens, we identify what documents impact three simple mathematical reasoning tasks and contrast this to the data that are influential for answering factual questions. We find that, while the models rely on mostly distinct sets of data for each factual question, documents often have a similar influence on different reasoning questions with the same task, indicating the presence of procedural knowledge. We further find that the answers to the factual questions often show up in the most influential data. However, for the reasoning questions the answers usually do not show up as highly influential, nor do the answers to the intermediate reasoning steps. When we characterise the top portion of the ranking for the reasoning questions qualitatively, we find that the influential documents often contain procedural knowledge, like demonstrating how to obtain the solution using formulae or code. Our findings indicate that the generalisation strategy the model uses when doing reasoning is unlike retrieval, but more like a strategy using many documents doing a similar form of reasoning."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language model; LLM; reasoning; pretraining data; influence functions; mathematical reasoning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/75e06c79517fb2df91bfa5df65289a7fa272838b.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9c07b9b53fa8813a4f85d6e07992d46fa6211b84.zip"
},
"title": {
"value": "What Kind of Pretraining Data Do Large Language Models Rely on When Doing Reasoning?"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1hT2fsHbK9 | From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training | main | Active | diffusion;variational inference;SDEs;PDEs;sampling;stochastic processes;GFlowNets | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | 5;5;5;5 | 4;2;4;3 | 2;2;3;3 | 2;1;2;2 | 3;1;2;4 | 5 | 3.25 | 2.5 | 1.75 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Is it correct to expect that the ELBO gap should converge to zero as the discretization becomes finer, or are there inherent limitations in the approach that cause the gap to saturate at a positive value? Clarifying this could help contextualize the observed results better.\n2. Are there any existing benchmarks or prior work that provide a comparable measure of ELBO gap performance for optimally trained diffusion samplers? How do the proposed methods stack up in this context?\n3. Can the authors provide more insight into why random placement of time steps works so (unexpectedly) well? Is there an intuitive or theoretical rationale for this observed behavior?\n4. In Theorem 3.4, there seems to be a potential issue as $\\vec μ_t$ appears twice in the statement. Could this be a mistake, or is there a specific reasoning behind this repetition? Clarification would be helpful."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is very well written and easy to follow, with clear exposition of the mathematical derivations and the empirical results.\n2. The experimental section is thorough and well designed, exploring the effects of different discretization strategies and their impact on performance in detail. The benchmarks used are diverse and represent a wide range of sampling challenges.\n3. The work provides strong empirical evidence that non-uniform time discretization (particularly random placement) improves training efficiency. This observation could be highly relevant for practitioners working with high-dimensional diffusion models. Furthermore, the identification of random time discretization as a performant strategy is novel and supported by robust experimental evidence.\n4. The paper effectively summarizes existing methods and objectives for diffusion sampling, offering a clear context for the proposed contributions and situating them within the broader body of work on diffusion models and sampling techniques."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the training of diffusion samplers and neural stochastic differential equations (neural SDEs) by examining the connection between continuous-time objectives and their discrete-time counterparts. The authors establish that global objectives for discrete-time policies converge to path-space measure divergence objectives in the continuous-time limit, while local constraints asymptotically align with partial differential equations governing the time evolution of marginal densities. This theoretical grounding aims to bridge reinforcement learning (RL) objectives and stochastic control frameworks for diffusion processes. Empirically, the paper demonstrates that training with coarse, non-uniform time steps, particularly with random placements, can achieve substantial computational efficiency gains while retaining strong performance across a range of benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the theoretical contributions are valuable and provide an interesting link between discrete-time and continuous-time objectives, they are not completely unexpected and partly already present in the literature.\n2. In the experimental results, it is noted that the ELBO gap does not converge to zero as the discretization becomes finer but instead appears to stabilize at a positive value. The authors do not give an explanation for this phenomenon. In particular, the lack of a \"benchmark\" makes difficult to connect these simulations to the numerical results presented in the first part of the paper above.\n3. The observed performance gains with randomly placed time steps are well supported by empirical results, but the paper does not provide a theoretical explanation for why this approach works so well. Offering more insight into this phenomenon would enhance the overall impact of the findings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could the authors clarify why so much space is devoted to standard results? Would simplifying or condensing this content help highlight the unique contributions?\n\n2. Beyond applying existing convergence results, what novel techniques, if any, were introduced in proving Propositions 3.2, 3.3, and 3.4?\n\n3. Would more complex or realistic benchmarks alter the experimental outcomes, particularly in high-dimensional or non-Markovian sampling settings?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The approach of linking discrete-time policy objectives with continuous-time SDE training is a useful idea, albeit heavily reliant on established results.\n\n2. Authors show that this method potentially reduces computational costs for neural SDE training."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper examines training neural stochastic differential equations (SDEs) to sample from Boltzmann distributions without target samples. This work derives asymptotic equivalences by linking discrete-time policies to continuous-time diffusion. The approach is validated on sampling benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Firstly, I think the presentation of this work remains a major bottleneck for readers. Section 2 is preliminary, and it spans from pages 3 to 7. Such a lengthy preliminary section introduces well-known equations and results (e.g., equations (4)-(6) from GFlowNet papers, (9)-(15) from stochastic control and diffusion models, and (16), (17) as standard Euler-Maruyama discretizations). \nThese derivations, mostly grounded in existing work, dilute the contributions and add an undue burden for readers. Figures like Figure 3, which illustrate obvious points, seem unnecessary and further contribute to this issue. It is recommended to present additional informative and easy-to-follow diagrams in these sections.\n\n2. The primary theoretical contribution—showing asymptotic convergence from Euler-Maruyama discretization to continuous-time SDEs (Propositions 3.2, 3.3, 3.4)—seems not surprising. The convergence results are probably straightforward applications of established SDE theory, with little added insights or unique techniques. Without further exploration of new derivation techniques or distinctive theoretical angles, the contributions feel like direct applications of existing results.\n\n3. The experiments are conducted on standard synthetic benchmarks, such as Gaussian mixtures and low-dimensional toy distributions. To support this approach, it might be necessary to conduct higher-dimensional Bayesian inference tasks where the Boltzmann distribution is more untractable. Besides, the compared baselines exclude many recent models, such as flow-based generative models. \n\n3. While efficiency is demonstrated, additional benchmarks comparing computational costs with traditional methods in larger dimensions would be helpful for real-world applications."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "My main question is this: for a ML practitioner, how will the authors' results help?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper appears to be mathematically rigorous and experiments appear to give credence to the authors' work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper discusses the relationship between continuous and discrete-time stochastic processes and their training. In particular, the main results give a series of propositions on how a discrete-time process can approximate a continuous time process. I have to say I had a hard time understanding the \"big picture\" of the authors' results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I found the paper very difficult to read. The notation is dense, not all appears to be defined, some is non-standard and unclear and I found it a little tricky to understand exactly what the authors wanted to do. It may be that the authors have solved in interesting problem in a genuinely useful way but that was unclear from the paper. All but the very expert reader would, in my view, find the paper a difficult read. \n\nA few specific comments are:\n\n- Abstract could be more informative and precise\n- Introduction is quite meandering and I wasn't quite clear on exactly what the authors were trying to do.\n- Figures 1 and 2 were placed, in my view, quite early in the paper and were hard to interpret. They needed more textual description, or, considering where they were placed, needed \"dumbing down\" a little. \n- Equation (1) is somewhat standard but, for completeness, it would have been useful to know what \\sigma(t) is (I could guess). Equation (1) is similar to (9) apart from \\mu(t). I think the differences between the various forms of \\mu(t) needs to be explained in more detail.\n- It wasn't clear to me exactly what the reverse arrow meant in terms of policy e.g. the backwards arrow is used on \\pi(t) below equation (3) but without any definition as far as I can see. \n- I found Section 2 quite muddled with various different concepts introduced with not too much explanation. I realise there is a page limit, but it was bordering on the unpenetrable. \n- I didn't really understand how the Propositions in Section 3 ended up affecting the Results in Section 4. Perhaps I am dense, but it would be good if the authors could explain this better."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The problem studied in this paper is well-motivated.\n\n* This paper presents extensive results in both theory and experiments.\n\n* The appendix provides a comprehensive complement to the main text."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the connection between continuous-time SDEs and their discretization, particularly focusing on the influence of the chosen timestep. It demonstrates that using non-uniformly discretized time with fewer steps can achieve similar performance during inference. Theoretical results are provided to support this approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The theoretical results in Section 3 primarily focus on the convergence of the Euler-Maruyama method. Specifically, they show that convergence is ensured as the maximal step size approaches zero. However, these results do not explain why non-uniform discretization would generally be superior to uniform discretization. The advantage of non-uniform discretization—one of the main contributions of this paper—is demonstrated only through experiments\n\n* As previously mentioned, there seems to be a gap between the theoretical and empirical sections of this paper. After reading the introduction, I expected to see concrete theoretical results that justify the use of non-uniform discretization. However, simply showing that convergence is guaranteed as $\\Delta t$ approaches zero is unsurprising. The authors might consider adding more discussion on why uniform discretization is not always the optimal choice\n\n* It has been proven that the order of convergence is determined by the step size, and the Euler-Maruyama scheme with uniform discretization has been shown to achieve optimal performance in the general case (see 'Numerical Treatment of Stochastic Equations' by Rümelin, 1982). I wonder if the claim made in this paper contradicts that result.\n\nI would be willing to increase my rating if the authors are able to address my concerns."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We find theoretical connections between discrete-time and continuous-time training objectives for diffusion samplers and show their empirical implications for faster training."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024from,\ntitle={From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1hT2fsHbK9},\nnote={under review}\n}"
},
"abstract": {
"value": "We study the problem of training neural stochastic differential equations, or diffusion models, to sample from a Boltzmann distribution without access to target samples. Existing methods for training such models enforce time-reversal of the generative and noising processes, using either differentiable simulation or off-policy reinforcement learning (RL). We prove equivalences between families of objectives in the limit of infinitesimal discretization steps, linking entropic RL methods (GFlowNets) with continuous-time objects (partial differential equations and path space measures). We further show that an appropriate choice of coarse time discretization during training allows greatly improved sample efficiency and the use of time-local objectives, achieving competitive performance on standard sampling benchmarks with reduced computational cost."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"diffusion",
"variational inference",
"SDEs",
"PDEs",
"sampling",
"stochastic processes",
"GFlowNets"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0ad32a97a547aad5965a088d5e0b27a2ac9d410e.pdf"
},
"presentation": null,
"primary_area": {
"value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a976a40327ac91689205332763505867d7185932.zip"
},
"title": {
"value": "From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1i6lkavJ94 | Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering | main | Active | Conformal Prediction;Generative Models;Risk Control;Active Learning;Language Models | generative models | 3;5;6;8 | 2;3;1;4 | 2;3;2;3 | 2;2;2;3 | 3;3;3;3 | 5.5 | 2.5 | 2.5 | 2.25 | 3 | 0.496139 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "-- I might have missed it, but do at least a fraction of the experiments report some kind of human evaluation? I think to validate the method, at least some of it might be reasonable. While using another generative model (as a judge or to generate a calibration set) is popular, it still presents an incomplete analysis. \n\n-- The authors might also want to discuss the works on conditional language validity by Cherian https://arxiv.org/abs/2406.09714 and also the earlier work by Mohri and Hashimoto https://arxiv.org/abs/2402.10978. Further, there is also a literature on confidence scoring which is often used for fine-tuning and reducing hallucinations. e.g. Kuhn et al. https://arxiv.org/abs/2302.09664, Lin et al. https://arxiv.org/abs/2305.19187, Wang and Holmes https://web3.arxiv.org/abs/2406.05213. It would be useful to include a brief discussion of these and how conformal methods might be used to calibrate such scores. It would help to bridge the two somewhat separate lines of enquiry together."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-- The paper is quite well-written and clear. Each of the steps are well-described and easy to follow. \n\n-- The method is novel along multiple axes: Wrappers generally make far more simplifying assumptions due to the intractability of conformal prediction directly in such settings; the admissibility control criteria and the connections to pareto methods are interesting and could provide straightforward avenues for future work.\n\n-- While still compute intensive (since multiple generations are required), it could be tuned based on available data for some domain."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This fairly well-explicated paper considers the following question of much recent interest: how could we obtain some semblance of guarantees for generative models' outputs i.e. in terms of factuality. This is related to the problem of hallucination control directly, since a method involving calibration for correctness could reduce hallucination if a suitable domain-specific calibration set were available. The approach of the paper is simple yet innovative. In a basic sense it does the following: In the first step, the method samples from the generative model conditioned on a fixed input. There is a calibration parameter that controls, based on a suitable non-conformity measure, that the generations contain at least one correct generation. Then, the generated set is pruned further using separate calibration parameters based on diversity and factuality considerations. Unlike some previous works, the sequential nature of the process permits the overall admissibility to be easily factorizable, permitting a direct application of conformal prediction proper. An interesting connection is also made to the pareto methods since there are multiple calibration parameters to be handled. The experiments report a general improvement datasets and are sufficient to demonstrate the applicability of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-- There are now a bunch of papers on using conformal wrappers for filtering long form generations by dividing them into segments and then scoring. I think it would have been great to evaluate on such tasks as well, as generally QA type tasks are a bit too easy.\n\n-- Sampling multiple times and expecting a correct response can be quite compute-intensive."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* The paper defines admissibility as including at least one (semantically) correct answer in the prediction set and aims to minimize the prediction set size while ensuring this inclusion. This is achieved by a “sub-sampling” technique, sampling answers based on a quality score ranking. Can the proposed method generalize to a broader admissibility definition, such as including multiple correct answers (e.g., 5 out of 10) or maximizing the fraction of correct answers? How would this method perform compared to baselines if the goal were to optimize the fraction of acceptable answers within a fixed prediction set size?\n* How does the sequence of filtering stages (diversity vs. quality) impact performance? Why is diversity filtering prioritized in the proposed method?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* This paper presents an efficient approach for generating prediction sets with admissibility guarantees by using a sequential generation and greedy filtering strategy.\n* It reduces the number of admissibility checks during calibration compared to previous baselines, improving computational efficiency.\n* Experimental results support the method’s effectiveness in reducing both query counts and prediction set sizes to meet admissibility criteria."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents SCOPE-Gen, a sequential conformal prediction method designed to generate prediction sets that satisfy admissibility guarantees with high probability. The method operates in two main stages: a generation stage and a filtering stage. In the generation stage, i.i.d. samples are drawn from the generative model until a specified non-conformity measure (related to sample count or quality) surpasses a threshold set by calibration with ground-truth samples. In the filtering stage, the prediction set is refined in a greedy manner, optimizing for diversity and quality based on another threshold derived from calibration. To ensure admissibility, the approach leverages a Markov chain factorization for admissibility control, and calibration is conducted on independent, non-overlapping data subsets to enable this factorization. Experimental results demonstrate that SCOPE-Gen reduces both the number of queries to the admission function during calibration and the size of the prediction set needed to meet admissibility requirements, outperforming baseline methods like CLM."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The paper lacks a theoretical analysis detailing how effectively the proposed method reduces the required admissibility checks and prediction set size.\n* The sequential generation and filtering process may introduce additional computational costs by generating a large number of samples before the filtering stage.\n* The calibration process, which involves sample splitting for generation and each filtering stage, may require extra ground-truth samples to determine accurate threshold (lambda) values."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. One of the motivations for the method is the possible reliance on human oracle and expenses related to querying it. In the end the experiment use a non-human validation function. Related questions:\n- Would not the human oracle make some of the assumptions invalid (e.g. the need for increasing update function)?\n- As some point you mention that multiple queries over the same example need to be executed - wouldn't the human validation bring even more noise into the whole process and invalidate some of your probabilistic conclusions?\n2. I do not understand equation (6). What level of quantile is this? What is the interpretation of the n-fracion in the right-hand side?\n3. Is there a way to independently post-evaluate that the experimental results are really conformal with the $\\alpha$ level you were trying to achieve. Or would you need to use your own calibration parameters? If it were possible, this would provide additional useful insight.\n4. Please address the concerns mentioned under Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper addresses an important problem of current stochastic generative models hallucinating non-factual responses. Formulating this problem within the risk-control framework can provide the mathematical means for addressing it. The experimental evaluation over the natural langauge tasks seems relevant."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a heuristic algorithm to filter predictions of generative model to achieve conformal admissability. It argues that previous techniques need to evaluate the admissability multiple times per instance during the calibration phase, which is impractical when the admissability is evaluated by a human oracle. In their setup, the admissability factorizes into Markov chain and thus es requires fewer queries to the admission function (e.g. human oracle). The paper presents the algorithm for the filtering heuristic as well as the necessary calibration algorithms using two filters based on diversity and quality functions of the generated examples. The experiments over natural language generative tasks and molecular generation reporting favorable metrics in particular in terms of reduction in number of queries and runtime."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I find the empirical evaluation very difficult to asses both, on its own as well as in comparison to previous methods. To understand the effects of various parts of the proposed algorithms, it would be beneficial to perform ablation studies that could provide more insights into the effects of its individual components (e.g. the update function, the coverage level $\\alpha$, etc.). \nI am not convinced about the benefits of the molecular example - when there is only one valid/admissable example, it seems to me that simple checking of the validity at the generation time for each generated specimen should be enough. I do not see the benefit of the proposed method in this setup. This also applies to the TriviaQA experiment.\nThe algorithm requires an independent calibration set which seems to be very difficult to obtain in practice. In the presented experiments either boils down to something very trivial (single example being the valid one) or relying on another model which itself may be of uncertain quality. Further, I see similar issue with the update and filter functions which seem difficult to formulate reasonable in realistic scenarios. For me these are major limitations of the method which shall be discussed. \n\nThe main text of the paper (section 9) spills over to page 11. As per the call for papers, there is a strict 10 page limit on the main text and the call suggests a desk reject in case of violations of this limit."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 1
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Can the author clarify the results in Table 2 of the appendix? It appears that the performance gap between SCOPE and CLM is narrowing - can the authors explain why this might be happening?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Generally well-written; it’s clear that the manuscript is largely inspired (and adopted) from the setup in Quach et al 2024. Nevertheless, the authors detailed differences to Quach et al 2024 and highlight the efficiency of their method by leveraging sequential factorization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This manuscript introduces a sequential conformal prediction method called SCOPE-Gen. SCOPE-Gen uses a sequential pruning approach to iteratively refine the prediction set, allowing for separate control over each factor in the Markov chain, and demonstrates a significant reduction in the number of admissibility evaluations required during calibration. The method has been experimentally validated in natural language generation and molecular graph extension tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- (minor) Fig 1 is slightly unclear — the caption should at least include some explanation of \\nu, which is not specified until Sec 3"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024conformal,\ntitle={Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1i6lkavJ94},\nnote={under review}\n}"
},
"abstract": {
"value": "Generative models lack rigorous statistical guarantees with respect to their predictions. In this work, we propose Sequential Conformal Prediction for Generative Models (SCOPE-Gen), a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee called conformal admissibility control. This guarantee means that the prediction sets contain at least one admissible (or valid) example, with high probability. To this end, our method first samples an initial set of i.i.d. examples from a black box generative model. Then, this set is iteratively pruned via so-called greedy filters. As a consequence of the iterative generation procedure, admissibility of the final prediction set factorizes as a Markov chain, where each factor can be controlled separately, using conformal prediction. In comparison to prior work, our method demonstrates a large reduction in the number of admissibility evaluations during calibration. This is crucial e.g. in safety-critical applications, where these evaluations must be conducted manually by domain experts and are therefore costly and time consuming. We highlight the advantages of our method in terms of admissibility evaluations and cardinality of the prediction set through experiments in natural language generation and molecular graph extension tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Conformal Prediction",
"Generative Models",
"Risk Control",
"Active Learning",
"Language Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7671aa4a190d0778e0e821656421711e15127489.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/b4a371dedafafb4dd0825d0f906c7a3a692b0196.zip"
},
"title": {
"value": "Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1iuaxjssVp | Fast Uncovering of Protein Sequence Diversity from Structure | main | Active | Protein design;inverse folding;generative modelling;transfer learning | generative models | 5;6;8;8 | 5;4;3;3 | 3;2;3;3 | 3;2;4;4 | 3;4;3;3 | 6.75 | 3.75 | 2.75 | 3.25 | 3.25 | -0.98644 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* How is sampling of InvMSAFold-PW achieved? Which MCMC algorithm do you use? \n* By using a PCA projection you show that sequences generated by InvFoldMSA have a better coverage of sequence space. But why do you restrict the analysis to the first two principal components? \n* Have you tried AlphaFold3 to validate the sequences generated by InvFoldMSA?\n\n__Typos / grammar__\n\n* Line 117: \"and whos outputs\"\n* Line 212: \"can be reduce to\"\n* Line 248: \"robsutly\"\n* Line 277: \"chose\" - should be present tense\n* Line 302/303: What do you mean by \"consistent with the hardness reasoning behind the split\"\n* Line 475: \"becoming worse that both\"\n* The use of the symbol $\\\\propto$ to indicate quality up to an additive constant is a bit unusual."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* An interesting idea to approach the inverse folding problem (i.e. the problem of generating sequences that fold into a given structure). \n* Proposes a low-rank approximation of the couplings and fields of the lightweight sequence model. \n* Fast generation of sequences that fit a well to a given structure."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "InvMSAFold is an inverse folding method that is optimized for diversity and speed. The general idea is to use a neural net to predict from an input structure and sequence a pairwise interaction model (a Potts model or Boltzmann machine) that captures the structure-sequence relationship and can be used to efficiently generate sequences that differ largely from the input sequence. To tame the number of parameters (fields and pairwise couplings), InvMSAFold predicts a low-rank approximation of the coupling matrix. The paper proposes two models: 1) InvMSAFold-PW is a full pairwise model that reduces the number of parameters significantly and also allows for efficient learning by using a maximum pseudo-likelihood. A drawback is that sequence generation requires MCMC. 2) InvMSAFold-AR is an autoregressive model whose likelihood is tractable thereby allowing for Maximum-likelihood parameter estimation as well as sampling of sequences in a straight forward fashion. Using various metrics the authors show that InvMSAFold, and in particular InvMSAFold-AR, outperforms current state of the art."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The idea of generating a Potts model has already been proposed by Li et al. (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I was not clear on the InvMSAFold-AR/-PW. I understand that PW requires MCMC sampling and AR does not but I wonder are there cases/tasks in which one of the two PW/AR models is more appropriate? \n\nWhat would be an example in which you could demonstrate preserved functional integrity that is not directly related to structural integrity in your model's generation of diverse protein sequences? It seems an important question because when you want to design a protein to do some specific function (bind some small molecule or interact with another protein) you only care about structure to the extent that it acts as a proxy for function. But maybe it doesn't have to be? Do you think your models could get at function outside the restraint of the specific structure that is your input?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The sampling speed of InvMSAFold is a lot faster than ESM-1F or ProteinMPNN, this is important when you want to generate millions of models, as I think could be reasonable for virtual screening/protein design applications. \n\nInvMSAFold seems able to sample more diverse regions of potential protein structure/function space than ESM-1F, again this is important when you are trying to select for particular properties (substrate specificity, thermostability).\n\nThat InvMSAFold is able to capture residue covariances in MSAs may also be useful for better backbone modeling that particular functions could then be engineered into."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents InvMSAFold, an inverse folding model that generates the parameters of a probability distribution over the space of protein sequences with pairwise interwise interactions, allowing for efficient generation of diverse protein sequences while preserving structural and functional integrity. InvMSAFold is a neural network in which the inputs are the structure backbone coordinates X and the outputs are the parameters of a lightweight sequence model. The lightweight sequence model parameters are used to sample amino acid sequences compatible with the input structure. Training is based on the CATH database, which classifies protein domains into superfamilies and further into clusters based on sequence homology. The model is fast and has uses in protein design and virtual screening. Biologically, the model captures amino acid covariances observed in Multiple Sequence Alignments (MSA) of homologous proteins. The model expands the scope of inverse folding to retrieve a landscape of homologous proteins with similar folds (they say the 'entire' landscape, I don't think they have shown this). I am overall very enthusiastic about this work."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There is not a specific example taken through to the conclusion that the model preserves \"structural and functional integrity\". Functional integrity is what you want when you're designing new proteins/doing virtual screening. The authors should consider including such an example or clarifying this statement since that is a major claim of their paper. \n\nI was not clear on the InvMSAFold-AR/-PW. I understand that PW requires MCMC and AR does not but I wonder are there cases/tasks in which a PW vs AR model is more appropriate?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tThe symbols of Eq.(5) is not consistent with that of Eq.(3). It would be better to use consistent symbols.\n2.\tThe proof in section 2.2.1 is incoherent. What is the function of Eq.(5)?\n3.\tIn Eq.(7). It would be better to clarify that Eq.(7) is the L2 regularization term. \n4.\tIn section 3, it would be better to list the number of entries in each dataset.\n5.\tIn section 4.1, what is the necessity of tuning the hyper-parameters of InvMSAFold-AR?\n6.\tIt seems that InvMSAFold-PW performs better than InvMSAFold-AR at larger hamming distance. What is the probable cause?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "In their computational experiments, the authors demonstrated that the sequences generated by their models not only fold into the target structure but also exhibit greater diversity and more effectively capture the correlations between residues at different sites. Furthermore, the showed that this sequence diversity extends to other properties, such as predicted solubility and predicted and predicted thermostability. Overall, this paper represents a new methodological advancement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a neural network called InvMSAFold, which takes the protein structure as input, and outputs the parameters of two statistical models. These models are then used to generate a diverse set of protein sequences corresponding to the input structure. By utilizing these simple statistical models, the proposed pipeline effectively addresses two major challenges faced by other inverse-folding methods, such as ESM-IF: (1) the limited diversity of generated sequences and (2) slow sampling speed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe authors only compare their method with ESM-IF1, and do not compare their method with other state-of-the-art inverse folding methods.\n2.\tIn many places such as in section 1, \"ESM-IF\" was wrongly typed as \"ESM-1F\". This may lead readers to perceive the authors as lacking expertise.\n3.\tThe article contains too many grammatical errors."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "I'd like to raise a few major points:\n* It would strengthen the paper to benchmark against other methods as well. I would suggest for example a simple Potts model without the low-rank approximation and without the pre-trained ESM1-IF encoder, and an additional method such as ProteinMPNN beyond ESM1-IF. This would highlight the contributions of the paper more clearly, as currently, it may seem somewhat reliant on ESM1-IF.\n* I highly recommend adding a plot that shows RMSD versus sequence recovery, as these metrics would provide valuable insights into the model’s performance.\n* In Section 2.2.1, the explanation of Equation 7 and how it maintains linear scaling isn’t entirely obvious, at least to me. I suggest elaborating on this either within the main text or in the supplementary material to clarify the reasoning. It would be helpful to include 1-2 sentences explaining why the method or process is linear and how this linearity is established. This will provide clarity to the reader and strengthen the argument by highlighting the underlying reasoning behind the concept.\n* To make the manuscript even stronger, it would be useful to include (1) an analysis of how the method scales with very large sequences or structures, and (2) a discussion of how the size of the MSA impacts model performance.\n\nA minor point:\n* In Section 2.2, I recommend including the formula that shows the normalization constant, as it is referenced in the text but not explicitly provided.\n\nThere are several typos throughout the manuscript that disrupt the flow. 
I have listed the ones I noticed while reading, but I recommend a re-read of the manuscript to specifically check for additional typos:\n - Line 17: The phrase “space of sequences with pairwise interwise interactions, capturing the amino acid…” contains the term “interwise,” which doesn’t seem correct or clear.\n - Lines 107-108: The word \"Moreover\" is used consecutively, which disrupts the flow.\n - Line 117: The word \"Whos\" should be corrected to \"Whose.\"\n - Line 299: \"We monitor the the negative...\"—\"the\" is repeated.\n - Line 315: \"A can be seen...\" should likely be \"As can be seen...\""
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors present a novel and elegant approach to optimizing Potts model construction, training and sampling. The paper is well-structured, clearly outlining each crucial part of the methodology in a way that is easy to follow. The method is compared and benchmarked against a well-established approach, and performance metrics computed and reported."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors present an efficient method for designing protein backbones using a neural network that predicts a Potts model. The proposed architecture includes a pre-trained ESM1-IF encoder that encodes the protein backbone, generating rotation-invariant embeddings. These embeddings are then passed through a transformer-based decoder, which produces a low-rank matrix that is ultimately used to compute the fields and couplings. The low-rank approximation is a clever technique that helps mitigate the quadratic scaling cost typically associated with such computations. The neural network was trained using two distinct approaches: (1) a standard pseudo-likelihood loss, and (2) autoregressive sampling (over amino acids) with maximum likelihood training. To avoid training on single sequences, the model was trained on multiple sequence alignments (MSAs), with the mean pseudo-negative log-likelihood calculated over randomly sampled subsets of the MSA. Training and testing data were sourced from the CATH database, following its hierarchical classification to create test sets of varying difficulty, depending on the similarity between the training and test data. The authors demonstrate that their model better reconstructs covariance matrices compared to ESM1-IF, based on Pearson correlations. Moreover, the authors show that projected MSAs using PCA more closely reflect the natural sequence distributions, suggesting that their generated sequences, or predicted MSAs, are more diverse. 
When refolding designed sequences for test set structures, the InvMSAfold method proves to be more robust than ESM1-IF for sequences that deviate further from the native structure, and comparable to ESM1-IF for sequences that are highly similar to the native.\nIn conclusion, the paper demonstrates how a Potts model can be efficiently constructed, showing that the resulting model generates sequences that are plausible, diverse, refold successfully with AlphaFold2, and possess other promising biochemical attributes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the proposed methodology for improving the efficiency of Potts model construction is promising, there are a few areas where the paper could be strengthened. First, Potts models have long been used in fixed-backbone protein design, which makes it difficult to clearly identify the novelty and specific contributions of this work. Additionally, the method relies on components of ESM1-IF and then benchmarks against this model, which may limit the fairness or objectivity of the comparison. Another area for improvement is scalability. The paper does not provide any analysis on how the model handles large structures or long sequences, which could be useful for evaluating its broader applicability. Furthermore, there is no discussion on the significance of using MSAs for training versus single-sequence training, nor is there any exploration of how deep the MSAs need to be if they are indeed important."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024fast,\ntitle={Fast Uncovering of Protein Sequence Diversity from Structure},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1iuaxjssVp},\nnote={under review}\n}"
},
"abstract": {
"value": "We present InvMSAFold, an inverse folding method for generating protein sequences optimized for diversity and speed. For a given structure, InvMSAFold generates the parameters of a pairwise probability distribution over the space of sequences, capturing the amino acid covariances observed in Multiple Sequence Alignments (MSA) of homologous proteins. This allows for the efficient generation of highly diverse protein sequences while preserving structural and functional integrity.\nWe demonstrate that this increased diversity in sampled sequences translates into greater variability in biochemical properties, highlighting the exciting potential of our method for applications such as protein design. The orders of magnitude improvement in sampling speed compared to existing methods unlocks new possibilities for high-throughput in virtual screening."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Protein design",
"inverse folding",
"generative modelling",
"transfer learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3709c8e9b59d23fa64cf18fb78a629d8a6840a1e.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/1a7de29aea5bc16bb22901f204f2710fca50eb01.zip"
},
"title": {
"value": "Fast Uncovering of Protein Sequence Diversity from Structure"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1jcnvghayD | Bayesian Optimization via Continual Variational Last Layer Training | main | Active | Bayesian deep learning;bayesian optimization;uncertainty | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | 3;5;8;8 | 4;4;3;4 | 2;1;3;4 | 2;3;3;3 | 3;3;3;3 | 6 | 3.75 | 2.5 | 2.75 | 3 | -0.544331 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could you further elaborate on the differences between LLLA and VBLL? LLLA outperforms VBLL on many of the non-stationary benchmarks, and from line 530, it appears that VBLL-based approaches may be less flexible than last-layer Laplace approximations."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The method is well-explained and is theoretically justified, and there are additional modifications which can be made to increase the efficiency such as feature re-use and sparse full model retraining. This flexibility enables practitioners to balance the tradeoff between model performance and computational cost.\n- The authors use a diverse setting of test objectives, specifically demonstrating performance on instances with high-dimensionality and non-stationarity. \n- VBLL appears to be robust to hyperparameter choices and can be used as a drop-in surrogate model, unlike typical GPs which require careful kernel selection."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose VBLL networks as a surrogate for Bayesian optimization and identify a relationship between optimizing the variational posterior and recursive Bayesian linear regression. They demonstrate competitive BO results with VBLL on diverse single and multi-objective benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Although it appears that one of the primary motivations behind this work is the increased efficiency compared to other BNN surrogates, there is no measure of runtime or computational cost within the paper. It would be helpful to understand how these methods perform as a function of computational budget. This could also help clarify the difference in performances between VBLL and VBLL CL. \n- There is also currently no demonstration of why this approximation would be preferred over the using the exact marginal likelihood with approaches like DNGO [1]. Without these baselines, there is minimal evidence that the proposed method has practical merit over existing work. Furthermore, it would also be useful to compare last-layer methods like VBLL to more expensive BNN surrogates like deep ensembles so we can assess the tradeoff between computation and performance. \n\n[1] Snoek et al, Scalable Bayesian Optimization Using Deep Neural Networks, 2015"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* In line 110 und 111 it helps to point out that $\\mathbf{w} = S^{-1} \\mathbf{q}$ is the vector of precision-means (for those unfamiliar with natural parameters of the Normal distribution)\n* The proof of Theorem 1 in the appendix relies on the simple observation that the approximating family contains the true distribution. I would have preferred to see this in the main body of the text; it's less \"mechanical\" than I expected and key to the reasoning of the paper (line 178-189 can be significantly shortened b/c it uses the well known Cholesky decomposition for approximating the inverse and log-determinant of the precision)\n* Line 217: Where is the parameter $V$ (Wishart prior scale) necessary?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper combines two well studied ideas in an elegant way and the presentation is (relatively) easy to follow (though I would have wished a bit more emphasis on the natural parameterization of the Normal distributions as this is key to the computational efficiency). The empirical studies are extensive and well discussed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a combination of two ideas: (!) Bayesian last-layer training of neural networks with (2) using a parametric Bayesian linear model for black-box function optimization. Using natural parameterization of the last layer Gaussian and assuming independent Gaussian noise, it is possible to use a continual update which is only $O(N^2)$ in the last-layer number of neurons $N$. As with every parametric Bayesian function model, the acquisition function can be directly and analytically optimized using Thompson sampling. The empirical results on a wide variety of Bayesian Optimization tasks are promising."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "One aspect that is disregarded by the paper is how to chose the network architecture for all but the last-layer; I have no idea how sensitive the quality of the proposed approach is to this. In essence, the complexity of choosing a kernel function for GPs has been shifted to the network architecture of the underlying neural network. This is not discussed in sufficient detail. Also, only at the end the difference to Laplace approximation of the last layer is discussed; I would have expected this in the Related Work section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to comments in \"Weaknesses\"."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and a pleasure to read. The problem statement is clear from the outset, and the connections to related work are extensive. I also appreciated how the paper’s focus on practical aspects such as improving training efficiency via continual learning. Having the method implemented in BoTorch is also appealing to practitioners wanting to experiment using this method in real-world settings. \n\nThe experiments demonstrate that VBLL performs well in the targeted settings having complex input correlations and non-Euclidean structures. Showing that VBLL outperforms competing techniques on real-world datasets such as the Oil Sorbent and Pest Control datasets adds further credence to how VBLL is suited to multi-objective settings prone to numerical instability.\n\nWhile the contributions may initially appear incremental, adapting VBLL to BO introduces challenges that require non-trivial solutions. The need for efficient, online training in BO necessitated the development of recursive conditioning and continual learning updates, which are distinct from standard regression tasks. Addressing the requirements of multi-objective and high-dimensional settings also required effective workarounds to address numerical stability issues."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Although Gaussian processes (GPs) are widely used for Bayesian Optimisation (BO), they are not always well suited to modelling functions with complex correlations across inputs, and are often limited by the choice of kernel function. On the other hand, Bayesian neural networks (BNNs) can better handle complex non-Euclidean data, but are computationally expensive and challenging to condition on new data. In this work, the authors extend recent work on Variational Bayesian Last Layer (VBLL) models specifically for BO. They show how VBLLs can be adapted as efficient surrogate models for BO tasks through modified training procedures, which enable continual, online training with recursive conditioning, improving scalability. Additionally, the authors demonstrate how VBLL’s parametric structure enables effective Thompson sampling for both single- and multi-objective acquisition functions, offering more stability and numerical efficiency compared to GPs. Experiments compare VBLL’s performance against other techniques such as baseline GPs, and BNNs, and show that VBLL performs especially well on complex tasks, such as the Oil Sorbent benchmark, where other approaches struggle due to numerical instability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Deep kernel learning was the first method to come to mind when reading the motivation for this work. While I appreciated its inclusion in the experimental section, I would have liked more discussion in the earlier sections on why DKL might be less ideal than VBLL. To my understanding, DKL’s computational complexity, especially in high-dimensional settings, might be a key differentiator, but additional detail on this would help clarify VBLL’s practical advantages right from the outset.\n2. While the experiments are quite extensive, I would appreciate more insight on the cases where the method is expected to underperform compared to other approaches. Although dedicated experiments are provided in the supplementary material, high-level insights on possible sensitivity to hyper-parameters could also be included in the main text."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I think the runtime advantage of the suggested algorithm must be more clearly presented, since it is the motivation of much of the methodology. Specifically, be accompanied by results showcasing a benefit (in e.g. regret/performance per wallclock time), especially compared to LLLA which in terms of regret per iteration performs similarly. \n\n2. There are a couple of sections in the methodology which I think unjustifiably and unnecessarily make claims without backing (see Weaknesses). I recommend the authors to look over the claims and make sure they have backing; either by adding relevant proofs, experiments or references for claims important to the paper, or lessening/removing claims which may be unnecessary. It is okay that not every design decision in a larger algorithm (or system or model) is fully backed, but then those design decisions should arguably not be presented as central parts of the methodology accompanied by unbacked claims. \n\nOverall, the paper addresses a meaningful gap in the literature. If the concerns outlined above are addressed, I would be inclined to raise my evaluation score."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper makes a valuable contribution to the field by highlighting the capabilities of a class of BNN models for Bayesian Optimization (BO) and introducing a practical technique for efficient updates in online settings, including BO scenarios. The writing is clear and well-structured, and the findings are substantiated by experiments conducted in both single-objective and multi-objective settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Gaussian Process (GP) models are widely regarded as state-of-the-art surrogates for Bayesian Optimization, thanks to their strong predictive performance, efficient updates for small datasets, and straightforward uncertainty quantification. However, they can be challenging to apply to search problems with non-stationary or complex correlation structures. In contrast, Bayesian Neural Networks (BNNs) are more flexible and can handle such problems with minimal adaptation, but they have traditionally been associated with high computational costs and unreliable uncertainty estimates.\n\nThis paper introduces a new method that leverages variational last-layer BNNs, combining advantages of both GPs and BNNs. The proposed approach demonstrates superior performance over GPs and several other BNN architectures in tasks with complex input correlations while achieving comparable results to GPs on standard benchmark problems."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The presented method appears to perform comparably to Last Layer Laplace Approximations (LLLA) without clearly demonstrating new advantages. The paper emphasises computational concerns — and presumably could provide a favourable runtime-performance trade-off—but that is not empirically validated and the method is seemingly not compared against the baselines in this respect.\n\nThe claim that Laplace approximations are more sensitive to noise lacks sufficient support, as there is no accompanying experiment or reference to substantiate it (see Section 6, \"Performance and Baselines\"). Providing evidence here would strengthen the argument.\n\nThe discussion around early-stopping based on training loss (see Section 3.2, \"Early Stopping\") makes significant claims, such as training \"only as long as necessary to achieve optimal performance\" and suggesting that applying a similar criterion could benefit training neural network-based surrogate models in BO more broadly. While it is reasonable to argue that stopping training before full convergence improves runtime efficiency and can serve as a regularisation heuristic, the lack of experimental results of the trade-off in the setting when presented as a methodological contribution is a major omission. The effect of early stopping on the quality of the fitted model should be demonstrated through empirical evaluation, such as predictive error on relevant functions or/and assessing its impact on BO performance.\n\nThe choice of length scales [0.005, 4] for the GP model (see Section 5.1, \"Surrogate Models\") appears to be unsuitable for the high-dimensional benchmarks considered. As demonstrated in (1) \"Vanilla Bayesian Optimization Performs Great in High Dimensions\" (ICML 2024), length scales around \\sqrt{D} are generally more effective for Bayesian Optimization in high-dimensional settings. 
Using more appropriate lengthscales (specifically adopting a suitable lengthscale prior with mass concentrated near \\sqrt{D}) could potentially dramatically enhance the model's performance, making it a more informative comparison. \n(1) https://arxiv.org/pdf/2402.02229"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We develop an efficient and expressive Bayesian neural network surrogate for Bayesian optimization"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024bayesian,\ntitle={Bayesian Optimization via Continual Variational Last Layer Training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1jcnvghayD},\nnote={under review}\n}"
},
"abstract": {
"value": "Gaussian Processes (GPs) are widely seen as the state-of-the-art surrogate models for Bayesian optimization (BO) due to their ability to model uncertainty and their performance on tasks where correlations are easily captured (such as those defined by Euclidean metrics) and their ability to be efficiently updated online. However, the performance of GPs depends on the choice of kernel, and kernel selection for complex correlation structures is often difficult or must be made bespoke. While Bayesian neural networks are a promising direction for higher capacity surrogate models, they have so far seen limited use due to a combination of cost of use and poor performance. In this paper, we propose an approach which offers the strengths of both methods. We build on variational Bayesian last layers (VBLLs), which provide a simple and computationally lightweight approach to Bayesian uncertainty quantification in neural networks. We connect training of these models to exact conditioning in GPs, and propose an efficient online training algorithm that interleaves conditioning and optimization. Our findings suggest that VBLL networks significantly outperform GPs and other BNN architectures on tasks with complex input correlations, and match the performance of well-tuned GPs on established benchmark tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Bayesian deep learning",
"bayesian optimization",
"uncertainty"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/44bd5f0a5292b638cb4b94d5891b4fa7ae58232e.pdf"
},
"presentation": null,
"primary_area": {
"value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Bayesian Optimization via Continual Variational Last Layer Training"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1kFDrYCuSu | PAL: Sample-Efficient Personalized Reward Modeling for Pluralistic Alignment | main | Active | alignment;preference learning;foundation model;reward model;ideal point model;plurality | alignment, fairness, safety, privacy, and societal considerations | 5;8 | 3;2 | 3;3 | 3;3 | 2;3 | 6.5 | 2.5 | 3 | 3 | 2.5 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses. Additional questions:\n\n1. Why did the Reddit experiments not include results for PAL-A? Am I missing anything?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper challenges a commonly overlooked assumption in the alignment literature: that individuals may have different preferences. The two proposed approaches are novel and clearly differentiate the work from existing methods (e.g. KTO), which treat such inconsistencies as noise in preference data. The presentation is clear and easy to follow, with motivations and formulations of the proposed method covered in detail. The experiments show strong performance of the proposed method. Moreover, the proposed method can be trained on a single consumer-grade GPU, whereas the baselines are trained on multiple A100s."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles the issue of personalized reward modeling. The authors recognize that different users may not generate consistent preferences across different context-sample pairs. They propose two novel modeling methods that represent preference with a latent reward function. PAL-A assumes the reward of a sample-context pair is determined by its distance to the user's ideal point in a latent space. It further assumes that the ideal points of a population lie in the convex hull of several supporting points in the latent space, so an individual's personalized preference can be recovered through a weighted average of supporting points. PAL-B represents preference as an unknown preference mapping, and commonalities are similarly modeled as a convex combination of K prototypical mappings.\n\nThe authors conducted extensive experiments on both the NLP and T2I generation domains, achieving SOTA results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some experiment details are missing or not well-documented, and the presentation is a bit unclear. For example, while the authors clearly documented the choice of base model and training data for the Pick-v2 results (table 3), such information is not included for Pick-v1 (table 2). **What is the base model for results in table 2? Is it vanilla CLIP-H or PickScore?**\n\nWithout this knowledge, it is hard to evaluate the claim on parameter efficiency. For example, if v1 results are reported via a model that is fine-tuned on Pick-v1 embeddings, then it is hard to argue that the model is more parameter efficient, since it starts from a fully fine-tuned model. Overall, presentation-wise, I think it would make more sense to add table 2 as an extra column in table 3, which would reduce confusion about the setup.\n\n2. Results on Pick-a-Filter are unconvincing.\n\n2.1 The PickScore baseline is missing. The authors claim that they cannot compare against PickScore as its training set overlaps with Pick-a-Filter’s val set. However, this can be trivially fixed by eliminating the overlapped examples, which the authors already did for table 3. Why not do the same? Alternatively, given the large sample size of Pick-a-Pic v2, it is not hard to construct a custom validation split that does not overlap with the training data of PickScore.\n\n**Please either eliminate the overlapping examples from the Pick-a-Filter validation set and include PickScore as a baseline, or\nconstruct a custom validation split from Pick-a-Pic v2 that doesn't overlap with PickScore's training data, and use this for comparison, or justify why these options are not possible**\n\n\n2.2 The red and blue filter examples seem too trivial, and I suspect the obvious color differences will overshadow the \"commonalities\" in preference. I think the key benefit of the proposed method is that it captures both the \"common preferences\" and \"individual variations\". However, for the color filters, a naive color classifier may also achieve high accuracy in this example. It is unclear if the proposed method offers any benefit over such a classifier; this comparison is required.\n\nThis is also highlighted in Fig 4, where the differences in the low-beta region are unclear (side note: presentation-wise this figure needs improvement; it is hard to tell which line is higher). I think the low-beta region might be more representative of the actual discrepancies in human preferences. However, as the PickScore baseline is missing from Figure 4 (see comments in 2.1), it is hard to tell if PAL offers any benefits in this region. I imagine a proper PickScore comparison would be a flat line that resembles CLIP and HPSv2. The question is whether the PickScore line would be higher than PAL's in the low-beta region.\n\n**I would highly appreciate it if the authors could provide more discussion on the significance of the results on Pick-a-Filter, particularly the non-linear improvement in Figure 4.** It may seem that the model simply collapses to a color classifier in the high-beta region. **The authors should discuss whether PAL is simply collapsing to a color classifier. I suggest the authors compare PAL against a naive color classifier. I'm open to other means/discussion on this topic as well.**"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "Please verify if the similar work at https://pal-alignment.github.io/ is by the same authors to check for plagiarism or ICLR's policy on prior publication at ICML workshops."
},
"flag_for_ethics_review": {
"value": [
"Yes, Research integrity issues (e.g., plagiarism, dual submission)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The PAL framework enables personalized reward modeling with fewer samples, which is highly beneficial for data collection and model training, especially when data annotation costs are high.\n2. PAL has shown superior performance across multiple tasks, not only surpassing or matching the accuracy of existing state-of-the-art methods but also significantly reducing the number of parameters, demonstrating dual advantages in efficiency and effectiveness.\n3. The paper not only provides empirical evaluations but also theoretical analyses, proving the sample complexity of PAL in generalizing to new users and unseen samples, providing a solid theoretical foundation for the model's reliability and effectiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The PAL framework aims to capture the diverse human preferences reflected in foundation models trained on internet-scale data, for pluralistic alignment. The modular design of the framework allows it to leverage commonalities among users while providing personalization to each user, achieving efficient few-shot localization of preferences for new users. Through extensive empirical evaluation, PAL demonstrates performance that matches or exceeds state-of-the-art methods on both text-to-text and text-to-image tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Figure 1, as the first figure of the paper, is somewhat confusing, with pasted images and formulas that are unclear. It lacks a clear caption, and the introduction does not provide specific details, making it difficult to understand the meaning of the symbols in the figure or the order in which to view them.\n2. The modeling of user preferences in the paper mainly focuses on the preference for summary length, which may not cover a broader range of personalized preferences. The preference distribution of seen and unseen users is limited to summary-length preference and may not be comprehensive enough.\n3. The datasets used in the paper may lack sufficient diversity, which limits the model's generalization in a broader range of scenarios."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A novel alignment framework to learn from heterogeneous human preferences"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024pal,\ntitle={{PAL}: Sample-Efficient Personalized Reward Modeling for Pluralistic Alignment},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1kFDrYCuSu},\nnote={under review}\n}"
},
"abstract": {
"value": "Foundation models trained on internet-scale data benefit from extensive alignment to human preferences before deployment. However, existing methods typically assume a homogeneous preference shared by all individuals, overlooking the diversity inherent in human values. In this work, we propose a general reward modeling framework for pluralistic alignment (PAL), which incorporates diverse preferences from the ground up. PAL has a modular design that leverages commonalities across users while catering to individual personalization, enabling efficient few-shot localization of preferences for new users. Extensive empirical evaluation demonstrates that PAL matches or outperforms state-of-the-art methods on both text-to-text and text-to-image tasks: on Reddit TL;DR Summary, PAL is 1.7% more accurate for seen users and 36% more accurate for unseen users compared to the previous best method, with 100× less parameters. On Pick-a-Pic v2, PAL is 2.5% more accurate than the best method with 156× fewer learned parameters. Finally, we provide theoretical analysis for generalization of rewards learned via PAL framework showcasing the reduction in number of samples needed per user."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"alignment",
"preference learning",
"foundation model",
"reward model",
"ideal point model",
"plurality"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3790406f5546c86a6e15e78054a0fac92463604f.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "PAL: Sample-Efficient Personalized Reward Modeling for Pluralistic Alignment"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1kMTJnqmyl | A Realistic Threat Model for Large Language Model Jailbreaks | main | Active | LLM;jailbreaks;threat model;robustness | foundation or frontier models, including LLMs | 3;5;5;8 | 4;4;4;4 | 3;3;2;4 | 2;2;2;3 | 3;3;3;4 | 5.25 | 4 | 3 | 2.25 | 3.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. A unified model to evaluate different attacks.\n2. N-gram LM has certain advantages."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new threat model to compare different attacks. This threat model includes a perplexity filter based on an N-gram language model and a constraint on FLOPs. A fine-tuned LLM judge measures the ASR. Many existing attacks fail under this threat model. By taking the proposed perplexity filter into account during optimization, the adapted attacks can recover much of their ASR."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Only white-box attacks are considered. In the real world, black-box attacks are more practical. As shown by Figure 13, white-box attacks in this new threat model have lower transferability; that is, this threat model cannot measure black-box attacks very well.\n\n2. The perplexity filter is not new.\n\n3. There are other defenses such as an [instruction filter](https://arxiv.org/abs/2312.06674), [random perturbation](https://arxiv.org/abs/2310.03684), etc. Why doesn't the threat model consider them?\n\n4. Evidence is needed to show that the N-gram LM is better than an LLM-based perplexity measure. Some experiments are necessary."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses section."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. A systematic study of jailbreaking attacks is an important direction and can lay the foundation for future research. This paper provides a good study in this space, which helps the community better develop new techniques for attacking and protecting LLMs.\n2. The proposed N-gram model is effective in filtering several attacks that generate gibberish text. Those jailbreaking prompts are quite different from natural sentences, making them easily detectable and hence not robust.\n3. The evaluation is comprehensive, including multiple recent LLMs and safety-aligned models. The baseline attacks are chosen from the state of the art."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Jailbreaking attacks on large language models (LLMs) are widely studied in the literature. However, those attack prompts are usually non-natural text. This paper proposes an N-gram method to measure the fluency of generated attack prompts. It shows that this simple approach can filter out several existing jailbreaking attacks and significantly reduce their attack success rates. The paper then proposes an adaptive attack which considers the N-gram perplexity of the attack prompt during generation. The results show it can boost the attack performance of existing jailbreaking methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It is known that several existing jailbreaking attacks generate non-natural text. There have been many proposed methods for filtering such jailbreaking prompts [1][2]. This insight mentioned in the paper is not new. The proposed approach of using an N-gram model is straightforward. Hence, the novelty seems limited.\n2. According to Table 2, using the perplexity measure of llama2-7b can distinguish the two optimized suffixes. Why is this approach not used to evaluate jailbreaking attacks in Tables 3 and 4? Additionally, there are many other filtering methods such as [1][2]. It is important to compare the performance of the proposed approach with these techniques.\n3. The paper introduces an adaptive attack that considers the N-gram measure during adversarial prompt generation. It is strange that the paper introduces an attack against the proposed measure and then uses the same measure to evaluate the performance. It is similar to self-verifying correctness. It is suggested to use other filtering methods such as [1][2] and the Llama perplexity to evaluate the final attack performance.\n4. The case shown in Table 2 for the adaptive attack does not seem like natural text either. Why does the N-gram model not filter this attack prompt? For example, the phrase “A questions their She” is very unlikely to exist in normal text. With the 8-gram model used in the paper, it should be able to filter this out. Could the authors explain why this case bypasses the detection?\n\n\n[1] Alon, Gabriel, and Michael Kamfonas. \"Detecting language model attacks with perplexity.\" arXiv preprint arXiv:2308.14132 (2023).\n[2] Inan, Hakan, et al. \"Llama guard: Llm-based input-output safeguard for human-ai conversations.\" arXiv preprint arXiv:2312.06674 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- I am generally confused about the \"threat model\" framing, e.g., \"universal threat model for comparing attacks across different models\" — how does the threat model actually allow you to compare?\n- The claim in Table 1 is \"The Lack of an LLM-Agnostic Threat Model Renders Attacks Incomparable.\" But I think the attacks are comparable, just on different axes? I could not follow precisely what the claim is here? I also think the most important thing is ASR? The paper makes the claim many times that the attacks are incomparable, but I just cannot follow this.\n - In terms of needed progress, the threat model is basically what frontier AI labs release? I'm not sure a new threat model is what is needed.\n- How does the N-gram model do on longer user queries? Presumably the perplexity increases substantially with longer queries. Does this mean that this defence would not work well with long-context models? Some of the latest models from frontier labs can have very long context lengths. This makes me think the threat model might not actually be appropriate."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- I thought the idea of using the N-gram model was interesting.\n- The paper is fairly clearly written, and the plots are clearly presented.\n- I thought the analysis showing that an N-gram perplexity constraint increases compute time for GCG, and that it reduces ASR, was interesting. The analysis comparing ASR against FLOPs was generally very interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new \"threat model\" for LLM jailbreaks (I am not convinced that this is the right framing here) using an N-gram perplexity approach. There are some interesting ideas, but I think the framing needs adjustment before being ready for publication."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I'm not sure if the contribution is the threat model or the N-gram language model perplexity filter?\n - As far as I understand, the \"threat model\" is basically assuming the N-gram approach is the right way of doing things, but I am not sure that is clearly established here? If the point of the paper is to establish the threat model, there should be lots of evidence it is an appropriate defence.\n - I don't find this evidence in the paper. It is simply assumed that this is an appropriate defence?\n- I am not convinced the threat model is the best one. I think the best threat model is trying to break what frontier AI labs have released. I think claiming the threat model here is realistic is significantly overclaiming.\n- I think the results section would benefit from making the implications of the results much clearer."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "* Clarify Equation 1: Provide a complete and formal definition of the Judge function. This will enhance the paper's clarity and reproducibility.\n\n* Enhance Comparative Analysis: Include a comparison with existing perplexity detectors, specifically the method proposed in arXiv:2308.14132."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "* Unified Threat Model: The paper addresses the critical need for a standardized framework to compare various jailbreak attacks, providing clarity in a field crowded with disparate methods.\n\n* Interpretability: By employing an N-gram model for perplexity measurement, the threat model remains interpretable and LLM-agnostic, facilitating a deeper understanding of why certain attacks succeed or fail.\n\n* Comprehensive Benchmarking: Adapting and evaluating popular attacks within the proposed threat model allows for a fair and rigorous comparison, advancing the discourse on LLM vulnerabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a unified threat model for evaluating jailbreak attacks on safety-tuned LLMs. Recognizing the multitude of existing jailbreak methods that vary in success rates, fluency, and computational effort, the authors propose a framework that combines constraints on perplexity—measuring deviation from natural text—and computational budget quantified by FLOPs. To achieve an LLM-agnostic and interpretable evaluation, they construct an N-gram model based on one trillion tokens. Adapting popular attacks within this new threat model, the paper benchmarks these methods against modern safety-tuned models on equal footing. The findings indicate that attack success rates are lower than previously reported, with discrete optimization-based attacks outperforming recent LLM-based methods. Furthermore, effective attacks tend to exploit infrequent N-grams, selecting sequences that are either absent from real-world text or rare, such as those specific to code datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Comparison with Existing Methods: The paper would benefit from a direct comparison with existing perplexity detectors, such as the one proposed by Alon et al. (arXiv:2308.14132). This would contextualize the proposed model within the current state-of-the-art and highlight its relative advantages.\n\n* Perplexity Measurement Limitations: While the N-gram model offers interpretability, it may not capture the nuances of natural language as effectively as model-based perplexity measures, potentially affecting the evaluation's accuracy."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Jailbreaking attacks are not comparable - we propose a way to do so via a realistic threat model and show, how to adapt popular attacks to it."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A Realistic Threat Model for Large Language Model Jailbreaks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1kMTJnqmyl},\nnote={under review}\n}"
},
"abstract": {
"value": "A plethora of jailbreaking attacks have been proposed to obtain harmful responses from safety-tuned LLMs. In their original settings, these methods all largely succeed in coercing the target output, but their attacks vary substantially in fluency and computational effort. In this work, we propose a unified threat model for the principled comparison of these methods. Our threat model combines constraints in perplexity, measuring how far a jailbreak deviates from natural text, and computational budget, in total FLOPs.\nFor the former, we build an N-gram model on 1T tokens, which, in contrast to model-based perplexity, allows for an LLM-agnostic and inherently interpretable evaluation. We adapt popular attacks to this new, realistic threat model, with which we, for the first time, benchmark these attacks on equal footing. After a rigorous comparison, we not only find attack success rates against safety-tuned modern models to be lower than previously presented, but also find that attacks based on discrete optimization significantly outperform recent LLM-based attacks. Further, our threat model is interpretable, thus it allows for a comprehensive analysis and comparison of jailbreak attacks. We find that effective attacks exploit and abuse infrequent N-grams, either selecting N-grams absent from real-world text or rare ones, e.g. specific to code datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"jailbreaks",
"threat model",
"robustness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/99326ff1a8b845047774afb03c99f16f52bf1c9c.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/cff081116d1879896157bea64a71195c85d61b3e.zip"
},
"title": {
"value": "A Realistic Threat Model for Large Language Model Jailbreaks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1lB5ErmIY0 | Diverging Preferences: When do Annotators Disagree and do Models Know? | main | Active | RLHF;Pluralistic Alignment | foundation or frontier models, including LLMs | 5;5;5;6 | 4;4;3;5 | 3;2;2;3 | 3;2;2;3 | 2;3;3;2 | 5.25 | 4 | 2.5 | 2.5 | 2.5 | 0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Discussion on Limitations: What specific limitations of the proposed distributional reward model should be addressed in future research?\nComplexity of Technical Details: Which technical details were particularly complex, and how might they be simplified for better understanding?\nPractical Applicability: What are the potential real-world applications of the proposed approach, and how could its limitations affect these applications?\nOutdated Model Concerns: How does the findings with Llama-3-8B Instruction model impact recent research?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Identification of Problems: The proposed distributional reward model clearly identifies existing issues in the current methodologies.\nExperimental Evidence: Strong experimental evidence is provided to support the effectiveness of the proposed model.\nWell-Organized Structure: The paper has a well-organized structure, making it easy to follow.\nEffective Use of Visuals: Tables and figures are effectively utilized to present experimental results.\nContributions to Multi-Dimensional Alignment: The research offers a new methodology for addressing the problem of multi-dimensional alignment in LLMs through the distributional reward model."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper discusses a proposed distributional reward model aimed at addressing the issues of distinguishing between divided preferences and high-agreement preferences in reward modeling for language models (LLMs). It points out that standard reward modeling approaches, such as Bradley-Terry and MSE regression, fail to differentiate between these two types of preferences, leading to similar reward distributions and potential problems in multi-dimensional alignment when using Reinforcement Learning from Human Feedback (RLHF).\n\nThe authors outline two main objectives for their model: (1) identifying preferred responses and (2) detecting responses that may exhibit divided preferences. By achieving these objectives, the model aims to prevent the system from learning responses that reflect only a single user perspective. The authors argue that training this reward model is more cost-effective and efficient compared to obtaining multiple annotations for every data point.\n\nTo evaluate the model's performance, two metrics are introduced: Preference Accuracy, which assesses the model's ability to assign higher rewards to responses selected by human annotators, and Diverging ID AUROC, which measures the model's effectiveness in identifying divided preferences within response pairs.\n\nThe results, based on training and evaluation with the HelpSteer2 and Multipref datasets, indicate that the proposed distributional reward model performs effectively, consistently exceeding the baseline metrics for both Preference Accuracy and Diverging ID AUROC. This demonstrates that the proposed model can predict expected rewards while reflecting the controversy of responses as assessed by different annotators.\n\nIn the latter sections, the paper explores biases inherent in the evaluation of LLMs using the LLM-judge method, particularly when preferences are divided. 
It discusses how the LLM-judge’s assessment may unfairly penalize systems that reflect less popular opinions or that are trained with consistent policies in ambiguous scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Lack of Discussion on Limitations: There is insufficient discussion regarding the limitations and potential issues of the proposed method, particularly concerning the outdated model.\nComplex Technical Details: Some technical details are explained in a complex manner, which may hinder understanding for non-experts.\nNeed for Practical Applicability Discussion: The paper lacks a thorough discussion on the practical applicability and limitations of the proposed approach, which could enhance its relevance and usability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Unclear Mapping Explanation in the Mean-Var Reward Model (lines 319-323): The rationale for mapping the labels back to specific ranges is unclear—why is this mapping necessary? Would it not be possible to train directly on the distribution? Section 4 is quite dense, and additional explanation for this mapping would help clarify the distributional reward model.\n\n- Figure 1 disagreement analysis: The left plot for MultiPref seems to suggest a possible bias in the annotation interface, as there is a preference of annotators to prefer B annotations over A (non-symmetric matrix, skewed histogram, and darker areas in the upper-right corner). What is your explanation for this? The HelpSteer dataset does not seem to show a similar annotation behavior. I would like to hear your thought - it can be interesting to add this to the paper discussion. This also connects to my questions below on more information on the annotation setup. \n\n- Annotators of datasets: \"MultiPref has 10k preference pairs with four annotators, HelpSteer2 has 12K with 3-5 different annotators.\" Can you say more about the identity of the annotators, are the four in MultiPre the same individuals? If not, how many individual annotators are there in both datasets? How many annotations on average did each annotator? Do you release annotator IDs? Did you collect any annotator social information? \n\n- Refusal vs Refusal: Can you provide more detail on the original annotation tasks? For example, were annotators instructed to take a forced choice? Or was a \"I don't know\" option allowed?\n\n- Results in Table 4 LLMs-as-Judge: What are the scores in the table and what do they mean? Do they only compare to the majority preference (\"winning response\")? If so, I think it would be more interesting to compare to the human preference distribution. Thank you for clarifications. \n\n- What was the motivation of using a 8B LLama-instruct model? Were there restrictions to not use a larger model (70B?)? 
Would you expect similar findings with the larger model? Which exact Llama model was used? (As there exist by now several versions 3, 3.1, 3.2).\n\n- Will you release the code for the distributional preference models and the trained reward models?\n\nOverall, I like the paper a lot and I am willing to go up with my overall score. However, the results are dense and I have questions I would like to hear from the authors. I look forward to hear the answers to my questions above.\n\nTypos:\n- \"singe-value\" in several places"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Positive aspects of the paper:\n- Releasing unaggregated annotator labels is of increasing importance, as there is increasing evidence that modeling the aggregate leaves out important human preference information. This supports similar earlier calls to do so - see [Basile et al., 2021](https://aclanthology.org/2021.bppf-1.3/), [Prabhakaran et al., 2021](https://aclanthology.org/2021.law-1.14/), [Plank, 2022](https://aclanthology.org/2022.emnlp-main.731/). \n- Studying reasons for disagreement is similarly important, the derived taxonomy is insightful. It extends prior taxonomies by providing categorizations for LLMs for refusal behavior, which is novel. The taxonomy further supports prior findings on reasons for disagreement [Basile et al., 2021](https://aclanthology.org/2021.bppf-1.3/) and taxonomies of disagreement in NLP, which were developed by [Jiang and de Marneffe, 2021](https://aclanthology.org/2022.tacl-1.78/) for NLI, and extended to other tasks, for example, subjective language [Sandri et al., 2023](https://aclanthology.org/2023.eacl-main.178.pdf) and law applications [Xu et al., 2023](https://aclanthology.org/2023.emnlp-main.594.pdf).\n- Distributional rewards are timely. The paper presents a simple and concrete implementation. (The question remains whether code will be released upon publication).\n- The impact on diverging preferences on LLMs as judges is, to the best of my knowledge, novel. This is an important study showing that neglecting divergences propagates majority views and thus is in competition with pluralistic alignment (Sorensen et al., 2024)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies disagreement in preference datasets and provides annotator-level annotations of two existing human preference datasets, [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2) and [MultiPref](https://huggingface.co/datasets/allenai/multipref). The paper first derives a taxonomy of causes for disagreement (Table 1) on a sample of 100 items. Then, the authors train two separate standard reward models using the majority vote preferences (Bradley-Terry and MSE Regression) and find that on examples with diverging preferences the predictions of reward models are biased towards high-agreement preferences. To address this gap, a model with distributional rewards (Mean-Variance Reward Model, with KL) is presented which uses a KL-divergence loss. The results show that the KL-based distributional model outperforms a Mean-Variance baseline model and better aligns with human preferences. Finally, the paper presents experiments in LLMs-as-a-judge evaluation, finding that they promote majority preferences."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper's weaknesses are:\n\n- Evidence (lines 265-266): The claim that \"reward models predict differences in rewards that resemble high-agreement preferences, even when trained on all annotator labels\" is not convincingly supported. The scores for models trained on all labels vs. aggregated labels (All vs. Agg) are often similar. To substantiate this claim, the authors should extend Figure 2 and compare models trained on majority labels vs. all annotator labels on both datasets. Currently, Figure 2 only presents results for the model trained on aggregated labels and for a single dataset, illustrating that diverging preferences align with high-agreement items. For a stronger argument, similar plots should be included for both models and across datasets and discussed in the text.\n\n- Related Work: The field of disagreement in NLP has a substantial history, with early contributions such as de [de Marneffe et al., 2012](https://aclanthology.org/J12-2003/), [Poesio and Artstein, 2005](https://aclanthology.org/W05-0311/) and more recent surveys like [Uma et al., 2021](https://jair.org/index.php/jair/article/view/12752). This paper could be improved by citing more of this foundational literature, including key work on developing taxonomies and understanding the underlying reasons for disagreement (suggested references below). \n - Reasons for disagreement in NLP and computer visions: see [Uma et al., 2021](https://jair.org/index.php/jair/article/view/12752) and references therein. Moreover, see further references on calls to release unaggregated labels in first point in Strengths.\n - Taxonomies of disagreement: There exists seminal work by [Jiang and de Marneffe, 2021](https://aclanthology.org/2022.tacl-1.78/), I wonder whether this paper was inspired by their work? It was taken up by several others, see further references in second point in Strengths. \n\n\n- Table 1: Do the frequencies in the two datasets sum up to 1? 
Is this per subcategory, or what is the overall frequency for each of the four top categories on MP and HS2?\n\n- Code release is not mentioned. Releasing the code would make some study design choices clearer (like the mapping above) and enable better replication of the results in the paper.\n\n- The paper could have included more recent and larger language models. For example, results for the LLama model family over different scales would be interesting. I invite the authors to discuss any potential challenges or limitations in applying the method to larger model families, or to explain why you chose to focus on this specific model (llama 8b instruct)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. RMs and LLM-as-Judges are mostly integrated into training and evaluation pipelines as proxies for human preferences (they are rarely used alone). While the paper demonstrates that its proposed training methodologies improve RMs' ability to distinguish between instances with diverging preferences, it lacks discussion on the potential downstream impacts. Ultimately, can these new RMs create models better aligned with human preferences? Are they more effective evaluators for leaderboards that aim to reflect genuine human preferences?\n\n2. The paper lacks experimental evidence in defining the problem. While I agree with the importance of developing smaller, high-quality RMs, recent studies have shown that scaling up RMs yields better evaluators. Does the issue of failing to detect diverging preferences persist even with larger RMs? If the issue goes away with scaling, probably it might not be an issue soon when better and cheaper hardware becomes available.\n\n3. Section 5.1 highlights that LLM-as-Judges also struggle to identify instances with multiple preferences. However, could this issue stem from the prompting approach? The referenced LLMs rely on straightforward prompting techniques for judgments, which do not inherently account for multi-preference scenarios. Could more sophisticated prompting methods or multiple sampling iterations help address this limitation?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. While it is often intuitively accepted that RMs and LLM-as-Judges may exhibit biases and fail to reflect the diverse preferences of humans, this paper offers a systematic approach to identify and quantify these errors. Additionally, through a qualitative study, the paper provides a taxonomy to categorize the primary causes of preference divergence.\n\n2. The paper goes beyond just pointing out the problem to present two training methodologies to train models that better represent diverging preferences. The two methods aim to model the preference distribution instead of singular values and achieve a 10% performance improvement. \n\n3. The writing is clear and easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper leverages the MultiPref dataset and the HelpSteer2 dataset to study the behavior of RM/LLM-as-Judge in instances with diverging human preferences. They observe that traditional methods of training RMs (Bradley Terry or MSE-Regression) fail to make RMs that represent multi-preferences. Hence, they propose alternative methodologies, Mean-Variance Reward Models, and Classification-based Reward Models, to train RMs that learn the distribution of the responses instead of a singular value. The presented methodologies show about 10% improvement from past methods using AUROC as the metric."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please see the questions section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the questions above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The problem of annotator disagreement is an important one as the current model training neglects the inherent difference between the annotators which could lead to misalignment of the model. The pluralistic alignment of the reward model in RLHF has great potential. \n\n2. The author not only reveals the misalignment of the reward models but also proposes a new training objective for it to mitigate the problem. Experimental results show that the reward model trained with a new objective can better identify the disagreement in the data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the problem of diverging preference in human-labelled preference datasets.\nThey created a taxonomy of the disagreement sources and tested the RLHF reward model on disagreement data.\nThey showed that the reward model trained on majority vote would clearly prefer one of the responses when presented with examples with diverging preferences. \nThe author further proposed a new reward model by predicting the mean and variance of the distribution reward. \nThe proposed reward model achieves better performance in terms of distinguishing the diverging and non-diverging preference examples."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "One problem of the paper is that details about the implementation and motivation of experiment design are either missing or moved to the appendix.\nIt makes the paper hard to follow. \nFor example, it is not clear why the author split the level of disagreement by High-Agreement Prefs, High-Agreement Ties and so on.\n\n1. Why do High-Agreement Prefs require no rejection of the majority vote, but the High-Agreement Ties allow it? \n\n2. In lines 319-323, why is the mapping interval set like this? I believe the intervals could have a great influence on the reward model.\n\n3. The CDF estimation is an important detail for training and evaluating the reward model, which I think should be discussed in the main text.\n\n4. In line 348, I don't fully understand what you mean by \"use the predicted joint probability of annotators labelling the response as a 1 or 5\".\n\n5. In line 361, \"using smaller differences as a predictor\" is not informative. What do \"smaller differences\" mean exactly?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We examine diverging preferences in human-labeled preference datasets and their influences in reward modeling and LLM evaluations."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024diverging,\ntitle={Diverging Preferences: When do Annotators Disagree and do Models Know?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1lB5ErmIY0},\nnote={under review}\n}"
},
"abstract": {
"value": "We examine diverging preferences in human-labeled preference datasets. We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes---task underspecification, response style, refusals, and annotation errors. We find that the majority of disagreements are in opposition with standard reward modeling approaches, which are designed with the assumption that annotator disagreement is noise. We then explore how these findings impact two areas of LLM development: reward modeling and evaluation. In our experiments, we demonstrate how standard reward modeling methods, like the Bradley-Terry model, fail to differentiate whether a given preference judgment is the result of unanimous agreement among annotators or the majority opinion among diverging user preferences. We also find that these tendencies are also echoed by popular LM-as-Judge evaluation methods, which consistently identify a winning response in cases of diverging preferences. These findings highlight remaining challenges in LLM evaluations, which are greatly influenced by divisive features like response style, and in developing pluralistically aligned LLMs. To address these issues, we develop methods for identifying diverging preferences to mitigate their influence in evaluations and during LLM training."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"RLHF",
"Pluralistic Alignment"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2538658f71a247c473310eb73ac6535c7a0beb42.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Diverging Preferences: When do Annotators Disagree and do Models Know?"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1lFZusYFHq | How Transformers Implement Induction Heads: Approximation and Optimization Analysis | main | Active | Transformer;mechanisms;approximiation;training dynamics;abrupt transition | learning theory | 5;5;5;5;8 | 4;3;3;3;4 | 3;3;3;3;4 | 2;3;2;2;3 | 2;2;3;3;4 | 5.6 | 3.4 | 3.2 | 2.4 | 2.8 | 0.612372 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "A key strength of this work is its rigorous theoretical approach within the chosen framework. The results and proofs are clearly delivered and effectively presented. The authors provide a comprehensive investigation from both approximation and optimization perspectives, which might potentially deepen our understanding of transformer models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The reviewed paper explores how transformers implement ''induction heads'' to perform in-context learning (ICL) by analyzing a simplified two-layer transformer model. The analyses include both approximation and optimization parts. The approximation analysis examines transformer submodules, while the optimization analysis tracks phase transitions in training dynamics as transformers develop induction mechanisms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- This work provides an analysis of how transformers implement induction heads, approaching the problem from both approximation and optimization perspectives. The theoretical results appear rigorous; however, the model setup seems overly simplified. Specifically, the study is limited to a two-layer transformer model, with and without feed-forward networks (FFNs), a framework initially explored in a seminal paper [1] and subsequently developed by numerous follow-up studies. Given this extensive literature, the contribution here appears somewhat incremental, with limited novelty in the analytical approach and the techniques remaining relatively standard. Expanding the analysis to a more sophisticated and realistic setting, as seen in recent work [2], would significantly strengthen the contribution. Without this, the impact of the results may be constrained, and it is unclear if they meet the high standards for significance.\n\n- Additionally, given the simplified setup and use of synthetic toy examples, I have reservations about the generalizability of these findings for interpreting real-world transformers. I would suggest that the authors conduct extensive empirical experiments on widely-used models to validate the applicability and robustness of their theoretical results.\n\n- it would be valuable if the theoretical insights could yield practical implications. Specifically, can the approximation results inform new methods, or could the optimization insights benefit transformer training? If so, this would meaningfully enhance the contribution. Otherwise, the practical significance of the work may remain limited.\n\n[1] Elhage, Nelson, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell et al. \"A mathematical framework for transformer circuits.\" Transformer Circuits Thread 1, no. 1 (2021): 12.\n\n[2] Chen, Siyu, Heejune Sheen, Tianhao Wang, and Zhuoran Yang. 
\"Unveiling induction heads: Provable training dynamics and feature learning in transformers.\" arXiv preprint arXiv:2409.10559 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have the following questions for the authors:\n1. I find the statements of Theorems 4.3 and 4.4 vague. In particular, the way they are currently stated seem to imply that such results hold for _any_ number of heads $H$. In the proofs, however, it seems that $H$ cannot be arbitrary, and actually has to be large enough and possibly dependent on $n$, unless I misunderstood something. It would be helpful to clarify this further.\n2. In the gradient flow analysis of Section 5.1.3, you consider a layer-wise training paradigm in which you first train only the first-layer, and then you fix it to train the second one. I was wondering if the experimental results of Figure 2 are also obtained using this paradigm. I was wondering if this assumption is also insightful in practice, i.e., if when training the two layers at the same time, you could see experimentally that the first layer is learned before the second layer, or if in general the two layers are learned together in practice.\n3. Minor concern: in the main text you put some amount of emphasis on a novel Lyapunov function that is used in the proof, but then this function never appears in the text and is relegated at the end of the appendix. In the final version, I would either give more space to it in the main text, explaining why this function is important/novel, or put less emphasis on it."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is generally well written and the proofs seem correct to me. Even if the paper considers heavily simplified models, the theory on this topic is scarce and difficult, so any theoretical insight on the dynamics of these models is welcome."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper comprises two parts. In the first part, the authors show through representation results that simple transformer architectures with two layers and several heads are able to correctly represent different variations of induction head mechanisms. In the second part, the authors prove through gradient flow that a simplified two-layer architecture can learn a mixed target function comprising a 4-gram component and a vanilla in-context 2-gram component."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I think that the major drawback of the paper is that it mostly lacks experimental results that corroborate the theory. In particular, it would be interesting to see if the constructions used in Theorems 4.3 and 4.4 on in-context induction heads are actually learned by the considered transformer models. Additionally, I found the statements of some theorems to be rather vague (see the questions below)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "In Line 307, the authors mentioned that “the use of general similarity g enables the model to recognize not only synonymous but also antonymic semantics, thereby improving both the accuracy and diversity of in-context retrievals.” Why do we need to use general similarity g to recognize antonymic semantics? Why does this recognition improve both the accuracy and diversity of in-context retrievals?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "(1)The paper provides a very sound and complete theoretical analysis of the transformer mechanisms behind induction heads, which are often used to explain the important emergent ability of in-context learning for LLMs. \n (2) The paper is well-written and well organized. In order to the motivate the work, the paper gives a very clear streamline of the related works and formulate the research objectives very clearly. Also they provide very clear definitions of the key notions in the paper. I have not checked all the technical proofs but I believe that they are correct. The paper focuses on the main objectives and makes the contributions explicit and discusses different possible scenarios."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the theoretical analysis of the transformer mechanisms behind the induction heads from two perspectives: one is approximation theory and the other optimization. In the approximation part, the authors show how to use transformers to approximate the induction head and the generalized induction heads. In the optimization, they investigates the abrupt transition from n-gram to induction heads."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1)Since induction head is used to explain ICL, it might be more interesting to explain how the theory in this paper helps in-context learning(ICL), especially empirical results for ICL. \n(2) Although the paper is meant to be theoretical, it would be helpful to provide some empirical experiments to support the theoretical analysis."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In equation 4 and the following lines, I think the softmax notation is used improperly. Currently the input to the softmax function is a scalar. Also it's not clear what exactly $\\{(x_s, x_L)\\}$ means (maybe there's a typo?). The same problem appears again in equation 6, with the additional complexity that the dimensions of the matrix inside the softmax (and dimensions $X_{L−n+2:L}$) are not specified. I think in general we have dimension mismatch problems here based on the current notation. I would appreciate clarification of this issue. \n- The final value of $C_{q,n}$ is not specific clearly in the proof. I think this value is actually quite important as it determines the minimum number of heads required (I think this actually could be included in the main text.)\n- Can you elaborate on the meaning of the notation $I(z=x_{L-2})$ (and similar ones) on page 7?\n- In Figure 2, experiments setup is not mentioned. Also x-axis is not specified. More importantly, we can see a drop in the loss of IH term in the first phase of the training, this is in contrast to the theoretical results. Can you elaborate please?\n- In Theorem 5, could you explain in what variables the asymptotics are?\n- I think the writing of the insights around lines 243-250 can be improved. \n- The paper focuses on a specific format of relative positional embedding. I think the paper could be more upfront on this and state this more clearly throughout the paper. \n- Line 451, \"without loss of ambiguity\" seems to be a typo. \n- Regarding the second weakness, I'd appreciate any evidence/explanation that the proven optimization dynamics would also show up in more natural setting."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The study of mechanisms behind attention heads is of both theoretical and practical interest. It's nice to see how different components can help Transformers implement induction heads of varying complexities. \n- Generally the insights and intuitions provided into network's presentation and optimization dynamics are informative and helpful."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies Transformers expressing and learning induction heads. On the expression side, three settings, two-layer single-head Transformer without FFN, two-layer multi-head Transformer without FFN, and two-layer multi-head Transformer with FFN are considered are shown to be able to express induction head based on a single token, induction head based on several tokens, and induction head based on several tokens with arbitrary distance function. \nOn the optimization side, the paper studies the training dynamics for learning a simple task which requires induction heads in a simplified training setting."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The expressivity results are just sufficiency conditions. There are no necessity results presented in the paper. E.g., it's not shown that 2-layer single-head Transformers cannot implement general induction head based on several head (although seems to be true and intuitive). \n- The optimization results correspond to a very simplified setting from the task side (working with a Gaussian distribution instead of discrete token distribution), model parametrization side, and optimization side (layer-wise gradient flow). I believe experiments supporting the claims in more realistic settings (e.g., more natural Transformer + GD + discrete token prediction task) could have been provided. Also the paper should be more upfront with the limitation of their optimization results throughout the paper in my opinion.\n- The writing can be improved significantly (see the questions for more details)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Figure 1, the caption doesn’t sufficiently explain what is seen in the figure. E.g., that highlights correspond to tokens with high attention weights for the induction head (I assume).\n2. Following Equation 4, it would be good to highlight that this is not “just” the standard attention head because you have x_{s-1} instead of x_s. \n3. Can you comment on the relation between your results and Bietti (2023) equation 7, which also shows a two layer implementation? \n4. Your induction head averages over all previous appearances of the context. However, it isn’t clear that this is indeed the behavior of induction heads, or that it is even desirable. Wouldn’t we want to output, say, the most common element, or some encoding of the distribution over the next element?\n5. The dependence on parameter q in Theorem 4.3 is confusing because we don’t see the dependence of C_{n,q} on q. Can you present a bound that optimizes over q? \n6. It seems like the dimension of the transformer in Theorem 4.3 needs to scale with n, “the induction length”. This seems rather undesirable, and it is not clear that it is necessary. Can you provide corresponding lower bounds, or provide empirical support for this need?\n7. Typo: “the loss convergences”\n8. In the definition of f*_{G4}, shouldn't it be L-2 (not T-2)?\n9. It’s a bit confusing that in this problem you have X1,..,XL be one dimensional, but X_{L+1} is two dimensional, and you then map X1,...,XL to two dimensions. WOuld have been better to just say it’s all in two dimensions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Since induction heads are a key capability of transformers, it is certainly interesting to understand how they can be implemented and learned. There is some prior work on this, and the current work adds to that, especially in terms of analyzing optimization dynamics (though see points above)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies theoretical aspects of induction heads, which have been argued to play a key role in the skills that transformers demonstrate, including in-context learning.\n\nThe first set of results has to do with expressivity. Specifically, they show upper bounds on the \nsize of transformers needed to implement variants of induction heads. The result here is not very surprising, as it uses basic transformer components to push tokens several steps ahead to facilitate the induction. It also seems to require increasing the dimension of the transformer as longer induction memory is requested, and again this seems intuitively natural, since this is what’s required for pushing n tokens forward in time, so they can be used in the induction. \n\nThe second part asks about the learning process of induction heads. Towards this end, it considers a data generation mechanism that has two components: an n-gram and an induction head. It shows that the learning process goes through several stages where different components are learned. This part is potentially interesting, but I think it could benefit from more work before publication. Some points are:\na. I like the idea that there’s a type of inductive bias towards n-grams (also appearing in Bietti, 2023), but it would be more interesting to see if this is a true inductive bias if you had a setting where both n-gram and induction fit the data and learning chooses one over the other. In your current setup they both must be learned, because the output contains both, so I’m not sure what we learn from the fact that one is learned before the other, but eventually they are all learned.\nb. If I understand correctly, the model has six learnable parameters. It’s possible that some observations (eg single fixed point) are due to this simplification. I would have liked to see at least simulations that show which phenomena appear in larger models.\nc. 
Generally, I am missing more explanation and intuition why this particular model was chosen, and what are the implications for learning in real models.\nd. I am missing more discussion of the relation to Bietti, who also propose some theoretical analysis of the learning dynamics. \ne. Please also discuss relation to https://arxiv.org/abs/2402.11004"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "As mentioned above, the expressiveness part is somewhat unsurprising. The dynamics is potentially interesting, but small scale in terms of parameters optimized, and the choice of model and lack of clear implications are also a concern."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024how,\ntitle={How Transformers Implement Induction Heads: Approximation and Optimization Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1lFZusYFHq},\nnote={under review}\n}"
},
"abstract": {
"value": "Transformers exhibit exceptional in-context learning capabilities, yet the theoretical understanding of the underlying mechanisms remain limited.\nA recent work (Elhage et al., 2021) identified a \"rich\" in-context mechanism known as induction head, contrasting with \"lazy\" $n$-gram models that overlook long-range dependencies.\nIn this work, we provide both approximation and optimization analyses of how transformers implement induction heads.\nIn the approximation analysis, we formalize both standard and generalized induction head mechanisms, and examine whether two-layer single- or multi-head transformers can efficiently implement them, with an emphasis on the distinct role of each transformer submodule.\nFor the optimization analysis, we study the training dynamics on a synthetic mixed target, composed of a 4-gram and an in-context 2-gram component. This setting enables us to precisely characterize the entire training process and uncover an *abrupt transition* from lazy (4-gram) to rich (induction head) mechanisms as training progresses."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Transformer",
"mechanisms",
"approximiation",
"training dynamics",
"abrupt transition"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c89bd0e0356d31f49f91e755d23688d2693e93fa.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "How Transformers Implement Induction Heads: Approximation and Optimization Analysis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1mMjZvEhwH | POMDIFFUSER: LONG-MEMORY MEETS LONG- PLANNING FOR POMDPS | main | Active | Reinforcement learning;Partial observability;Long memory;Planning | reinforcement learning | 3;3;3;3;3;6 | 3;4;3;2;3;4 | 3;2;2;2;3;3 | 1;1;2;1;3;2 | 2;2;2;1;2;3 | 3.5 | 3.166667 | 2.5 | 1.666667 | 2 | 0.542326 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Presentation\n==========\n- Abstract\n - Needs a few iterations to improve focus.\n - Didn't seem relevant to mention how humans use long-term memory or meta-learning. It seems there was no further elaboration on those themes later in the paper.\n - Both \"Diffusers\" and \"diffusion-based planning models\" are used. Prefer the latter.\n - It's not clear what \"conventional Diffuser models\" refer to, and the claim that they \"often memorize specific environments\" was not justified in the main text (correct me if I'm wrong - L255 was relevant, but doesn't discuss this specific claim). Is this claim necessary for the abstract?\n - Last two sentences seem to trail off rather than stating the main contributions clearly.\n- S1 - Introduction\n - First sentence is a bit problematic as a very broad statement. Please consider revising.\n - The notions of \"effectively\" and \"memorize\" were not defined.\n - Last sentence in 1st paragraph says \"leveraging past experiences\" which is more general than \"memorize\", so prefer the former.\n - L74: The wording here is confusing \"performs well in tasks requiring **complex** reasoning .. struggled with more **complex** planning tasks\". Please rewrite for clarity.\n- S3 - Memorize to plan\n - Recommend to lead with an introductory sentence. The first line in S3.1 seems suitable.\n - L153: writing gets a bit rough. Please rewrite.\n - Please surface sparse rewards in the introduction as the main focus; it was only mentioned in passing on L036 vs L161.\n - L185: please explain how truncating the trajectory is performed given the sparse reward situation.\n - L189: please introduce homogeneous vs heterogeneous memory architectures. The current writing assumes the reader is already familiar with those notions. 
It would help to also cite examples of each approach.\n - L208: where is $\\beta$?\n - L209-210: This seems more like a footnote since Superimposed-MNIST is yet to be introduced.\n - L222: please qualify and justify the claim that using adjacent frames only in POMDPs is unreliable.\n- S5 - Experiments\n - L352: Is there an appendix with this ablation study?\n - Some tables and/or figures were not referenced in the main text. Please fix.\n\nNitpicking\n========\n- L159-160 + L253 and elsewhere: please use the correct citation style.\n- L220: What is Tedrake? Is this a misformatted citation?\n- L242: Missing citation"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Proposes a heterogeneous approach to modeling memory in diffusion-based planners for POMDPs.\n- Proposes three POMDPs to evaluate the proposed approach: Superimposed-MNIST, 2D Memory Maze (MM2d), and Blind Color Matching (BCM)\n- Compares different configurations of the proposed design against baselines along with some ablations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors extend diffusion-based planning to POMDPs with sparse rewards using memory. A heterogeneous approach based on cross-attention is adopted to incorporate memory, enabling an $O(L \\log L + H^2)$ complexity instead of $O(L^2 + H^2)$ where $L$ is the memory length and $H$ is the planning horizon. More efficiency is achieved using inverse dynamics and latent-level planning for long horizons. Three POMDPs are proposed to test the proposal."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The writing is rushed and a bit disconnected.\n- Contributions mainly take the form of the empirical results presented, comparing different configurations of known techniques, without new theoretical/algorithmic insights.\n- Given the status of the writing, it's difficult to appreciate the empirical results without significant effort - I'm reading the experiments section without fully understanding the methodology and I have to keep going back to the (rushed) prior sections.\n- I seems unlikely those serious issues with the presentation can be addressed without a major revision."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Can the author(s) provide theoretical analysis of how memory length affects planning horizon in your framework?\n- What are the convergence guarantees for POMDiffuser, especially when dealing with very long sequences? -- this is something that I am interested in learning more about. \n- (minor) How does the belief state representation quality degrade over longer horizons?\n- There seems to be a very big performance gap in Blind Color Matching (0.6956 vs 0.0187) between SSM and Transformer variants is striking. Could you provide an analysis of why this occurs? How does this change with different Transformer architectures/configurations?\n- (minor) Have you explored any techniques to reduce computational complexity while maintaining performance?\n- (minor) This is an interesting framework, how might this be extended to multi-agent POMDP settings?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Addressing long-term memory and planning in POMDPs is a significant challenge in reinforcement learning and decision-making.\n- The proposal of a new benchmark suite for evaluating diffusion models in POMDPs was very interesting and could be valuable to the research community.\n- Investigating different memory architectures (RNNs, Transformers, SSMs) provides insights into their trade-offs in the context of diffusion planning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces POMDiffuser, a diffusion-based planning framework designed for POMDPs. The aim was to extend diffusion models to handle long-term memory and long-horizon planning in POMDP settings. They incorporate various (belief) encoding architectures, including RNNs, Transformers, and Structured State Space Models, and evaluate their performance on newly proposed benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper lacks a solid theoretical analysis explaining why diffusion models are suitable for long-memory and long-planning tasks in POMDPs. There is no discussion on the convergence properties, limitations, or potential pitfalls of applying diffusion models in this context. As a reader, I was hoping to see it atleast in the appendix section of the paper. \n- The paper acknowledges that the proposed method struggles with more complex tasks but does not delve into why this is the case. I would suggest adding a section/few lines on how it might be addressed in future work.\n- The experiments seem to be very simplistic. \n\n(Follow up weaknesses in the Questions section)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The problem of offline RL is difficult and important, and should be of relevance to a significant part of the ICLR community.\nAdditionally, given the success of diffusion models (including in policy generation), it make sense to further investigate their capabilities, limitation, and applicability.\nEspecially progress in tackling partially observable environments is important, as they are ubiquitous in the real-world yet avoided due to their complexity, and novel generative (sequential) approaches look like a reasonable approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work considers offline RL in partially observable environments using state-space models as history representations and diffusion models as policy model.\nThe result is trained in supervised manner (behavior cloning).\n\nIt introduces three (new) tasks - based on MNIST, a grid problem, and a pick-and-place task - and provide an ablation study on their model.\nThe ablation is against transformer and RNN-based history representations, as opposed to state-space model.\n\nTo my best understanding, there are no theoretical contributions claimed made in this paper."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper is difficult to understand and the contributions are not quite fleshed out.\n\nAs someone who is not particularly familiar with the background (in particular, the \"Diffuser\", I suppose?), it is difficult for me to infer exactly what the contribution is and how it works.\nIn particular, the text is currently imprecise both in English as well as math.\nExamples include:\n- The proposed method to \"model memory\" is explained as \"through cross-attention computation during the denoising process\", and otherwise does not seem to give any details.\n- It is claimed that, by \"separating memory and planning\", the complexity reduces from one O notation to the other, but unclear where these come from.\n- The state-space is defined as transforming an input x in R^{T x D} to output y in R^{T x D} (where x and y have the same size) but it is not quite ever really clear what x and y would be for the POMDPDiffuser (most likely due to lack of my background).\n\nThis makes me believe (perhaps wrongly) that the proposed method is a combination of supervised learning of state-space models to represent histories and diffusion models to learn policies for these histories from decision data.\nThis, without additional contributions - which may be there but not understood - seem to reduce to behavior cloning on sequences, which is somewhat lacking in novelty.\n\nLastly, the experimental evaluation does not seem to be very convincing.\nIn particular, there seem to be no baselines, other than ablations on their own approach, and I must assume offline RL methods for POMDPs exist (it was not claimed otherwise in the paper)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could the author provide clearer explanations on how their framework differs from SSMs, RNNs, Transformers, and Diffusers beyond simply incorporating them into different parts of the framework?\n2. Could the author compare their proposed benchmark to existing MNIST and Maze environments to better illustrate how it differs?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper proposes a method to extend the diffuser planner to POMDPs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method in terms of achieving both long-memory and long-planning capabilities from past histories in POMDP, which uses diffuser models for memory utilization and long-term planning in complex environments. This method adopted Diffuser-based models which addressed the autoregressive planning problems existing in previous models like RNN, Transformers, and SSMs, it also improved over former diffuser models by extending its use to POMDPs. The authors also proposed a new benchmark suite to evaluate long-memory and long-planning capabilities within the Diffusion framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This paper claims to improve upon SSMs, RNNs, and Diffusers; however, it primarily integrates these models by using an SSM as the memory encoder and a diffusion model for planning with the memory.\n2. It does not address the issues associated with transformers as stated in the introduction. The model still relies on a transformer encoder for action selection, predicting the full action sequence from past trajectories.\n3. The new evaluation benchmark does not appear to provide enough innovation to be considered a genuinely new benchmark."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In the 2D Memory Maze experiments, what is the implementation difference between Diffuser and POMDPDiffuser? As stated earlier in the paper, Diffuser was only designed for MDP settings."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper addresses an interesting and relevant problem in the field of model-based decision-making\n- It is mainly well-written, and most parts are easy to follow\n- It introduces some new benchmarks that could be interesting for the research community"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a diffusion model approach to long-horizon planning in POMDPs, called POMDPDiffusor, by extending existing diffusion models with memory mechanisms, such as RNNs, Transformers, and State Space Models (SSMs). In addition, some benchmarks are proposed, such as Superimposed MNIST (to evaluate the memorization capabilities), 2D Memory Maze (to evaluate navigation in a discrete task), and Blind Color Matching (a robotics task, where blocks need to be placed onto floors with matching colors under partial observability and sparse rewards). The approach is evaluated against itself in these domains."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Novelty**\n\nThe paper addresses the long-term horizon problem by applying known sequence processing techniques (RNNs, Transformers, SSMs) to known decision-making models, i.e., Diffusors. The paper often refers to the computational complexity of these memory techniques, but these are well-known facts, e.g., RNNs are sequential during training but fast during inference, while Transformers are parallelizable but scale quadratically during inference. Thus, I consider the main contribution as an application of known techniques rather than technical innovation.\n\nThe introduced benchmarks seem interesting but their evaluation lacks a comparison with other approaches, which is necessary to assess their suitability for testing long-horizon planning, i.e., Are they sufficiently difficult? Do other approaches really struggle on these new domains, as stated in the paper? See Significance below.\n\n **Clarity**\n\nThe abstract teases generalization as a problem of existing approaches. However, the main challenges addressed in the paper are only focused on long-horizon planning and computational complexity (during training and inference). \n\n **Significance**\n\nThe experimental evaluation of the paper is a pure self-evaluation with POMDPDiffusors without further context.\n\nThe paper does a lot of conceptual comparison with prior works, such as world models and alternative diffusion approaches, such as Diffusion Forcing. However, none of these approaches is compared within the experimental evaluation, which leaves many questions open to assess the significance of the work:\n1. How does the POMDPDiffusor fare in traditional POMDP benchmarks like Pocman, Battleship, etc., compared with prior approaches?\n2. Do prior approaches really scale that badly, as stated in the paper? We need to see the numbers - not only the words\n3. Do prior approaches really struggle in the new benchmark domains, i.e., are the benchmark domains really justified? 
Again, we need the numbers - not only the words.\n\nWithout any further evidence regarding these questions, the true advancement of the work remains unclear.\n\n**Minor**\n\n- At the end of page for a UNet model is refered to out of nowhere which has not been mentioned and explained before.\n- In Related work a reference is missing in \"Efficient World Models\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "While I believe that this paper needs to be *substantially* reworked in order to live to its full potential, a non-exhaustive list of questions that would perhaps clarify some of my understanding are:\n\n- what are the agent dynamics?\n\n- what is the agent task (i.e., what is the formal definition of \"long-term planning\")?\n\n- what is the agent's knowledge about its environment?\n\n- what are the exact dynamics/tasks/environments in the validation tasks?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- planning in partially observable environments is a difficult challenge which generally makes sense to answer using ML/AI methods\n\n- the hyperparameters for each task are acknowledged and their values explicitly written"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper seeks to combine a diffusion approach to planning with a partially observable environment embodied in a POMDP. The new proposed algorithm, POMDiffuser, explicitly encodes memory data into the planner, and is tested on several planning tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In short, I neither understand the problem framework nor details of the solution method. I am not understanding the validation tasks either. I cannot judge the contribution of this paper nor its possible drawbacks. Specific comments are below:\n\n- the paper explicitly positions itself as operating on a POMDP environment, even mentioning it in the abstract. But POMDPs are neither defined nor ever seemingly explicitly used in the paper. The only dynamical model that is introduced is a POMDP only in a very trivial sense: if nothing else, both its transitions and observations are deterministic\n\n- on the topic of the presented dynamical system, the paper calls it a \"Structured State Space Model\" and says that these are \"sequence-to-sequence models well-suited for tasks that require significant memory retention and are particularly effective at processing long sequences due to their computational efficiency\". But this model seems to be just a standard Linear Time-Varying (LTV) control system! Control design for LTV systems is challenging, but has certainly been explored since the 1950s or earlier. In general, in the context of agent planning, if these are agent dynamics, I don't see what makes them particularly \"well-suited\" for any task. After all, the model should not depend on the task: the model is whatever represents the agent's dynamics. If this is simply a learning model, then the agent's dynamics are not ever defined.\n\n- there seems to be a lack of awareness of classical control (let alone planning on POMDPs -- a line of work which is never truly mentioned); the authors call usual linear control systems \"time-invariant SSMs\" and speak about recent studies that explore the conditioning of system matrices on the input sequence. Again, this is not recent work -- stability of linear systems is a classical introductory control topic\n\n- I do not understand the formal problem that this paper is solving. 
It does not seem to be ever defined and is mostly just described as \"long-term planning\". Some questions that come to my mind are: Are there rewards? Is there a reachability task? Does the agent move? What are its dynamics?\n\n- I also do not know what the agent knows about its environment. If its dynamics are just linear *and known*, I don't understand why any learning is necessary: optimal control laws for reward maximization (at least with a particular reward structure) can possibly be derived analytically.\n\n- the details of the proposed solution approach are murky to me. Let me just give one example. Section 3.3 says that \"unlike in MDPs, predicting actions solely from adjacent frames in POMDPs can be unreliable\". Doing so is not in fact unreliable, it is theoretically impossible: both in general MDPs and POMDPs, there is no unique mapping from a transition (s,s') to an action a that might have caused this transition. To address this issue (and I am not sure what it means to address it, given that the problem simply does not have a solution), the paper says it will use \"Transformer encoders\". Why? How does that work?\n\n- I do not understand the tasks, which are never truly described (the paper does not even provide a full name for MNIST) -- agent motion, agent knowledge, possible \"long-term\" rewards, etc. never seem to be defined. The paper says that \"our model extends diffusion-based planning models into the realm of meta-learning\", but this topic is never discussed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024pomdiffuser,\ntitle={{POMDIFFUSER}: {LONG}-{MEMORY} {MEETS} {LONG}- {PLANNING} {FOR} {POMDPS}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1mMjZvEhwH},\nnote={under review}\n}"
},
"abstract": {
"value": "Effective long-term planning in complex environments benefits from not only leveraging immediate information but also utilizing past experiences. Drawing inspiration from how humans use long-term memory in decision-making, we propose the POMDiffuser framework, an approach to planning in partially observable environments. While conventional Diffuser models often memorize specific environments, POMDiffuser explores the potential of learning to plan from memory, with the aim of generalizing to new scenarios. By incorporating a memory mechanism in POMDP scenarios, our model extends diffusion-based planning models into the realm of meta-learning with carefully designed tasks that require the diffusion planner to demonstrate both long-term planning and memory utilization. We investigated existing diffusion-based models, focusing on their applicability, computational efficiency, and performance trade-offs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Reinforcement learning",
"Partial observability",
"Long memory",
"Planning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c2ec145af108f1b6153b119034f3e7ef69394cfc.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "POMDIFFUSER: LONG-MEMORY MEETS LONG- PLANNING FOR POMDPS"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1mXufFuv95 | Learning Diverse Attacks on Large Language Models for Robust Red-Teaming and Safety Tuning | main | Active | red-teaming;LLM;diversity | foundation or frontier models, including LLMs | 3;5;8;8;8 | 3;4;2;4;4 | 2;3;3;3;4 | 1;2;4;2;3 | 3;3;4;3;4 | 6.4 | 3.4 | 3 | 2.4 | 3.4 | 0.024282 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- Where does the diversity come from (besides the ablation on $\\beta$ values)? $\\gamma$? sampling temperature $\\tau$? replay buffer / \"off-policy-ness\"? The entropy/diversity of $p_{ref}$? The mix between replay buffer and online sampling in each iteration? Given that this is a main focus of the paper I would have been excited about more ablations / investigations in this direction.\n- The results for REINFORCE, ICL, SFT seem mostly consistent across the different red-teamed models. PPO+Novelty results are less consistent and so is GFlowNet - is this due to hyper-parameter sensitivity? Or variance between runs? GFlowNet+MLE looks more consistently strong, so the core results of the paper are not impacted by this.\n- How strong was the toxicity classifier, how often did the method \"hack\" shortcomings of the classifier rather than finding actual toxic outputs? This was hinted at in the discussion of setting $\\beta$ too low (high weight of R1), but wondering if there are some more concrete results on this?\n- What are sensible values for $r1, r2$?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- Clearly strong results and exciting possibilities for improving automated red-teaming and finding diverse attacks with strong motivations as one of the core challenges.\n- The paper is well written and easy to follow. Experiments and ablations are documented well and replication of the results seems straightforward.\n- I appreciate comparing to multiple RL-based baselines, as well as red-teaming multiple models. This gives confidence that the results will hold up in a wider range of settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper applies GFlowNet to the problem of automated red-teaming, achieving a favorable balance between attack success rate and attack diversity. A model is trained to sample attacks with probability proportionally to their reward, followed by a cheaper second stage of training a smoother version of the model on rollouts of the trained model. The method achieves high levels of diversity combined with strong attack success rates, and, when including the smoothing step, consistently strong results across different hyper-parameters and different target models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Given the focus on learning diverse attacks, there could have been more in-depth ablations and experiments to investigate which parts of the method most strongly influence diversity (also see Questions section).\n- It would be nice to also plot error bars, at least on a version in the appendix. It was not clear to me if the various plots are also based on multiple random seeds (as table 2 is)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- What are the models you evaluating?\n- Have you considered evaluating the method against more methods?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- It’s good to have a method that produces diverse and effective red-teaming attacks\n- Prompt injection is tricky and it’d be good to have a method to red-team for it."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a two step automatic red-teaming process to produce effective and diverse attacks. In particular, this is used for automated red-teaming of jailbreaks and injection prompts.\nThe first step consists of generating a diverse set of instructions and criteria both from data and from using a rule-based reward.\nIn the second step an LLM red-teamer is trained using multi-step reinforcement learning on the instructions and criteria collected at step 1. The reward includes attack success, similarity and a length penalty.\nThe red-teaming method is tested one state-of-the-art model and one small model (that is not mentioned in the text)"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Some typos throughout the text\n- The section about AutoRBR should include more technical details. It’s not clear what is the role of the rule-based reward for the first step of the method\n- The baselines should include other red-teaming methods, not just mainly variations of the proposed method\n- The method is evaluated on two models that are not mentioned because of concerns about double blind reviews, but it’s not clear why\n- Plots in figure 4 and 5 are a bit small and are not clear. Which model is scored in these plots? What do the “crosses” represent?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No ethics concerns"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I'm curious to know whether the method scales with stronger base model for the attacker model."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method could sample diverse prompts and appear more efficient than the baseline methods. \n2. The main claims are well-supported by the experiments.\n3. The writing is clear and the paper is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a two-stage approach combining GFlowNet fine-tuning with MLE smoothing to automatically generate diverse and effective adversarial prompts for testing language model safety, demonstrating good performance over a selection of baseline red teaming methods, balancing attack success rate and prompt diversity while enabling reasonable transfer to new target models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There's a lack of evaluation against stronger attacks and defenses. The paper did not consider many of the non-optimization-based gray /black box attacks, which might perform better at lower computation budget. The paper also did not consider robustified versions of the models such as models that underwent circuit breaking or (latent) adversarial training.\n2. It's unclear how to adapt this for targeted attack against specific harmful requests and how well that would work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "On linea 305-309, it is not clear to me what dataset is being used. Can you be more explicit if you are adopting the datasets for all methods listed on these lines? Or do you use a different dataset for your fine tuning?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "On a standardized dataset and with standardized evaluations, and within the class of reinforcement learning methods that train an attacker model to produce a single adversarial prompt, this paper proposes a method that does achieve better diversity and stronger attack success rate. It is important for the community to be aware that this reinforcement learning method can produce attacker models with stronger adversarial “power” and diversity. The discussion of the adjustments needed to the GFlowNet method is also important."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies methods of training a language model to generate adversarial prompts that, when provided to more standard language models (such as GPT, Gemma, or Llama), produce responses deemed to be violating by a safety classifier. The main contribution of the paper is to apply the GFlowNet reinforcement learning method for this fine tuning of the attacker model. The paper produces attacker models that generate prompts of better diversity and higher attack success rate than other methods that take the same approach to red teaming."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I would say that this paper suffers from weaknesses that the whole field of automated jailbreaking is currently subject to. I do not consider this to be a reason for rejection but I also think it is important to record this critique for future iterations of automated red teaming work.\nTo be more precise, it is not clear to me that the majority of methods in this field – this work included – discover ways to elicit meaningfully harmful responses from models. It appears to me that the responses provided in the Appendix are all generally informational. To put this another way, this paper – along with most works who take AdvBench off the shelf as the goal set and slap some classifier on the responses – sidestep a robust definition of what is and is not considered harmful. This is not necessarily a problem of the work itself and rather a shortcoming of the yardsticks (harmful prompt datasets and classifiers) that have been widely adopted. \n\nHowever, what is important for the authors of this work, is that the end result of this leads to methods that generate adversarial prompts that likely exploit the model’s willingness to be generically helpful. In particular, the prompts listed in tables B.5 and B.6 are themselves very generic. For example, it is not clear why or how “records in the financial database” are being manipulated. Is this necessarily some harmful or unauthorized manipulation? The model response itself assumes that you would be doing this with a properly authorized user. This is likely because the prompt leaves enough vagueness in it to be interpreted as asking for something that is perfectly legal and acceptable (help with interacting with a database). \n\nThus, I believe methods in this space in the future should also consider specificity of the harmful request as another axis of desirable properties, in addition to diversity and attack success rate judged by a classifier. 
So instead of ending up with prompts that dance around a vaguely specified line, methods should a) be explicit about the line and b) make sure that their attacks clearly cross it. It would be interesting if GFlowNet adversarial methods can help elicit specific and truly harmful responses from language models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Could you please elaborate on how you expect your method to fare against current SOTA guardrails and the challenges you see in overcoming those with attacks using your method?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The method is well motivated, presented in an easy to understand way, and backed by a rigorous experimental setup. Especially the performance in the transfer setting is impressive, as most other methods completely fail at this task. This paper advances the state of the art in a significant way and addresses a crucial problem in AI security."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces GFlowNet fine-tuning plus a follow-up smoothing phase for attack generation in LLM red teaming. Their approach overcomes the typical problems of lacking diversity, effectiveness and model collapse that arise in RL-based automated red teaming. The authors not only show the effectiveness of their method for red teaming (toxicity focus) but also that their method can be used to generate highly effective fine-tuning data for safety tuning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Experiments on the performance against LLM guardrails would have been of interest, as real-life deployments will always include input and output guardrail models. Given the strong performance of the method in transfer settings, this could also prove to be another potential strongpoint of this method."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Generating diverse, effective, and transferable prompts for red-teaming black-box large language models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Diverse Attacks on Large Language Models for Robust Red-Teaming and Safety Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1mXufFuv95},\nnote={under review}\n}"
},
"abstract": {
"value": "Red-teaming, or identifying prompts that elicit harmful responses, is a critical step in ensuring the safe and responsible deployment of large language models (LLMs). Developing effective protection against many modes of attack prompts requires discovering diverse attacks. Automated red-teaming typically uses reinforcement learning to fine-tune an attacker language model to generate prompts that elicit undesirable responses from a target LLM, as measured, for example, by an auxiliary toxicity classifier. We show that even with explicit regularization to favor novelty and diversity, existing approaches suffer from mode collapse or fail to generate effective attacks. As a flexible and probabilistically principled alternative, we propose to use GFlowNet fine-tuning, followed by a secondary smoothing phase, to train the attacker model to generate *diverse* and *effective* attack prompts. We find that the attacks generated by our method are effective against a wide range of target LLMs, both with and without safety tuning, and transfer well between target LLMs. Finally, we demonstrate that models safety-tuned using a dataset of red-teaming prompts generated by our method are robust to attacks from other RL-based red-teaming approaches."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"red-teaming",
"LLM",
"diversity"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a885e83ff10f0c57dbc3a397450915954e78dcb9.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/eaaf23af6fac0703db7d5de23ca74e0e61b59798.zip"
},
"title": {
"value": "Learning Diverse Attacks on Large Language Models for Robust Red-Teaming and Safety Tuning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1nHQRsb3Ze | Auxiliary Classifiers Improve Stability and Efficiency in Continual Learning | main | Active | continual learning;class incremental learning;auxiliary classifiers | transfer learning, meta learning, and lifelong learning | 3;5;5 | 4;5;3 | 2;2;3 | 2;3;2 | 3;3;3 | 4.333333 | 4 | 2.333333 | 2.333333 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Figure 1 is not clear, the colors blend together. In general few figures need improvement.\n- Can you explain LP analysis? The classifiers at each layer are trained after the whole network is trained on all tasks and frozen?\n- Line 187, is this claim correct? There are no analysis for longer tasks (more than 10)\n- Can we visualize a pattern of which classifiers are being used? With multiple ACs, how is the final classifier’s predictive power affected? Could this architecture reduce overall network plasticity?\n- Line 471 - The lack of a clear impact from varying AC numbers and positioning is surprising. This makes it difficult to form a clear intuition about the impact. Thoughts on this ablation?\n- While replay and regularization methods are considered in results, parameter isolation methods such as PNN.. are not considered. Also, such as DER ++ (logit replay) are not considered?\n- Line 283 - was any other criterion tried before choosing maximum confidence? \n- How is threshold calculated for dynamic inference? Does it depend on arch or complexity of data or tasks?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Catastrophic forgetting is a key challenge in continual learning and this paper aims to address this critical issue\n- The use of linear probing to assess accuracy at different network layers is interesting and offers insights\n- The paper is well-organized and generally easy to follow"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to target catastrophic forgetting in continual learning as the problem statement. They\nIntroduce auxiliary classifiers (ACs) as a mechanism to improve performance in continual learning. The study provides analysis using linear probes and then proposes adding classifiers to intermediate layers, leveraging the fact that earlier layers of neural networks exhibit more stability. The results are shown with different Methods, naive fine-tuning , replay-based and regularizer based CL methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper’s objective is bit ambiguous. It’s unclear whether the goal is to fully mitigate catastrophic forgetting or simply to offer additional accuracy sources through auxiliary classifiers. Because, forgetting still occurs, with the method seemingly redistributing accuracy rather than eliminating forgetting. This distinction needs clarification, particularly around Line 190, where the claim that the method is \"less prone to forgetting\" may need more evidence.\n\n- Previous studies have already shown that early layers capture more generic features, while later layers capture task-specific semantics, so just early layers alone are often insufficient for reliable predictions. Further, though the paper incorporates auxiliary classifiers across layers, this approach introduces computational overhead. The lack of consistent patterns in the ablation studies also leaves it unclear how to optimally position these classifiers for a more efficient solution.\n\n- The motivation to introduce auxiliary classifiers (ACs) stems from empirical analysis, but the results show inconsistent patterns across different continual learning methods. For instance, in replay-based methods, weights remain relatively stable even without ACs, suggesting that the benefits of ACs may not be as universal as claimed. This raises the question of whether adding classifiers could be unnecessary overhead for certain methods.\n\n- LP works on frozen networks, however the hypothesis in Line 253, aims to train all classifiers, and the criteria changes. Training multiple classifiers concurrently may impact the final classifier's performance by diluting its specificity and potentially reducing network plasticity. Hence the training and the final classifier accuracy and the patterns learnt to make the prediction, can get affected ?\n\n- Empirical analysis could be more detailed. There’s limited discussion on the scalability of this method to larger networks or more extended task sequences. 
The claim of reduced forgetting (Line 190) would benefit from testing on longer task sequences (>10) and more complex (deeper) architectures. Also does the phase of training play a part, during initial epochs vs near the end of the final epochs for a task? \n\n- Other accuracy criteria such as stability and plasticity or forward/backward transfer is not provided which are important for assessing the method's full impact on continual learning.\n\n- Will this work when classes overlap, say in domain incremental learning?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the distribution of the final selected classifiers during inference? \n2. The paper observes only six intermediate layers; it would be interesting to know if similar results apply to other layers as well."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**Originality:** The focus on leveraging intermediate layer features to train ACs as a means to combat catastrophic forgetting is an innovative contribution to the field. \n**Quality:** The experimental results demonstrate that the proposed ACs significantly improves the performance of current CL methods, validating the effectiveness of the approach. \n**Clarity:** The paper is well-organized and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the stability of intermediate neural network layers and addresses the catastrophic forgetting problem in continual learning (CL) by utilizing features from these layers to train auxiliary classifiers (ACs). The proposed approach is novel and aims to enhance the robustness of existing CL methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks a detailed analysis of time complexity and computational overhead. Specifically, how much additional time and memory are required for training and inference with the introduced ACs? This is a significant concern, as the practicality of the proposed method may be limited by increased resource requirements. \n2. The description of how to train the ACs is unclear. Are the same strategies used for training all classifiers? What is the architecture of each classifier? \n3. The choice of static inference, where the classifier with the maximum probability is selected, lacks further analysis and justification. More explanation is needed on this decision-making process. \n4. In Figure 5, what does the x-axis labeled \"cost\" represent? Additionally, what value of $\\lambda$ was used in the reported results for dynamic inference?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper is essentially well-organized and easy to follow.\n\n2. The proposed ACs seem to be easy to implement and provide significant improvements over a range of continual learning baselines.\n\n3. The proposed ACs may also reduce the computation through dynamic inference."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigated the stability of intermediate neural network layers during continual learning, where early network layers tend to be more stable. The authors then proposed to integrate auxiliary classifiers (ACs) into intermediate layers and ensemble them for improving continual learning. The authors then provided extensive experiments to demonstrate the effectiveness of the proposed ACs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors claimed that “no work has yet explored the use of intermediate classifiers in the continual learning setting”. However, there are at least two papers focusing on using multiple ACs in continual learning. [1] proposed to use multiple side classifiers on the top of regularization-based methods. [2] added multiple ACs to the intermediate outputs and integrated their outputs for online continual learning.\n\n2. The entire work is essentially based on the observations that the intermediate outputs behave differently and may outperform the final outputs in some cases. Is it possible to provide some mechanistic explanation for this phenomenon? Also, the advantages of intermediate outputs in unique accuracy (Figure 3) seem to be marginal for continual learning baselines. I'm not sure this is the main reason for the improved performance of the ACs.\n\n3. The authors claimed that the dynamic inference can reduce the computation. Does this mean training costs and/or testing costs? From my understanding, the proposed ACs still need to train the entire model while skip some layers for inference.\n\n4. The experiments are mainly performed with ResNet-based architectures. Do the proposed ACs also apply to the intermediate outputs of transformer-based architectures?\n\n[1] More classifiers, less forgetting: A generic multi-classifier paradigm for incremental learning. ECCV 2020.\n\n[2] Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation. CVPR 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose to use of auxiliary classifiers, demonstrate that they improve the results across multiple continual learning methods and show how they can be used to also accelerate computation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024auxiliary,\ntitle={Auxiliary Classifiers Improve Stability and Efficiency in Continual Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1nHQRsb3Ze},\nnote={under review}\n}"
},
"abstract": {
"value": "Continual learning is crucial for applications in dynamic environments, where machine learning models must adapt to changing data distributions while retaining knowledge of previous tasks. Despite significant advancements, catastrophic forgetting — where performance on earlier tasks degrades as new information is learned — remains a key challenge. In this work, we investigate the stability of intermediate neural network layers during continual learning and explore how auxiliary classifiers (ACs) can leverage this stability to improve performance. We show that early network layers remain more stable during learning, particularly for older tasks, and that ACs applied to these layers can outperform standard classifiers on past tasks. By integrating ACs into several continual learning algorithms, we demonstrate consistent and significant performance improvements on standard benchmarks. Additionally, we explore dynamic inference, showing that AC-augmented continual learning methods can reduce computational costs by up to 60\\% while maintaining or exceeding the accuracy of standard methods. Our findings suggest that ACs offer a promising avenue for enhancing continual learning models, providing both improved performance and the ability to adapt the network computation in environments where such flexibility might be required."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"continual learning",
"class incremental learning",
"auxiliary classifiers"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2a3ccdd10ba448c6a6d7dbbb82d4e2b0267c70f8.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/c91ba6209980bfa581683abcefb787e8d0f42744.zip"
},
"title": {
"value": "Auxiliary Classifiers Improve Stability and Efficiency in Continual Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1o3fKLQPRA | DiffPath: Generating Road Network based Path with Latent Diffusion Model | main | Active | Path Generation;Latent Diffusion Model;Path Distribution;Long-range Dependencies | other topics in machine learning (i.e., none of the above) | 3;5;5;5 | 5;4;3;4 | 1;3;2;2 | 2;2;2;2 | 2;2;3;3 | 4.5 | 4 | 2 | 2 | 2.5 | -0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see my review above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This paper tackles a practical problem in the urban computing scenario. It aims to address privacy concerns and data limitations in urban navigation and planning, which is of high practical value.\n\n- The paper proposes a unique angle that is overlooked in previous works. They tend to focus on the local smoothness of the path but lose global-level constraints.\n\n- The paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces DiffPath, a path generation model that uses a latent diffusion model (LDM) and a transformer to generate realistic synthetic road paths, addressing privacy concerns and data limitations in urban navigation and planning. DiffPath embeds discrete paths into a continuous latent space, allowing it to capture complex path distributions and ensuring coherence between adjacent and distant road segments. By incorporating a customized loss function, the model aims to generate paths with rare segments often missed by traditional methods. Experimental results on datasets from Chengdu and Xi’an show that DiffPath outperforms existing approaches in generating synthetic paths that align well with real-world road networks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The experiments conducted are not enough to evaluate the claimed advantages, i.e., generate more realistic paths, especially those low-frequency ones.\n\n- The proposed method is rather straightforward. Moreover, I think using the transformer and diffusion modeling instead of autoregressive modeling are both vital for capturing long-range correlation within a path.\n\n- Similarity matric seems to suffer from bias issues. What if the generated paths are all the same but highly similar to one ground truth?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the core contribution of this work?\n2. How does the proposed framework tackle the claimed challenges?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The methodology is straightforward and easy to follow.\n2. The writing is clear and accessible.\n3. The framework has good performance on real-world datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces DiffPath, a framework aimed at addressing path generation using a latent diffusion model combined with a transformer. The authors highlight two key challenges in prior work on path generation: complex path distributions and ensuring global coherence in generated paths. They suggest that these issues can be addressed through the integration of latent diffusion models with a transformer architecture. The experimental results indicate that DiffPath performs well on two real-world datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The core contribution is confusing. This work seems to simply apply the diffusion transformer model on the path generation task without additional optimization specific to this task.\n2. While the authors claim that the proposed model addresses the challenges of capturing complex path distributions and ensuring coherence in generated paths, there is a lack of experimental evidence and analysis to support these claims."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Q1. Due to the errors in the legend and related descriptions, I do not understand why \"P2 does not consider that selecting $v_4$ will result in a longer path to reach $v_7$.\" Is the distance from $v_2$ to $v_7$ indeed longer? More justification is needed to demonstrate that the generated path adheres to the constraints of the road network to substantiate this challenge. \nQ2. Diffusion-based models typically exhibit high complexity; how does the computational complexity of DiffPath compare to the baseline?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "S1. The solution to the path generation problem offers a certain degree of protection for personal privacy. \nS2. This paper is the first to attempt the use of latent diffusion models, which excel in generative tasks, in the context of path generation, along with targeted design considerations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents DiffPath to address the challenges of complex segment distribution in path generation and to ensure global consistency of the generated paths. Experimental results validate its effectiveness in generating realistic paths."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. Compared to the de-identification of real path data, the issues of accuracy and computational complexity in path generation appear more complex and unreliable. \nW2. In related studies, the assumption of maintaining symmetry in the adjacency matrix of existing diffusion models may inaccurately represent one-way streets as bidirectional. This warrants a more in-depth discussion, as directed graphs do not necessarily require a symmetric structure in their adjacency matrices. \nW3. The legend does not correspond with the paper's description; please verify the relationship between paths P1 and P2 in Figure 2 and the accuracy of the related statement in line 64. \nW4. The ablation study analyzes replacing the Transformer with UNet but lacks a thorough analysis of the Diffusion module. \nW5. No reproducible code is provided, making it impossible to verify the validity of the research findings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "None"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. Propose transformer-based diffusion framework for path generation and validate on real-world dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This study model path generation using diffusion model and take advantage of transformer architecture to consider the long-term input."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation of this study is not convincing. In line 59, they claimed that “”\nAnother significant challenge in path generation for urban road networks is… because they do not conform to most situations in reality”, if the previous model is trained based on the real-world dataset, why do these models fail to capture suck kind of reality? Besides, it is also unclear how this study addresses the claimed challenge.\n2. The novelty is limited compared with the previously proposed diffusion based trajectory generation method[1,2]. The difference between this study and the previous one is only that this study adopts transformer architecture. Moreover, how do this study ensure topology constraint during path generation is not convincing. They proposed to clamp the predicted latent state to the nearest valid road segment embedding. How can generation convergence is guaranteed under this kind of operation? Besides, this operation is not theoretically guaranteed to meet the topology constraint.\n3. The experimental studies are not sufficient, for example, they don’t compare with other diffusion-based trajectory generation methods [1,2].\n\n[1] Zhu Y, Yu J J, Zhao X, et al. Controltraj: Controllable trajectory generation with topology-constrained diffusion model[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 4676-4687.\n\n[2] Zhu Y, Ye Y, Zhang S, et al. Difftraj: Generating gps trajectory with diffusion probabilistic model[J]. Advances in Neural Information Processing Systems, 2023, 36: 65168-65188."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024diffpath,\ntitle={DiffPath: Generating Road Network based Path with Latent Diffusion Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1o3fKLQPRA},\nnote={under review}\n}"
},
"abstract": {
"value": "With the increasing use of GPS technology, path has become essential for applications such as navigation, urban planning, and traffic optimization. However, obtaining real-world path presents challenges due to privacy concerns and the difficulty of collecting large datasets. Existing methods, including count-based and deep learning approaches, struggle with two main challenges: handling complex distributions of path segments and ensuring global coherence in generated paths. To address these, we introduce DiffPath, a path generation model based on Latent Diffusion Models (LDMs). By embedding path into a continuous latent space and leveraging a transformer architecture, DiffPath captures both local transitions and global dependencies, ensuring the generation of realistic paths. Experimental results demonstrate that our model outperforms existing approaches in generating paths that adhere to real-world road network structures while maintaining privacy."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Path Generation",
"Latent Diffusion Model",
"Path Distribution",
"Long-range Dependencies"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/804cb5324356625a9ca68254f7d734ce559f4a78.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "DiffPath: Generating Road Network based Path with Latent Diffusion Model"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1oIXRWK2WO | Learning to Optimize for Mixed-Integer Nonlinear Programming | main | Active | Mixed-Integer Nonlinear Programming;Learning to Optimize;Differentiable Optimization;Constrained Neural Networks;Deep Learning;Operations Research | optimization | 3;3;5;6 | 4;3;4;4 | 2;2;3;3 | 1;2;2;2 | 2;3;3;3 | 4.25 | 3.75 | 2.5 | 1.75 | 2.75 | 0.555556 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could you please expand on the constraint violations produced by your method? Do you have an understanding of how the proposed method handles different constraint functions? For instance, what types of constraints are well handled? What types, instead, are more difficult to satisfy?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well written and organized. The idea of integrating the rounding operations within model training is sound and allows the method to obtain superior models with respect to Learning to Optimize models that solve a relaxed version and perform the rounding operations at inference time, as shown in the experimental section. Computational advantages are also significant with respect to traditional numerical solvers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an end-to-end method for learning solutions of integers programs by enabling differentiation through the rounding operation within model training. This is done by using the Straight-through Estimator (STE) combined with the Gumbel-noise method, which smooths the discrete function representing the rounding operations to obtain useful gradients for backpropagation. The paper provides a comprehensive evaluations of the proposed method across several optimization tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern is that the proposed method cannot ensure constraint satisfaction, since it uses a soft-constraint approach. I believe that integer variables also make it difficult to perform projections to restore feasibility at inference time. Nonetheless, the percentage of infeasible solutions generated by the proposed method is low, and the results shown in Table 5 suggest that using a Lagrangian-inspired method might yield a better estimate of the dual variables, which might help to reduce constraint violations. \nThe paper might benefit from a more systematic evaluation of the impact of different constraint functions on the feasibility/violations produced by the proposed method, which might help identify scenarios and patterns where the proposed method does (or does not) produce constraint violations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could you please explain how the proposed method handles problems with different sizes?\n- Could the proposed method generalize to large instances, such as those with thousands of constraints and variables?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- MINLP is an interesting and important topic in the field of learning to optimize.\n- This paper incorporates gradient information during the optimization.\n- The presentation in this paper is good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes two differential correction layers (rounding classification and learnable threshold) that generate integer outputs while preserving gradient information. The experiments demonstrate that the proposed learning-based approach consistently produces high-quality solutions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The STE is not novel in the ML field. Moreover, the author may want to explain that combining the gradient information cannot lead to local optima.\n- While many works on learning to optimize use GNN to process problems with different sizes, the proposed method seems to use MLP with fixed-size inputs. Thus, the network may fail to process problems of various sizes.\n- The author may investigate the effects of different $\\lambda$ on the performance.\n- The author may conduct experiments on more complex instances, and the 60-second time limit is too short. Existing works in learning to optimize conduct experiments on challenging instances with at least 1000 sec of time limit [1,2].\n\n[1] A GNN-Guided Predict-and-Search Framework for Mixed-Integer Linear Programming\n\n[2] GNN&GBDT-Guided Fast Optimizing Framework for Large-scale Integer Programming"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In lines 330 to 334, the authors mention modifications made to the original quadratic problems from Donti et al. (2021). However, it remains unclear whether these modifications provide any advantages to the proposed method. A clarification on this point is necessary.\n\n2. In Algorithm 1, the authors only consider a round-down direction for integer variables. It would be beneficial to explain why the round-up direction is excluded. If the round-up direction is relevant, this should be described in detail.\n\n3. In the experiments, the authors allocate only a 60-second time budget to the exact solver. This limited timeframe may hinder the solver’s ability to find the optimal feasible solution, even if a few additional seconds are provided. It would be more informative to present a statistical distribution of % Infeasible versus Time (seconds) for the various methods evaluated."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. MINLPs arise in numerous real-world applications, making the techniques proposed in this paper significantly relevant to practical problem-solving.\n\n2. The paper is well-structured and clearly articulated, making it accessible to the reader.\n\n3. The authors assert that they are the first to introduce Straight-Through Estimator (STE) and Gumbel-Sigmoid techniques in the context of learning-to-optimize, which they identify as pivotal for efficiently generating solutions to large-scale MINLP problems."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the challenging problem of Mixed-Integer Nonlinear Programming (MINLP) within a learning-to-optimize framework, a crucial area of research with significant applications across various domains. The integration of learning approaches into MINLPs is particularly complex due to the presence of integer decision variables, which complicates gradient-based optimization techniques. To tackle this issue, the authors propose two differentiable correction methods that enable neural networks to generate high-quality integer solutions while maintaining gradient information for backpropagation. Additionally, the authors conduct a comprehensive set of experiments to demonstrate the superiority of their proposed methods compared to traditional exact search algorithms and heuristic approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The fairness of the comparisons and the definitions used in the experiments are unconvincing for several reasons:\n\n - In lines 324 to 327, the authors list the solvers compared and the corresponding types of problems. However, they do not provide sufficient justification for the selection of these solvers or explain their relevance to the specific problem types addressed.\n\n - In lines 330 to 334, the authors mention modifications made to the original quadratic problems from Donti et al. (2021), but it remains unclear whether these modifications confer any advantages to the proposed method. Clarification is needed.\n\n - The metrics employed in the experiments raise concerns. For instance, while generating low percentages of infeasible solutions quickly is noted, the implications of this metric are questionable. The time required to convert an infeasible solution into a feasible one can be substantial, thus diminishing the significance of the reported speed.\n\n - In the experiments involving simple nonconvex problems, the use of the %Unsolved metric is unconventional. It is problematic to claim a problem is solved when the provided solution is still infeasible.\n\n2. The loss function introduced in the paper essentially applies the Lagrangian multiplier method, which is not particularly novel in this field.\n\n3. Additionally, there are several typographical errors throughout the paper. The authors should conduct a thorough proofreading before submission."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the weakness part."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper focuses on applying machine learning methods to solve MINLPs. \n- It proposes novel differentiable correction layers that can potentially handle the non-differentiability of integer outputs in deep learning models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an end-to-end optimization method for solving general mixed-integer nonlinear programs (MINLPs). The proposed approach consists of two steps to generate solutions. In the first step, a neural network is employed to generate a relaxed solution that is close to the optimal solution. In the second step, another neural network provides update directions for continuous variables and rounding rules for integer variables. All of these neural networks are trained in a self-supervised manner. The Straight-Through Estimator is utilized to manage non-differentiable operations, such as rounding."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have a few serious concerns below.\n\n- First and foremost, since the proposed approach does not take advantage of the non-linear part, I think it could also be applicable to mixed-integer linear programs. Then why not also conduct computational experiments on those instances and show how good or bad it performs? We know that learning to solve MINLPs is rarely studied, but being the first to address such a problem could be a trivial thing (not a significant contribution). \n\n- Note that in the computational studies, only the right hand sides of constraints are perturbed; I recommend the authors perturb all parameters in the MINLP formulations and conduct experiments. The reason I ask such a question is that representing MINLPs using neural networks is itself a very important (and challenging) question. Note that representing linear programs or mixed-integer linear programs via neural networks has theoretical foundations, see [1] [2]. Furthermore, I do not see equality constraints in the dataset. \n\n- Can the authors consider more practical MINLP instances, such as MINLPLIB (https://www.minlplib.org/)? The dataset used in the manuscript is kind of like toy problems. I'm expecting to see the computational performance on real-life instances.\n\n- The parameter $\\lambda$ in the loss function is an important hyper-parameter for balancing feasibility and optimality, and should be analyzed more carefully. Usually, penalty methods in L2O demonstrate very weak generalization capabilities. This kind of explains why the infeasibility ratio in Table 4 is so high. I do not think penalizing constraints in the loss function is a good way. Rather, the authors should design special algorithms to handle nonlinear (and possibly non-convex) constraints. \n\n[1] Chen, Z., Liu, J., Wang, X., Lu, J. and Yin, W., 2022. On representing linear programs by graph neural networks. arXiv preprint arXiv:2209.12288.\n\n[2] Chen, Z., Chen, X., Liu, J., Wang, X. and Yin, W., 2024. Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs. arXiv preprint arXiv:2406.05938."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning to Optimize for Mixed-Integer Nonlinear Programming},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1oIXRWK2WO},\nnote={under review}\n}"
},
"abstract": {
"value": "Mixed-integer nonlinear programs (MINLPs) arise in various domains, such as energy systems and transportation, but are notoriously difficult to solve. Recent advances in machine learning have achieved remarkable success in optimization tasks, an area known as learning to optimize. This approach includes using predictive models to generate solutions for optimization problems with continuous decision variables, thereby avoiding the need for computationally expensive optimization algorithms. However, applying learning to MINLPs remains challenging primarily due to integer decision variables, which complicate gradient-based learning. To address this limitation, we propose two differentiable correction layers that generate integer outputs while preserving gradient information. The experiments demonstrate that the proposed learning-based approach consistently produces high-quality solutions for parametric MINLPs extremely quickly. As problem size increases, traditional exact solvers and heuristic methods struggle to find feasible solutions, whereas our approach continues to deliver reliable results. Our work extends the scope of learning-to-optimize to MINLP, paving the way for integrating integer constraints into deep learning models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Mixed-Integer Nonlinear Programming",
"Learning to Optimize",
"Differentiable Optimization",
"Constrained Neural Networks",
"Deep Learning",
"Operations Research"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0f1b7892bdc29a1c970846e717a71f74bcdb5a24.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Learning to Optimize for Mixed-Integer Nonlinear Programming"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1olDGAXncb | $f$-Divergence Policy Optimization in Fully Decentralized Cooperative MARL | main | Active | multi-agent;reinforcement learning;fully decentralized learning;policy optimization;convergence;independent learning | reinforcement learning | 3;3;3 | 4;4;3 | 1;3;2 | 1;2;2 | 2;3;3 | 3 | 3.666667 | 2 | 1.666667 | 2.666667 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Is the $V^*$ in Theorem 4.6 the stationary point rather than the value function corresponding to the optimal policy?\n- In the second line of Eq (23), it seems that it should be $\\Rightarrow$ instead of $\\Leftrightarrow$, because $f$ is convex rather than strongly convex"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The empirical performance of TVPO is superior to previous SOTA\n- The writing is clear except for several typos (see weaknesses)\n- The proofs are easy to follow\n- Compared to previous algorithms, TVPO is easy to implement"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes TVPO for cooperative Markov games, with the update rule of each agent as $\\pi^i_{t+1}=\\arg\\max_{\\pi^i} \\sum_{a_i} \\pi^i(a_i | s)Q_i^{\\pi_t}(s,a_i)-\\omega D_{TV}(\\pi^i(\\cdot|s)|| \\pi_t^i(\\cdot|s) )$ and shows that the algorithm can converge monotonically to the NE of the game. Moreover, TVPO with the adaptive $\\beta$ in PPO shows superior empirical performance over previous algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Comparison to Related Work\nMy major concern is that this paper seems to overlook several pieces of relevant literature. For instance, [1] and [2] both proposed algorithms for independent learning in potential Markov games, which include the cooperative Markov games investigated in this paper. Further, [1] proposed a policy gradient algorithm and [2] proposed a policy iteration algorithm, both of which are highly relevant to this paper.\n\nMoreover, the algorithm in [2] can also use the adaptive $\\beta$ in PPO. Therefore, I'm wondering if TVPO will be superior to [2] when both use an adaptive $\\beta$.\n\n## Writings\n- $i$ is superscript for $\\pi$ but subscript for $V,Q$\n- The $M$ in Proposition 4.2 and Section 4.2 differs\n- Line 152: such as...\n\nI would be happy to raise the score if the authors can resolve the issues above.\n\n[1] Leonardos, Stefanos, et al. \"Global convergence of multi-agent policy gradient in markov potential games.\" arXiv preprint arXiv:2106.01969 (2021).\n\n[2] Fox, Roy, et al. \"Independent natural policy gradient always converges in markov potential games.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2022."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why use different metrics for SMAC (win rate) and SMACv2 (return)?\n2. Due to the assumption of the global state, I suggest using Markov games [a] as the multi-agent framework.\n\na. Littman, Michael L. \"Markov games as a framework for multi-agent reinforcement learning.\" Machine learning proceedings 1994. Morgan Kaufmann, 1994. 157-163."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Detailed related work in the Fully Decentralized Learning field.\n2. The paper introduces a well-grounded technique for achieving monotonic improvement in multi-agent optimization through decentralized learning.\n3. The paper is well-structured and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores independent learning in the multi-agent reinforcement learning (MARL) setting and introduces f-divergence policy optimization. The authors analyze the limitations of the method with an illustrative example and propose defining the f-divergence as the total variation distance. Theoretical and experimental results confirm the effectiveness of the proposed approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The relevant work of CTDE is incomplete and lacks recent work, such as HASAC[a] and MAT[b].\n2. Assuming global information might influence the impact of this work.\n3. While the experiment results appear promising, the contribution is slightly insufficient compared with existing work[c,d].\n\n\na. Liu, Jiarong, et al. \"Maximum Entropy Heterogeneous-Agent Reinforcement Learning.\" The Twelfth International Conference on Learning Representations.\n\nb. Wen, Muning, et al. \"Multi-agent reinforcement learning is a sequence modeling problem.\" Advances in Neural Information Processing Systems 35 (2022): 16509-16521.\n\nc. Grudzien, Jakub, Christian A. Schroeder De Witt, and Jakob Foerster. \"Mirror learning: A unifying framework of policy optimisation.\" International Conference on Machine Learning. PMLR, 2022.\n\nd. Su, Kefan, and Zongqing Lu. \"Decentralized policy optimization.\" arXiv preprint arXiv:2211.03032 (2022)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The presentation is clear and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper utilizes f-divergence, specifically the total variation, to generalize the KL divergence in independent policy optimization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The application of f-divergence in policy optimization is not new; a comprehensive analysis of various distance constraints in policy gradients has been provided in [1].\n\n- Extending existing single-agent analysis to the multi-agent setting is reasonable, but some assumptions are questionable. Specifically, the approach assumes full observability in MARL, making the setting difficult to distinguish from single-agent reinforcement learning. Under full observability, what meaningful difference remains between centralized and decentralized control?\n\n- The performance improvement appears marginal. With full observability, IPPO has already demonstrated near-optimal performance on SMAC and Multi-Agent MuJoCo. Were the baseline hyperparameters tuned to achieve their optimal reported performance?\n\n- Why is win rate not used as the evaluation metric for SMAC-v2 tasks?\n\n\n[1] Zhang, Junyu, et al. \"Variational policy gradient method for reinforcement learning with general utilities.\" Advances in Neural Information Processing Systems 33 (2020): 4572-4583."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024fdivergence,\ntitle={\\$f\\$-Divergence Policy Optimization in Fully Decentralized Cooperative {MARL}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1olDGAXncb},\nnote={under review}\n}"
},
"abstract": {
"value": "Independent learning is a straightforward solution for fully decentralized learning in cooperative multi-agent reinforcement learning (MARL). The study of independent learning has a history of decades, and the representatives, such as independent Q-learning and independent PPO, can obtain good performance in some benchmarks. However, most independent learning algorithms lack convergence guarantees or theoretical support. In this paper, we propose a general formulation of independent policy optimization, $f$-divergence policy optimization. We show the generality of such a formulation and analyze its limitation. Based on this formulation, we further propose a novel independent learning algorithm, TVPO, that theoretically guarantees convergence. Empirically, we show that TVPO outperforms state-of-the-art fully decentralized learning methods in three popular cooperative MARL benchmarks, which verifies the efficacy of TVPO."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"multi-agent",
"reinforcement learning",
"fully decentralized learning",
"policy optimization",
"convergence",
"independent learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/842e3e37b9d28504e3c60d30deedcbffd7c07683.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/c1ecd1804f1643ae8c3c2cfd13f4b669b01993bb.zip"
},
"title": {
"value": "$f$-Divergence Policy Optimization in Fully Decentralized Cooperative MARL"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ou5noWgHM | Source Attribution for Large Language Model-Generated Data | main | Active | Large Language Model;Source Attirbution | foundation or frontier models, including LLMs | 3;3;5;5;6 | 4;4;4;3;4 | 2;2;2;2;2 | 2;2;2;3;3 | 2;2;3;3;3 | 4.4 | 3.8 | 2 | 2.4 | 2.6 | -0.25 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The authors claim that source attribution is a new task proposed by them. I need more explanation of the differences between source attribution and text watermarks for copyright protection.\n\n2. The authors claim that, through the design of splitting the linear layer, the WASA-LLM can avoid perturbations. However, as far as I can tell from Figure 3, all hidden states (including hidden embeddings of word tokens) take part in the forward pass of We′[V + 1 : V + V ′] and will influence the outputs (generated watermark tokens). So, perturbations on input words do have effects on the output watermarks.\n\n3. There may be some challenges in proving the authors' claim. The authors use a one-hot vector for data from a single provider. However, data from the same provider may differ greatly in distribution, and data from different providers may be similar. For instance, data from arXiv and DBLP may have similar distributions, as both contain scientific papers, while data from social media may vary widely in topics and ideas. How can the authors show that, despite this problem, their proposed method still works well? Extra experiments are needed.\n\n4. I also want to know the implementation details. As is well known, the way new tokens are added to the vocabulary is important for the final results. How do you initialize the embeddings of the watermark Unicode tokens? And do you update the embedding parameters during training? This design may be important for the results.\n\n5. The authors use GPT-2 (arguably not an LLM) and LLaMA-2 for the experimental results. However, open-source LLMs with better capability have been proposed since. LLaMA-3-8B [1] and other LLMs may be good choices. Supplementary experiments on LLaMA-3-8B would show the performance.\n\n[1] Dubey, Abhimanyu, et al. \"The llama 3 herd of models.\" arXiv preprint arXiv:2407.21783 (2024)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Provide convincing real-world application for readers.\n- Clear definition of the source attribution problem.\n- Large amount of main experiments and ablation studies to show the aspects of a good source attribution algorithm the author claims."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce a framework named WASA (Watermarking for Source Attribution) that embeds unique, imperceptible watermarks into the data used for training LLMs. This approach enables the identification of specific data providers when synthetic texts are generated, thus providing a solution for source attribution. The paper discusses the key properties of an effective source attribution system, including accuracy, robustness against attacks, scalability, and performance preservation. WASA is demonstrated to achieve high source attribution accuracy while maintaining the generation quality of the LLMs. It utilizes unique Unicode characters as watermarks and is shown to be effective in empirical evaluations, even under adversarial conditions such as text modification. This work positions itself as a pioneering solution for source attribution in LLM-generated outputs, offering significant implications for data protection and IP verification in AI-generated content."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Unclear difference from text watermarking for copyright protection.\n- Not enough evidence for avoiding perturbation attacks on word tokens.\n- Data distribution problems across different data providers.\n- Implementation details: embedding settings.\n- Lack of experiments on recently proposed LLMs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1: The method is a pre-training source attribution method. How would it integrate into pipelines of continuous training, where the number of data providers may also be growing?\n\nQ2: The authors discuss the performance drop as the number of data providers grows. How would this method scale to thousands or millions of data providers?\n\nQ3: Can you further motivate the argument for source attribution via training rather than search? Results in the paper show that for 500 data providers, WASA is better than BM25. But in practice, data providers may number in the millions rather than the hundreds."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The authors propose a novel method that tackles source attribution, an important and difficult problem.\n- The authors lay out clear desiderata for source attribution and demonstrate that the proposed method has promise in satisfying the desiderata.\n- The writing and presentation of the paper is clear and easy to follow. Experiments are well set up and detailed for each desiderata."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors study how to generate source attribution—identifying data sources that influence specific outputs—for LLMs. The authors discusses a list of effective source attribution desiderata: 1) accuracy, 2) robustness, 3) performance preservation, 4) scalability, 5) transferability, 6) adaptability. The authors propose WASA which embeds invisible characters into the sentences that are most representative of a data provider. WASA-LLM can fit in during or after the pre-training stage. The framework learns to insert watermark randomly in the desired sentence, by a modified transformer structure, where there is a separation of text and watermark token predictions. This benefits WASA-LLM in generating watermark for clean sentences."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed source attribution method requires pre-training and performs worse with a growing number of data providers. See Q1, Q2, Q3.\n2. Other related work: \nhttps://arxiv.org/pdf/2302.14035 \nhttps://arxiv.org/pdf/2403.03187\nhttps://arxiv.org/pdf/2311.12233\n3. The main experimental comparison is against BM25, though BM25 has limitations with changed word order and captures fewer semantic relationships. Experiments would be stronger if compared with other retrieval methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Can this framework be applied to code data?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This is a popular topic that explores the attribution of sources for text generated by LLMs, a crucial issue for effective data regulation in the age of large language models.\n- The proposed WASA framework is well-defined, considering key attributes for practical application such as accuracy, robustness, and scalability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The article addresses the challenge of attributing sources for synthetic text generated by large language models (LLMs). It presents a framework called \"Watermark for Source Attribution\" (WASA), which embeds watermarks in the generated text to identify the data sources used during LLM training. This framework aims to ensure accurate source attribution, considering factors such as robustness to attacks, scalability, performance retention, transferability, and adaptability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Despite this, the method's practical applicability remains weak and raises concerns:\n\n- Unlike recent watermarking efforts that focus on injecting watermarks during the model generation process, this approach targets pre-training data for various providers. Therefore, a potential attack could involve provider B using provider A's data and repeatedly injecting watermarks to attribute the content to provider B. In a more common scenario under AI-assisted writing, if provider A uses provider B's WASA-LLM for text refinement, even for simple grammar checks, provider B's content might inadvertently receive provider A's watermark, leading to intellectual property conflicts.\n- The consideration for attacks is insufficient; stronger paraphrasing is necessary beyond simple changes to prepositions, tenses, and syntax. This means semantically equivalent rewriting, as demonstrated by the DIPPER paraphraser [1]'s effectiveness against watermarks.\n- The technique relies on classic text steganography. Effective defenses include: 1. Scanning and cleaning all Unicode characters; 2. Injecting numerous Unicode characters for perturbation. This raises questions about the effectiveness of WASA-LLM.\n\n- Additionally, if the method cannot attribute output to multiple data sources, it cannot truly identify specific sources influencing a particular output, as claimed. This is similar to data provenance, offering only binary determination. Techniques like those by Kirchenbauer et al. [2] can assign keys to each provider to achieve this identification, which diminishes the distinct contribution of this paper compared to other watermarking work.\n\nOverall, while the motivation is novel, the method seems insufficiently comprehensive. If the authors address these weaknesses convincingly, I am open to revising my evaluation.\n\n[1] Krishna, K., Song, Y., Karpinska, M., Wieting, J., & Iyyer, M. (2024). Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defense. Advances in Neural Information Processing Systems, 36.\n[2] Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023, July). A watermark for large language models. In International Conference on Machine Learning (pp. 17061-17084). PMLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why is it more effective to allow a data provider to verify if their data was used to train an honest LLM when addressing IP issues?\n2. In the effectiveness experiments, the comparative baselines for source attribution seem limited. They rely solely on the simple probabilistic model BM25. More advanced methods, such as machine learning approaches, exist for estimating the relevance of generated texts to data providers. How does the proposed WASA method perform compared to these machine learning techniques?\n3. What is the specific impact of the watermarking process on the computational resources and performance of the LLM, especially in large-scale applications?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "+ This paper introduces a new task that is more challenging than traditional data provenance, as it requires more detailed information about the data source. It successfully tackles this challenge by using watermarking techniques, which enable precise identification and tracking of the original data sources.\n+ This paper identifies six key properties essential for successful source attribution. To address these, the authors develop a framework designed to meet multiple critical requirements, ensuring that the system is both versatile and functional.\n+ Through extensive empirical evaluations, including ablation studies and comparisons with alternative methods, the paper demonstrates the effectiveness, robustness, scalability, performance preservation, and adaptability of the WASA framework."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles the challenge of source attribution for texts generated by LLMs, aiming to protect intellectual property. It introduces a framework called WASA, which embeds watermarks in generated texts to trace back the data providers involved in training the LLM. WASA is designed to ensure accurate attribution while maintaining robustness against adversarial attacks, preserving performance, and scaling to accommodate a large number of data providers. Additionally, it is transferable and adaptable across different LLMs. Extensive empirical experiments demonstrate the framework’s effectiveness in source attribution."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing style is unclear, making the paper's motivation less apparent. It claims that source attribution addresses IP concerns related to synthetic texts generated by LLMs. However, it fails to clearly explain why allowing a data provider to verify the use of their data in training an honest LLM is a more effective solution for these IP issues.\n2. This paper highlights robustness as a key feature and demonstrates it against multiple attacks. However, it overlooks a simple method for watermark removal. Specifically, the watermark could be removed using basic standard formatting methods.\n3. Embedding and regenerating watermarks may increase computational overhead, particularly in large-scale applications. Yet, the paper does not offer a detailed analysis of how this affects performance and resource usage."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Q1: What is the impact of the random insertion of the watermark during training and inference? Could it instead be fixed?\n- Q2: How is this approach different from training the model to generate a citation like \"Sentence [arxiv:math]\"? If the citation can be reconstructed anyway, we don't need to be limited to invisible Unicode characters.\n- Q3: How do we ensure the model memorizes the watermark/citation? Or how can we be sure of it?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- S1: The proposed approach is simple and generally applicable to many existing LLM architectures and training schemes.\n- S2: The evaluation shows that the proposed approach outperforms the baseline by a large margin.\n- S3: The approach is generally well presented.\n- S4: The paper presents the negative result that the normal performance of the LLM can degrade with this defense, setting expectations for when this approach is adopted."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper attributes the source of LLM-generated text using invisible Unicode characters included during training. The approach is evaluated with 20 sources to show that the source can be correctly identified. The proposed approach is compared with BM25 and shown to outperform it by a 17-29% margin."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- W1: The approach is evaluated with only 20 sources, limiting the understanding of its real-world impact. Thus, it is unclear if the watermark will survive with many more sources (e.g., thousands to millions), which would be closer to the real world.\n- W2: The baseline approach could be stronger. A first, simplistic approach would be training BERT to classify the generated text into sources (similarly to Matching Pairs, Foley et al., 2023 @ ACL), given that the number of sources is only 20. For a large number of sources, a Siamese model or 1/k-shot classification can be used. BM25 is not a conventional baseline for a classification task.\n- W3: The accuracy evaluation uses samples directly from the data providers, which is not realistic in modern LLM usage since more information, context, structure, or other utterances will be present. This trivializes the problem to a typical classification task such as topic classification."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper proposes a WASA framework, which is the first framework capable of producing LLMs whose generated texts allow for effective source attribution."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024source,\ntitle={Source Attribution for Large Language Model-Generated Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1ou5noWgHM},\nnote={under review}\n}"
},
"abstract": {
"value": "The impressive performances of Large Language Models (LLMs) and their immense potential for commercialization have given rise to serious concerns over the Intellectual Property (IP) of their training data. In particular, the synthetic texts generated by LLMs may infringe the IP of the data being used to train the LLMs. To this end, it is imperative to be able to perform source attribution by identifying the data provider who contributed to the generation of a synthetic text by an LLM. In this paper, we show that this problem can be tackled by watermarking, i.e., by enabling an LLM to generate synthetic texts with embedded watermarks that contain information about their source(s). We identify the key properties of such watermarking frameworks (e.g., source attribution accuracy, robustness against adversaries), and propose a source attribution framework that satisfies these key properties due to our algorithmic designs. Our framework enables an LLM to learn an accurate mapping from the generated texts to data providers, which sets the foundation for effective source attribution. Extensive empirical evaluations show that our framework achieves effective source attribution."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Model",
"Source Attribution"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ebe8450a80c5918826dd79cf39a506ad8d35af2c.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/dcd27123e842caf6eef458314b0007b56e90eece.zip"
},
"title": {
"value": "Source Attribution for Large Language Model-Generated Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1p6xFLBU4J | GenSE: Generative Speech Enhancement via Language Models using Hierarchical Modeling | main | Active | speech enhancement;language model;semantic information | applications to computer vision, audio, language, and other modalities | 3;5;6;6 | 5;5;4;2 | 2;2;3;3 | 2;2;3;2 | 3;3;4;3 | 5 | 4 | 2.5 | 2.25 | 3.25 | -0.666667 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "n/a"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The lack of novelty mentioned in the weaknesses section diminishes the overall contribution of this paper. Without a substantially innovative approach, I am inclined to recommend rejection."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is clearly written and easy to follow.\n- The proposed approach demonstrates the effectiveness of the decoder-only architecture for conventional signal processing tasks, such as speech enhancement (SE)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "- This paper introduces a language model-based generative speech enhancement system, termed GenSE.\n- The system comprises two primary components: a decoder-only model that enhances noisy tokens into clean tokens, and a neural speech codec, SimCodec, which reconstructs waveforms from the enhanced clean tokens."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed approach lacks significant novelty, which is the primary reason for my decision to reject the paper. However, please correct me if I am mistaken, as I am open to revisiting my assessment.\n\n- Concerning speech enhancement (SE) using language models (or the decoder-only architecture), similar approaches have already been introduced in:\n\n[1] Wang, X., Thakker, M., Chen, Z., Kanda, N., Eskimez, S. E., Chen, S., ... & Yoshioka, T. (2024). Speechx: Neural codec language model as a versatile speech transformer. IEEE/ACM Transactions on Audio, Speech, and Language Processing. \n[2] Yang, D., Tian, J., Tan, X., Huang, R., Liu, S., Chang, X., ... & Meng, H. (2023). Uniaudio: An audio foundation model toward universal audio generation. arXiv preprint arXiv:2310.00704.\n\nNeither of these references is cited.\n\n- Similarly, with regard to the neural speech codec, an analogous method was proposed in:\n\n[3] Li, H., Xue, L., Guo, H., Zhu, X., Lv, Y., Xie, L., ... & Li, Z. (2024). Single-Codec: Single-Codebook Speech Codec towards High-Performance Speech Generation. arXiv preprint arXiv:2406.07422.\n\nThis work is also not referenced. Given these omissions, I judge the paper as lacking sufficient originality for acceptance. I believe all referenced works were available prior to the ICLR submission."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could you provide a comparison of SimCodec with Vocos (Siuzdak, 2023) and WavTokenizer (Ji, 2024)?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed generative framework leverages language models and discrete speech tokens to outperform state-of-the-art speech enhancement systems in terms of speech quality and generalization capability.\n2. The paper introduces a hierarchical modeling approach that separates the denoising and generation stages, improving the stability and performance of the LM-based generation process.\n3. The paper is clearly written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents GenSE, a novel generative framework for speech enhancement that leverages language models (LMs) and discrete speech tokens. GenSE employs a single-quantizer neural codec model called SimCodec to extract acoustic tokens from speech, reducing the complexity compared to previous multi-quantizer codecs. It also introduces a hierarchical modeling approach that separates the denoising and generation stages, with a noise-to-semantic (N2S) module transforming noisy speech into clean semantic tokens, and a semantic-to-speech (S2S) module generating clean acoustic tokens."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The ablation studies are relatively insufficient. For example, it would be helpful to provide a detailed analysis of what information is contained in noisy/clean semantic tokens and noisy/clean acoustic tokens, respectively."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Could the authors provide more insight into how SimCodec might perform under different network conditions, especially with low latency or limited bandwidth?\n2. How does the system handle speaker identity in cases of domain shifts, such as across different languages, accents, and ages, and would an alternative to XLSR affect GenSE’s generalization capability?\n3. For practical implementation, are there considerations for reducing the computational overhead of the hierarchical modeling method, perhaps through model pruning or compression techniques?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. GenSE offers a unique perspective by reframing SE as a language modeling task, using semantic information to enhance robustness. This represents a notable departure from conventional deterministic mapping in SE.\n2. The hierarchical modeling method, separating semantic and acoustic token generation, improves both quality and intelligibility of enhanced speech, as evidenced by superior metrics across DNSMOS and SECS.\n3. The authors present a detailed breakdown of the methodology and technical architecture, providing clear diagrams and tables that make complex processes accessible.\n4. By addressing the limitations of traditional SE approaches in handling complex noise environments, GenSE has the potential to impact real-world applications in noisy and challenging acoustic settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces GenSE, a generative speech enhancement (SE) framework that integrates language models (LM) to leverage semantic information for enhancing speech signals. Unlike traditional SE methods that focus on signal mapping, GenSE treats SE as a conditional language modeling task. By tokenizing speech into semantic and acoustic tokens using a novel codec (SimCodec) and employing a hierarchical approach, GenSE aims to maintain speaker consistency and improve speech quality under noisy conditions. Experiments demonstrate GenSE’s significant improvements over state-of-the-art SE systems in both quality and robustness to noise."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The hierarchical design and multiple components in GenSE, while effective, may pose a challenge in real-time applications. Simplifying or optimizing these processes further could improve usability.\n2. Although SimCodec effectively reduces token count, further exploration into balancing token complexity and quality in low-bandwidth scenarios could enhance GenSE’s adaptability.\n3. The two-stage quantizer reorganization might benefit from more empirical comparisons with other single-quantizer methods such as WavTokenizer, as these details are relatively underexplored."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My questions are included in the weaknesses part."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed hierarchical modeling method that separates the denoising and generation stages is effective.\n2. The proposed SimCodec reduces the number of tokens in the generation process, which would benefit all speech generation tasks.\n3. The experimental results and demo audios are promising."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel approach to speech enhancement (SE) called GenSE, which integrates semantic information into the enhancement process using language models (LMs). Traditional SE methods often ignore the semantic context, focusing solely on mapping noisy to clean speech, which can lead to performance issues in challenging environments. GenSE redefines SE as a conditional language modeling task by leveraging LMs to predict discrete acoustic tokens based on semantic information. It also separates the denoising and generation stages, improving prediction stability and incorporating a token chain prompting mechanism to maintain timbre consistency. The proposed SimCodec Model achieves remarkable reconstruction quality at a lower bit rate. Experimental results show that GenSE outperforms existing SE systems, demonstrating improved intelligibility and robustness in noisy conditions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**The main issues with this paper lie in the design of SimCodec and the lack of some experimental details:** \n*SimCodec*:\n1. The issue of low codebook usage with a large codebook size has been identified in the field of computer vision for a long time, and there are already many solutions available [1, 2]. Although this work proposes the codebook reorganization strategy to solve this issue, there are no ablation comparisons between this strategy and baselines like CVQ [2] and FSQ [3]. These comparisons are important for validating the effectiveness of the reorganization strategy proposed in this paper. \n2. The codebook reorganization strategy employs two quantizers at the first stage and concat the two quantizers at the second stage. This process is slightly similar to the GRVQ technique of Hifi-Codec [4] and the multichannel quantization of MoVQ [5]. I think the comparative experimental results of these two techniques should be added to Table 3. And the authors should discuss how their approach differs from or improves upon GRVQ and MoVQ. \n3. I think Figure 6 looks extremely similar to Figure 1 in WavTokenizer [8], even the colors of the baselines are the same. However, this paper does not compare with WavTokenizer. An explanation why WavTokenizer was not included in the comparison and how their work differs from or builds upon WavTokenizer is required, or 2) Include WavTokenizer as a relevant baseline. \n\n*Some experimental details*: \n1. Real-time generation is crucial for speech enhancement models, but the experiments of this paper do not mention the real-time factor (RTF) of the GenSE model. While Table 4 demonstrates that token chain prompting and hierarchical modeling are highly effective, it also does not indicate how much delay these methods introduce.\n2. 
In Section 3.3.2, the prefix token of GenSE at the S2S stage contains noisy acoustic tokens, clean semantic tokens, and noisy semantic tokens, which significantly increase the sequence length in training and inference. This paper lacks a specific analysis of the trade-offs between performance gains and computational costs of the introduced prefix sequence. \n3. Mapping from semantic to acoustic using a flow-matching model has proven to be highly effective in many previous studies [6, 7]. The authors could explain why they chose their current approach instead of a flow-matching model for the S2S module, discussing potential advantages and disadvantages. Alternatively, they might consider implementing a flow-matching model as an additional baseline in their experiments to compare its performance with their current method. \n\n**Minor questions that would not influence the scores:**\n1. Do you use greedy decoding for decoder LM? Will beam search improve the performance of the model? \n\n**Minor clarity issues**:\n1. In Section 3.2.3, Line 264, ``we reinitialize the encoder and decoder parameters to fit the new codebook dimension, while copying the parameters from the first stage``, the use of \"reinitialize\" in the first half of the sentence introduces clarity issues;\n2. In Section 3.3.1, Line 293, ``Meanwhile, the self-supervised model is also noise-robust to some extent.`` Some citations can be added here to demonstrate that this phenomenon actually exists.\n\n**Minor typos**: \n1. In Section 1, Line 052, the quotes of ``textless NLP``;\n2. In Figure 6, `Our` -> `Ours`.\n\n**Conclusion**: \nThe SimCodec and hierarchical modeling method proposed in this paper are not particularly novel, as there have been related studies in fields such as Computer Vision and Speech Generation. However, the experimental results are still quite impressive. If the authors could address my concerns, I would increase the score.\n\n[1] Yu, Jiahui, et al. 
\"Vector-quantized image modeling with improved vqgan.\" arXiv preprint arXiv:2110.04627 (2021). \n[2] Zheng, Chuanxia, and Andrea Vedaldi. \"Online clustered codebook.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. \n[3] Mentzer, Fabian, et al. \"Finite scalar quantization: Vq-vae made simple.\" arXiv preprint arXiv:2309.15505 (2023). \n[4] Yang, Dongchao, et al. \"Hifi-codec: Group-residual vector quantization for high fidelity audio codec.\" arXiv preprint arXiv:2305.02765 (2023). \n[5] Zheng, Chuanxia, et al. \"Movq: Modulating quantized vectors for high-fidelity image generation.\" Advances in Neural Information Processing Systems 35 (2022): 23412-23425. \n[6] Du, Zhihao, et al. \"Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens.\" arXiv preprint arXiv:2407.05407 (2024). \n[7] Anastassiou, Philip, et al. \"Seed-TTS: A Family of High-Quality Versatile Speech Generation Models.\" arXiv preprint arXiv:2406.02430 (2024). \n[8] Ji, Shengpeng, et al. \"Wavtokenizer: an efficient acoustic discrete codec tokenizer for audio language modeling.\" arXiv preprint arXiv:2408.16532 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A generative speech enhancement framework tailored for language model-based speech enhancement."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024gense,\ntitle={Gen{SE}: Generative Speech Enhancement via Language Models using Hierarchical Modeling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1p6xFLBU4J},\nnote={under review}\n}"
},
"abstract": {
"value": "Semantic information refers to the meaning conveyed through words, phrases, and contextual relationships within a given linguistic structure. Humans can leverage semantic information, such as familiar linguistic patterns and contextual cues, to reconstruct incomplete or masked speech signals in noisy environments. However, existing speech enhancement (SE) approaches often overlook the rich semantic information embedded in speech, which is crucial for improving intelligibility, speaker consistency, and overall quality of enhanced speech signals. To enrich the SE model with semantic information, we employ language models as an efficient semantic learner and propose a comprehensive framework tailored for language model-based speech enhancement, called GenSE. Specifically, we approach SE as a conditional language modeling task rather than a continuous signal regression problem defined in existing works. This is achieved by tokenizing speech signals into semantic tokens using a pre-trained self-supervised model and into acoustic tokens using a custom-designed single-quantizer neural codec model. To improve the stability of language model predictions, we propose a hierarchical modeling method that decouples the generation of clean semantic tokens and clean acoustic tokens into two distinct stages. Moreover, we introduce a token chain prompting mechanism during the acoustic token generation stage to ensure timbre consistency throughout the speech enhancement process. Experimental results on benchmark datasets demonstrate that our proposed approach outperforms state-of-the-art SE systems in terms of speech quality and generalization capability. Codes and demos are publicly available at https://anonymous.4open.science/w/gen-se-7F52/."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"speech enhancement",
"language model",
"semantic information"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/615d43eb0a165e2fc969edaec10f54a029f5713e.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "GenSE: Generative Speech Enhancement via Language Models using Hierarchical Modeling"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1pXzC30ry5 | RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything | main | Active | segment anything; real-time segmentation; multi-purpose model; | applications to computer vision, audio, language, and other modalities | 5;6;6;6 | 4;3;4;3 | 3;3;3;3 | 2;3;3;3 | 2;2;3;3 | 5.75 | 3.5 | 3 | 2.75 | 2.5 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Do you have the results for TopFormer in Table 3? Additionally, please bold the results in all comparison tables for clarity."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The model achieves multi-purpose segmentation through an efficient structure and unified training approach.\n2. The paper is well-written and easy to follow.\n3. The experiments on panoptic segmentation, interactive segmentation, and video segmentation are solid, comprehensive, and persuasive, effectively demonstrating the model's contribution."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a real-time, versatile segmentation model capable of interactive segmentation, panoptic segmentation, and video instance segmentation. \nWhile retaining the SAM encoder-decoder structure, the model incorporates an efficient encoder and adapter to enhance performance.\nIn the decoder, RAP-SAM introduces a three-stage pipeline that leverages novel pooling-based dynamic convolutions to refine mask tokens. Following the decoder, two additional prompt adapters are implemented to improve interaction between visual prompts and segmentation tokens.\nRAP-SAM demonstrates efficiency and generalizability across various segmentation benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks a detailed comparison with other SAM-like methods. A single COCO instance segmentation comparison in Table 4 is insufficient to substantiate claims of superiority over SAM. The results presented in Table 4 are not particularly outstanding. Additional experiments, such as on the SegAny task, with detailed metrics (AP for small, medium, large objects) on COCO instance segmentation, and evaluations with different object detectors, would strengthen the case.\n\n2. Efficiency benchmarks are insufficiently detailed. For a model promoting efficiency, there should be a more comprehensive evaluation across different GPU platforms, such as the 3090 and V100, testing throughput and latency. Additionally, plotting latency versus performance compared to other SAM-like methods would provide a clearer visualization of the model's efficiency."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.Demonstrates impressive performance and inference speed.\n\n2.Filling the gap in real-time multi-purpose segmentation.\n\n3.The whole method is very simple and easy to understand.\n\n4.Code is provided for easy reproduction by the reader."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work addresses the need for real-time multi-purpose segmentation by introducing a novel setting that encompasses interactive, panoptic, and video instance segmentation, striving for a single end-to-end model capable of handling all tasks in real-time. The proposed Real-Time Multi-Purpose SAM (RMP-SAM) utilizes an efficient encoder and a decoupled adapter for prompt-driven decoding, along with innovative training strategies and adapter designs, demonstrating effectiveness and strong generalization across benchmarks and specific semantic tasks while achieving an optimal balance between accuracy and speed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.Based on existing technology development, the entire pipeline is not novel.\n\n2.Differences with SAMv2 should be further clarified, especially in terms of claimed semantic labels?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- RMP-SAM unifies interactive segmentation, panoptic segmentation, and video instance segmentation within a single model.\n- RMP-SAM offers a good trade-off between speed and accuracy.\n- Extensive experiments demonstrate the model's effectiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a real-time multi-purpose segmentation model called RMP-SAM. RMP-SAM handles various tasks such as interactive segmentation, panoptic segmentation, and video instance segmentation using a single model. To balance the accuracy and speed, RMP-SAM utilizes a lightweight encoder and a dynamic convolution-based decoder. RMP-SAM achieves fast inference while maintaining satisfactory performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The authors do not provide detailed information for joint training. Joint training for multiple tasks can be complex. How do the authors train RMP-SAM for some potential problems, such as avoiding the model being dominated by a single task and performance degradation by conflicts between different tasks?\n\n- This paper ignores some related methods, making it difficult to assess the model's performance relative to existing SOTA approaches. For example, some universal methods[1,2,3] obtain better results than RMP-SAM using ResNet50. The authors should make a comprehensive comparison with other methods. \n\n[1] Tube-Link: A flexible cross tube framework for universal video segmentation. CVPR 2023.\n\n[2] Dvis: Decoupled video instance segmentation framework. CVPR 2023.\n\n[3] Univs: Unified and universal video segmentation with prompts as queries. CVPR 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- What does the dot size in Fig. 1b indicate?\n- The abstract says \"generalization ability of these models across diverse scenarios\", a learnable classifier with CLIP text embeddings is also used and “segment anything” is in the title. Is there a connection to open-vocabulary?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Large models can perform many tasks, but are not real-time capable because of the large encoders, while real-time models are often specialized in only one task. The method presented here aims to combine the two things, i.e., \"the first real-time multi-purpose segmentation model\".\n- Precise implementation details are given and the comparisons with the other methods appear to be fair.\n- The method achieves good results in the trade-off between performance and speed across the various tasks and datasets.\n- The ablation studies are useful and show interesting insights."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors explore a novel real-time segmentation setting called real-time multi-purpose segmentation. It contains three fundamental sub-tasks: interactive segmentation, panoptic segmentation, and video instance segmentation. In contrast to previous methods that use a separate design for each task, the authors use only a single end-to-end model to handle all these tasks in real time. To fulfill the real-time requirements and balance multitask learning, a new dynamic convolution-based method, Real-Time Multi-Purpose SAM (RMP-SAM), is introduced. They benchmark several strong baselines by extending existing work to support multi-purpose segmentation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Many architectural elements were adopted from other works, it is not clear to me if there are already similar architectures as proposed here, or where exactly is the innovation (except the jointly training).\n- In the related work section, many works are cited and also compared at the task level, but I also miss a comparison at the architectural level.\n- The tables, especially table 3, are difficult to read because nothing is in bold print and you have to search for the trade-off here. A plot like Fig. 1b would be more useful.\n- The references to the appendix could be a little more precise and there is no reference to Table 2."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024rmpsam,\ntitle={{RMP}-{SAM}: Towards Real-Time Multi-Purpose Segment Anything},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1pXzC30ry5},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent segmentation methods, which adopt large-scale data training and transformer architecture, aim to create one foundation model that can perform multiple tasks.\n However, most of these methods rely on heavy encoder and decoder frameworks, hindering their performance in real-time scenarios.\n To explore real-time segmentation, recent advancements primarily focus on semantic segmentation within specific environments, such as autonomous driving. However, they often overlook the generalization ability of these models across diverse scenarios.\n Therefore, to fill this gap, this work explores a novel real-time segmentation setting called real-time multi-purpose segmentation.\n It contains three fundamental sub-tasks: interactive segmentation, panoptic segmentation, and video instance segmentation. \n Unlike previous methods, which use a specific design for each task, we aim to use only a single end-to-end model to accomplish all these tasks in real-time.\n To meet real-time requirements and balance multi-task learning, we present a novel dynamic convolution-based method, Real-Time Multi-Purpose SAM (RMP-SAM). \n It contains an efficient encoder and an efficient decoupled adapter to perform prompt-driven decoding. \n Moreover, we further explore different training strategies and one new adapter design to boost co-training performance further. \n We benchmark several strong baselines by extending existing works to support our multi-purpose segmentation.\n Extensive experiments demonstrate that RMP-SAM is effective and generalizes well on proposed benchmarks and other specific semantic tasks. \n Our implementation of RMP-SAM achieves the optimal balance between accuracy and speed for these tasks.\n Code and model will be available.\n %"
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"segment anything; real-time segmentation; multi-purpose model;"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/30979426da6d204ba216f640a2788532112ac7fc.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1poUSIGSCI | Unsupervised Panoptic Interpretation of Latent Spaces in GANs Using Space-Filling Vector Quantization | main | Desk Reject | Interpretability;Interpretable Latent Space;Interpretable Directions;Space-Filling Vector Quantization | interpretability and explainable AI | Mohammad Hassan Vali;Tom Bäckström | ~Mohammad_Hassan_Vali1;~Tom_Bäckström1 | 0 | 0 | 0 | 0 | 0 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": {
"value": "This submitted PDF is not anonymous, which violates the ICLR double blind policy."
},
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Submission Desk Rejected by Program Chairs"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nvali2024unsupervised,\ntitle={Unsupervised Panoptic Interpretation of Latent Spaces in {GAN}s Using Space-Filling Vector Quantization},\nauthor={Mohammad Hassan Vali and Tom B{\\\"a}ckstr{\\\"o}m},\nyear={2024},\nurl={https://openreview.net/forum?id=1poUSIGSCI}\n}"
},
"abstract": {
"value": "Generative adversarial networks (GANs) learn a latent space whose samples can be mapped to real-world images. Such latent spaces are difficult to interpret. Some earlier supervised methods aim to create an interpretable latent space or discover interpretable directions that require exploiting data labels or annotated synthesized samples for training. However, we propose using a modification of vector quantization called space-filling vector quantization (SFVQ), which quantizes the data on a piece-wise linear curve. SFVQ can capture the underlying morphological structure of the latent space and thus make it interpretable. We apply this technique to model the latent space of pretrained StyleGAN2 and BigGAN networks on various datasets. Our experiments show that the SFVQ curve yields a general interpretable model of the latent space that determines which part of the latent space corresponds to what specific generative factors. Furthermore, we demonstrate that each line of SFVQ's curve can potentially refer to an interpretable direction for applying intelligible image transformations. We also showed that the points located on an SFVQ line can be used for controllable data augmentation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Mohammad_Hassan_Vali1",
"~Tom_Bäckström1"
]
},
"authors": {
"value": [
"Mohammad Hassan Vali",
"Tom Bäckström"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Interpretability",
"Interpretable Latent Space",
"Interpretable Directions",
"Space-Filling Vector Quantization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "vali|unsupervised_panoptic_interpretation_of_latent_spaces_in_gans_using_spacefilling_vector_quantization"
},
"pdf": {
"value": "/pdf/69f2ded10bb288482b1a8042e98632d4a6275079.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Unsupervised Panoptic Interpretation of Latent Spaces in GANs Using Space-Filling Vector Quantization"
},
"venue": {
"value": "ICLR 2025 Conference Desk Rejected Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Desk_Rejected_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
||||||||||
1qGkuxI9UX | Aligning Language Models with Demonstrated Feedback | main | Active | personalization;few-shot learning;human computer interaction;alignment | alignment, fairness, safety, privacy, and societal considerations | 5;6;6;8 | 4;4;4;4 | 3;3;3;2 | 3;3;2;3 | 3;4;3;3 | 6.25 | 4 | 2.75 | 2.75 | 3.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Do you think DITTO would be effective for the coding skills or mathematical problem solving skills of an LLM?\n- Have you attempted training the LLM without LoRA, using full fine-tuning instead?\n- What kind of source code is used to generate online responses? If you were to train a much larger LLM (such as LLAMA 72B), would it be feasible to apply the online imitation learning method in the same way?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper proposes a data-efficient training method that enables LLMs to follow expert demonstrations. The Reinforcement Learning from Human Feedback (RLHF) data can be continuously generated by simply comparing expert demonstrations with the intermodel's responses. This approach can also be seen as a blend of Reinforcement Learning from AI Feedback (RLAIF) and RLHF, making it a reasonable and effective method.\n- The authors demonstrate the performance improvements of DITTO-trained models using GPT-4 evaluation and validate the method's effectiveness through a large-scale user study.\n- They provide a theoretical perspective on the connection between online imitation learning and demonstrate that online imitation learning can outperform Supervised Fine-Tuning (SFT). The mathematical derivation and explanations are clear, and the results are further supported by meticulously designed ablation studies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel method, Demonstration Iterated Task Optimization (DITTO), for training large language models (LLMs) with expert demonstration datasets in a more data-efficient manner. Through a mathematical derivation, the authors illustrate how DITTO functions as a form of online imitation learning. They validate the method's effectiveness by utilizing a GPT-4 evaluation scheme and compare it against several other approaches, including Supervised Fine-Tuning (SFT), SPIN, and few-shot prompting. The authors conclude that DITTO is particularly advantageous for training LLMs to adopt specific writing styles or user preference tuning, outperforming other methods in these areas."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The authors did not investigate potential side effects, such as performance degradation on other benchmark datasets, after training with DITTO. Since the LLM is fine-tuned exclusively on targeted demonstrations, there’s a risk of significant performance drops in broader tasks. It is essential to preserve the LLM's original knowledge and abilities while adjusting its output to align with specific style and preference.\n- Also they overlooks the computational inefficiency of iterative training in an online imitation learning framework. This process requires substantial time and GPU resources, as it involves initializing the policy 𝜋0 (equivalent to SFT), generating responses from 𝜋0, training with DPO, and then iterating to produce 𝜋1, and so forth. These steps are difficult to reproduce and demand more computational power than SFT baseline. Furthermore, achieving faster response generation in the trained LLM would require additional engineering efforts. Although DITTO improves data efficiency, it is also crucial to consider computational efficiency, given the high costs of training and generating responses with LLMs.\n- The authors did not explore the limitations of the DPO algorithm or other potential approaches for training LLMs in a Reinforcement Learning from Human Feedback (RLHF) framework. It is known that the DPO algorithm can pose risks when training on preference datasets, as it may forget data from the \"winning\" side due to inherent mathematical issues."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Section 3, you introduce the core method of DITTO and compare it with online imitation learning. What is the purpose of Section 3.3?\n2. How did you determine the percentage distribution of the paired data, specifically the 70% online data, 20% replay data, and 10% intermodel pair data?\n3. In Table 1, for the CMCC dataset, why do the zero-shot and few-shot results from GPT-4 appear the same in column a9, both at 40.28%? Additionally, why do both SFT and DITTO show results of 81.94% without any improvement? How would you comment on this?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper introduces DITTO, a novel method designed to guide LLMs toward specific settings for effective customization, achieving sample efficiency with fewer than 10 demonstrations. DITTO outperforms strong baselines, including SFT and GPT-4 with few-shot prompting. Additionally, a detailed user study further reinforces the reliability of DITTO."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper identifies a key issue: current LLMs, aligned to represent the collective voice of many, often fail to align specifically with any individual preference due to contradictions among them. While guiding LLMs toward a general preference is feasible, it requires substantial preference data. The authors propose a method, DITTO, to align LLMs to specific settings using fewer than 10 demonstrations drawn from existing interaction logs or direct edits to LLM outputs. These demonstrations are treated as \"golden\" examples, while outputs from current and previous LLM checkpoints are rejected. Through author attribution tasks and user studies, they demonstrate the effectiveness and sample efficiency of DITTO."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The static experiments in Section 4.1 are not particularly convincing. Have you considered testing additional baselines or employing other automatic evaluation methods, such as calculating sentence embedding similarity to compare styles?\n2. Have you evaluated DITTO on more benchmarks or tested its generalization ability? I noticed that only three authors were used for validation or testing. Can the DITTO method generalize to tasks beyond writing?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How does the method scale with larger LLMs, and are there specific challenges in aligning models that have stronger RLHF priors?\n- How does DITTO perform in broader tasks that require more generalized alignment rather than user-specific customization? Could you provide insights into its scalability beyond niche tasks?\n- How sensitive is DITTO to the quality of demonstrations? Could you elaborate on strategies to mitigate the impact of poorly constructed or ambiguous demonstrations?\n- In terms of computational efficiency, how does DITTO compare with existing approaches when scaling to larger datasets or more complex tasks?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- DITTO introduces a new approach to user-specific alignment by using a small set of demonstrations to generate online comparison data. This is innovative and practical for settings where data collection is costly.\n- The paper provides a strong theoretical justification for DITTO, grounding it in online imitation learning. The derivation explains why DITTO can outperform traditional methods like SFT in low-data scenarios.\n- The paper completes various experiments, demonstrating DITTO’s effectiveness across static benchmarks (e.g. email writing, news articles) and in a user study. The method consistently outperforms traditional techniques like few-shot prompting and SFT, providing convincing empirical support.\n- The authors have made the code accessible, allowing for others to reproduce and validate their results"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method called Demonstration Iterated Task Optimization (DITTO), designed to align large language models (LLMs) with user-specific preferences using a minimal number of user-provided demonstrations. This method eliminates the need for large-scale datasets typically required for supervised fine-tuning or RLHF. The paper claims that DITTO can significantly improve the alignment of LLMs for user-driven tasks and offers a practical solution for customizing language models. The paper explains their theoretical insights from online imitation learning with practical implementations, demonstrating effective customization for real-world applications like email writing and author-specific content generation.\n\nI recommend accepting this paper, as it tackles a significant challenge and presents an interesting solution that is well-supported in theory and through empirical evidence. This method can have a strong impact on making LLMs more customizable and accessible. However, I strongly recommend that the author provide further empirical evidence that demonstrate the effectiveness of this method on more tasks/datasets - this would significantly improve the quality of this work.\n\nComments:\n- The theoretical grounding in online learning is well-detailed and provides a clear explanation as to why the method works. The empirical validation further strengthens these theoretical claims.\n- The proposed method is designed for practical applications. This is an important factor when applying LLMs in real-world situations.\n\nSuggestions for improvement\n- Consider expanding the evaluation to include a wider range of domains. Specifically, investigate tasks tasks that require general alignment rather than user-specific tasks. This would provide a clearer picture of DITTO’s versatility and scalability. 
I think even negative results would be very informative.\n- It would be helpful to include a more detailed analysis of how the quality of demonstrations impacts performance. This could include testing DITTO with intentionally ambiguous or low-quality demonstrations to assess robustness.\n- The limitations section could be expanded with a deeper discussion on the trade-offs of using few-shot demonstrations. Exploring scenarios where the approach might fail or require adjustments would strengthen the paper’s transparency.\n- A more granular analysis of failure cases would add depth to the evaluation. This could involve detailed case studies highlighting scenarios where the method struggles."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Limited exploration is done into how DITTO scales to broader and more diverse tasks that may require a more generalized alignment. This is seen in how the experiments primarily focus on a small number of demonstrations.\n- DITTO’s approach heavily relies on the quality of user-provided demonstrations. If demonstrations are unclear or poorly constructed, the alignment could suffer. This could limit DITTO’s real-world applicability when high-quality demonstrations are not readily available.\n- The paper primarily focuses on text-based tasks. However, it would be interesting to understand the effectiveness of DITTO’s method in aligning LLMs in other modalities or more complex reasoning situations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Some questions I had while reading the paper (some might be out of scope for this paper or for the rebuttal period):\n\nDoes this work for 1-shot learning?\nDo all fine-tuning runs use LoRA?\nDoes this work better for highly realistic/plausible synthetic data? Does this look indistinguishable to an LLM from some other real distribution, even after the LLM is fine-tuned? That would be a really compelling use case for this (to help with doing automated red teaming, with realistic looking inputs that closely match the target data distribution)\nDoes it help few-shot to explicitly instruct needed to be very close to few-shot examples in style? Or was that just tried for fitting zero-shot?\nHow do you choose hyperparameters with such a small number of examples? Like SFT/DPO ones? If you were doing any hyperparam selection, you might run into issues like described here: https://arxiv.org/abs/2105.11447\nHow did you pick the 20/80 data mix? How robust is that across datasets/settings?\nHow well does DITTO work in higher data regime? That would be the most compelling result, if it could replace RLHF when using large amounts of data (which is how it's often used in practice)"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-The method outperforms few-shot learning, which is surprising/impressive to me, I didn't expect that and it was one of my main doubts about the method from just reading the abstract. I think this could be a pretty compelling method potentially for doing automated red teaming, where you'd want to match some target data distribution as closely as possible, in order to elicit the most representative behavior from the model you're red teaming. This could then help with eliminating backdoors or sleeper agents (https://arxiv.org/abs/2401.05566), which is probably the application of this that I think most stands out to me as different from what is covered from prior work (I'm not that aware of many effective supervised learning alternatives like DITTO)\n-The method seems useful for settings where fine-tuning an existing RLHF model (though I'm a bit less clear how broadly this would work / if this would replace RLHF for finetuning across lots of tasks or just some specific ones related to adapting the model's style or writing)\n-Well-written paper, easy to follow\n- The approach itself is clever, and it's interesting/surprising to me that it works well\n-Nice that there are some human eval results, those helped to convince me that there are real gains with the method over few-shot learning (where it's clear the model hasn't adapted its behavior much).\n-Likewise, the samples in the appendix are quite helpful for the above too\n-Analysis in Table 3 is great for explaining why this might work\n-Section 5 analysis is great/helpful.\nConnecting DITTO to imitation learning is helpful for explaining why this is interesting, and why it would work.\n\nI would give this paper a 7/10 rating, somewhere between marginal accept and accept (but the form would only allow a 6 or an 8)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an alternative to RLHF which is effective at learning from a few demonstrations. The paper shows that this method outperforms supervised finetuning and few-shot learning. The paper shows human eval results, qualitative samples, and various quantitative evals to show that DITTO is effective at getting models to adapt to a new task based on a few examples. The paper also discusses the connection between DITTO and imitation learning, explaining why the method might outperform just using supervised learning (as is common in LLM work) to do imitation learning, and why you might even expect to get better performance than the existing examples. The algorithm basically works by using the LLM to generate examples that are assumed to be worse than the demonstrations, then constructing pairwise preferences between the LLM generated samples and the expert demos (and possibly between earlier vs. later LLM checkpoints in the training run), then using DPO to learn from the constructed pairwise ranking."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-Would be most compelling if evaluated on higher expertise tasks: like coding complex tasks or forecasting. Seems like one of the main areas of relevance, given that this is where we might expect to be in the low-data regime where we want to get the most of our a small amount of (high-quality or hard to obtain) data. I also expect it to be harder/more impressive to see gains in these domains. Currently, the tasks are fairly basic and all writing related. Enough for a proof-of-concept but probably not complex enough to make me want to use DITTO instead of RLHF everywhere.\n-One of the most interesting applications of the method would be to get generalization beyond what the demos are able to provide, it would be very compelling if this method led to generalization beyond the demos (which seems to be potentially possible if the method is working well, based on the discussion in the paper, if I understand correctly)\n-The paper would ideally compare to Constitutional AI, another popular RLHF-alternative. (Though this could take some time to reimplement, if there aren't publicly available implementations). More generally, I'm unsure if the method outperforms using principles to guide/instruct the model (especially if those principles are derived by an LLM from the few examples, which would be most comparable to the existing method/setting). 
The results showing that prompting doesn't fix all the issues help here, but more sophisticated methods like Constitutional AI could still outperform DITTO here\n- I'd love to see scaling trends on how well this works across model sizes -- it would be most compelling if the gains in task reward over supervised learning / few-shot learning seem to improve as models grow larger, rather than shrink\n- I'm not sure but it's possible to me that this method partly beats few-shot learning on RLHF models because RLHF models are resistant to adaptation with few-shot examples, but that the method wouldn't outperform few-shot learning if using pretrained LLMs (or maybe even just instruction-tuned/supervised learning finetuned models). That could potentially be a helpful experiment to run (and more compelling if DITTO also outperforms other adaptation techniques when comparing on a pretrained language model)\n\nMinor:\n-Would be nice to show at least 1-2 examples in main paper, to show the sample quality. (Having these in the appendix is helpful though)\n-The method could be explained more clearly sooner in the paper, I think that I didn't understand the actual algorithm until page 4 or so, when it would be nice to understand it from the intro or abstract itself"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We highlight the effectiveness of using a very small number of demonstrations (<10) for task or user-specific alignment; and contribute a method that iteratively aligns an LLM to a user’s demonstrations by treating default outputs as dispreferred."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024aligning,\ntitle={Aligning Language Models with Demonstrated Feedback},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1qGkuxI9UX},\nnote={under review}\n}"
},
"abstract": {
"value": "Language models are aligned to emulate the collective voice of many, resulting in outputs that align with no one in particular. Steering LLMs away from generic output is possible through supervised finetuning or RLHF, but requires prohibitively large datasets for new ad-hoc tasks. We argue that it is instead possible to align an LLM to a specific setting by leveraging a very small number ($<10$) of demonstrations as feedback. Our method, Demonstration ITerated Task Optimization (DITTO), directly aligns language model outputs to a user's demonstrated behaviors. Derived using ideas from online imitation learning, DITTO cheaply generates online comparison data by treating users' demonstrations as preferred over output from the LLM and its intermediate checkpoints. We evaluate DITTO's ability to learn fine-grained style and task alignment across domains such as news articles, emails, and blog posts. Additionally, we conduct a user study soliciting a range of demonstrations from participants ($N=16$). Across our benchmarks and user study, we find that win-rates for DITTO outperform few-shot prompting, supervised fine-tuning, and other self-play methods by an average of 19\\% points. By using demonstrations as feedback directly, DITTO offers a novel method for effective customization of LLMs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"personalization",
"few-shot learning",
"human computer interaction",
"alignment"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a9b2f82fd92bdf2bf0f047b5c84d15fd4de72908.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Aligning Language Models with Demonstrated Feedback"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1qP3lsatCR | NetMoE: Accelerating MoE Training through Dynamic Sample Placement | main | Active | Mixture of Experts;All-to-All communication;Distributed training | infrastructure, software libraries, hardware, systems, etc. | 3;6;6;8;8 | 4;4;2;3;4 | 3;3;3;4;4 | 3;3;3;3;4 | 3;3;3;3;3 | 6.2 | 3.4 | 3.4 | 3.2 | 3 | -0.190941 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "see Weaknesses.\n\n**typo:** In table 1 *\"number of of nodes\"*."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "* **The problem is clearly motivated:** The challenges of routing samples in MoE are clearly written, making the goal of this paper feel natural after reading the first two sections.\n* **Challenges of ILP solving are made clear, and the proposed solution seem effective:** The building to the final approximate method is clear and well motivated through empirical results in Tab.4. The optimization gap between the optimal and the approximate solution seem reasonable in Fig.6.\n* **Non negligible empirical benefits of the method are demonstrated:** The speedup brought by NetMoE compared to Dynamic Expert Placement methods seem significant in the experiments displayed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents NetMoE, a novel framework designed to optimize the routing of samples in Mixture of Experts (MoE) models by taking into account the actual inter an intro-node communication bandwidth. The goal is to minimize the *time* the routing process takes, which usually amount to minimize inter-node expert routing in the All-to-All communications, while being mathematically equivalent to the standard routing procedure. This paper formulates the problem as an integer linear programming optimization problem, and relaxes it so that an approximate solution can be found sufficiently fast dynamically at each stage of the MoE. Experimental results demonstrate that NetMoE outperforms existing MoE training systems, achieving up to a 1.67x speedup in training efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* **Notations and problem formulation hard to follow:** Many notations are introduced, making the reading of section 3 a bit cumbersome. Maybe putting some of the mathematical details and ILP formulations in Appendix could help lighten the section and make it more readable?\n* **No comparison with methods using a modification in the model definition:** While methods introduced in Sec. 2.2 change the convergence property of the model in terms of iterations, the fact that they allow for more iterations per time unit could counter this. Would it be possible to also compare NetMoE to these methods (e.g., in terms of *\"time to reach a certain level of perplexity\"*)?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The sequence adjustment is done per iteration and per layer and composable of reducing the all-gather communication of this layer and all-scatter of next-layer (Eqn. 7). The reduction from all-gather is clear, but I don't understand how it is even possible to reduce the all-scatter costs of *next-layer* as we even don't know what is the routing probability due to an attention block before the MoE. \n\n2. I don't understand how does expert inline residual fix the position issues of residual stream (it might be helpful to give a diagram as line 12 in Algorithm 1 is not sufficiently clear)"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well motivated and the writing is pretty clear. I have no difficulty on understanding the overall idea of sample adjustment (from Figure 2) and the optimization challenges & solutions (Equation 5, 8, 10) upon the first time of reading. \n\n2. Clever design: reformulating the ILP to a weighted bipartite matching / assignment problem and using Hungarian algorithm that has shorter solving time than communication time (so we can have actual speedup)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The whole idea of NetMoE is that we want to reduce the All-to-All scatter & gather communications by reducing the amount of cross-node/device routing of tokens. To achieve this we will adjust the sample/sequence that would minimize the inter-node & intra-node communication volume. This is (approximately) solvable as a weighted bipartite matching / assignment problem between training samples and machines, as shown in Eqn 9 and 10. \n\nThe authors conduct experiments on GPT pretraining and compare with dynamic expert placement baselines as FasterMoE and SmartMoE. NetMoE generally has higher speedup (Figure 5) and the actual speedup is close to the theoretically optimal speedup (Figure 6)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I don't have strong opposition to the overall idea of sequence adjustments for MoE but I believe the scope and limitations should be more clearly defined:\n\n1. The authors should provide a summary statistics on how many sequences are actually adjusted across nodes/devices during training and how it is correlated with the MoE specialization / router probability. \n\n2. A small-scaled ablation experiment is definitely needed to show if this communication volume reduction is robust w.r.t. the choice of dataset mixtures, as the performance of NetMoE might be data dependent.\n\n3. Table 4 is concerning because the limit of KM algorithm to use less time than all-scatter is $I/J \\sim 24$ (24 is my scaling extrapolation of Table 4's $I/J = 16$ results as KM's time complexity scales cubically w.r.t. # nodes, and $(24/16)^3 * 1 > (24/16) * 2$). A batch size of 24 per device is not a sufficiently large number."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses. And\n\nMoving the sample should incurs more movements compared a subset of the tokens in the sample. Why moving sample gives less communication overhead?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper tackles the MoE training efficiency from a novel perspective, that is the data locality perspective. It dynamically locates the data to reduce the inter-node communication in all2all gathering. \n2. The results shows improvements compared with baselines, signifying the effectiveness of the method.\n3. The modeling of the networking problem is inspiring to the reviewer."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to use a dynamic sample placement to speed up the MoE training. Specifically, this paper adopts a mathematical model to simulate the number of inter-node communication and intra-node communication and solve the integer programming problem to figure out the best sample allocation of the sample to reduce inter-node communication inspired by the locality in networks. This paper successfully reduces the all2all gather communication in training and achieve speed up."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The scalability of the method is questionable, e.g., the improvements for 32 GPUs is smaller then the improvements for 16 GPUs. This leads to the question that what will happen if we continue increasing the number of GPUs? Will the improve converges to zero? \n2. When there are more GPUs, the communication should take a larger portion in the total time? Why the method here, which primarily focuses on optimizing communication, have less significant improvements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "•\tHow does the data locality used in the experiment compare to typical training scenarios, and what impact might this have on expected performance?\n\n•\tWhy is inter-node expert parallelism favored over pipeline or other model parallelism techniques in this context?\n\n•\tIs an auxiliary loss mechanism incorporated to mitigate expert selection skew, and if so, does it affect the performance of NetMoE?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "•\tTheoretical Rigor: This paper is thorough in formulating the communication challenges and solution as an optimization problem, with clear problem modeling and a detailed, polynomial-time solution.\n\n•\tPracticality: The method can integrate with existing MoE training systems while enhancing training efficiency.\n\n•\tEmpirical Validation: Experimental results across various configurations validate NetMoE’s improvements in All-to-All communication and overall training efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a topology-aware sample placement scheduling approach to optimize All-to-All communication in MoE training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "•\tExperimental Context: The paper could benefit from a more comprehensive discussion on the \"data locality\" conditions required to achieve the claimed speedups in real-world setups. Also, details on the distribution of data locality across real-world training tasks (and the one used in experiment) would give more insight into NetMoE's practical performance.\n\n•\tDiscussion on Experiment Setup: Given that inter-node expert parallelism can incur heavy communication costs, it would help if the authors provided reasoning for prioritizing inter-node expert parallelism over potentially less intensive techniques like a hybrid one: intra-node expert parallelism + inter-node pipeline parallelism. \n\n•\tMore Baseline Comparisons: Additional baselines, particularly concerning dynamic expert placement, would highlight NetMoE’s comparative advantages and limitations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Same as Weaknesses.\nQ1. In what scenarios would one choose a data perspective approach over a model perspective approach?\n\nQ2. Please revise your solution to ensure it aligns with the stated assumptions.\n\nQ3. Explain why the Kuhn-Munkres (KM) algorithm with highest time complexity is the best choice for this problem.\n\nQ4. Conduct additional experiments to demonstrate the impact of node and device variables on performance."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The author demonstrates strong writing skills, clearly stating the problem and solution. The system diagram is also very clear.\n2. They offer a new perspective on communication efficiency in distributed MoE by exploring how data placement can impact efficiency.\n3. Experiments are provided to validate the effectiveness of their approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Communication efficiency is a significant challenge for training efficiency in distributed Mixture of Experts (MoE) models. Unlike other papers that address this issue from a model perspective, this paper offers a solution from a data perspective. It introduces NetMoE, a method that reassigns data samples to different nodes based on locality to minimize all-to-all communication costs. The problem is formulated as an integer programming problem, and the authors derive a polynomial-time solution. Experimental results further validate the effectiveness of their approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation is not clearly articulated. In the motivation section, the authors mention that previous works focus on the model perspective and do not explore the data perspective, which does not convey the true motivation. Instead, it should emphasize that in certain scenarios, the model perspective may be insufficient, while a data-focused approach can achieve better efficiency.\n\n2. The problem formulation and subsequent assumptions appear contradictory and I suspect the effectiveness of method. In Equation (1), the communication cost is defined as the maximum of intra-node and inter-node costs. However, in Section 3.2, the authors assume the maximum is the inter-node cost and address it first. This raises questions for the reviewer: if the inter-node assignment is fixed but minimizing intra-node communication results in a higher total cost than inter-node, this may lead an undesirable solution.\n\n3. The authors transform this problem into a weighted bipartite matching problem and solve it using the Kuhn-Munkres (KM) algorithm. However, based on the reviewer's knowledge, KM is sensitive to the sample input and has a time complexity of \nO(N^3) which may not be ideal for large models. The authors should justify their choice of KM as the solver.\n\n4. The experiments do not fully validate the approach. The impact of node and device count on performance is not examined. For instance, if there are very few devices in each node but many nodes overall, inter-node communication may dominate the time. Conversely, if there are numerous devices within fewer nodes, intra-node communication could become the dominant factor in training time."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024netmoe,\ntitle={NetMoE: Accelerating MoE Training through Dynamic Sample Placement},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1qP3lsatCR},\nnote={under review}\n}"
},
"abstract": {
"value": "Mixture of Experts (MoE) is a widely used technique to expand model sizes for better model quality while maintaining the computation cost constant. In a nutshell, an MoE model consists of multiple experts in each model layer and routes the training tokens to only a fixed number of experts rather than all. In distributed training, as experts are distributed among different GPUs, All-to-All communication is necessary to exchange the training tokens among the GPUs after each time of expert routing. Due to the frequent and voluminous data exchanges, All-to-All communication has become a notable challenge to training efficiency.\n\nIn this paper, we manage to accelerate All-to-All communication in MoE models from the training sample perspective, which is unexplored so far. In particular, we put forward the observation that tokens in the same training sample have certain levels of locality in expert routing. Motivated by this, we develop \\name, which takes such locality into account and dynamically rearranges the placement of training samples to minimize All-to-All communication costs. Specifically, we model the All-to-All communication given the sample placement and formulate an integer programming problem to deduce the optimal placement in polynomial time. Experiments with 32 GPUs show that \\name achieves a maximum efficiency improvement of $1.67 \\times$ compared with state-of-the-art MoE training frameworks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Mixture of Experts",
"All-to-All communication",
"Distributed training"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/889cafb278bdecbeda374b93bfeb5adedb458d24.pdf"
},
"presentation": null,
"primary_area": {
"value": "infrastructure, software libraries, hardware, systems, etc."
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "NetMoE: Accelerating MoE Training through Dynamic Sample Placement"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1qbZekXGrp | Generation and Comprehension Hand-in-Hand: Vision-guided Expression Diffusion for Boosting Referring Expression Generation and Comprehension | main | Active | Referring expression generation;referring expression comprehension;vision-guided expression diffusion;vision-text condition | applications to computer vision, audio, language, and other modalities | 3;5;6;8 | 4;4;3;3 | 2;3;3;3 | 2;3;3;3 | 3;2;2;4 | 5.5 | 3.5 | 2.75 | 2.75 | 2.75 | -0.83205 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper explores the potential of applying diffusion models to the REG task, an area that has been largely underexplored.\n2. The authors introduce a vision-text conditioning module and a token selection strategy, which significantly enhance the alignment between visual and textual information.\n3. Extensive experiments and ablation studies validate the generalization capability and effectiveness of the proposed method’s design choices."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper explores the integration of referring expression generation (REG) and comprehension tasks. To address challenges such as the scarcity of image-expression pairs in training data for REC and the limitations of the REG methods in bridging visual and textual domains, the authors propose a novel vision-guided expression diffusion model for REG. Extensive experiments demonstrate that the proposed method produces high-quality and diverse generated data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In the visualization results shown in Figure 3, the response labeled as “recover” in the first sample of the third row appears to be an error, as does the response in the last sample of the same row. These results indicate that while the current method enhances diversity, it still includes some erroneous responses. How do you ensure the quality of the generated responses?\n2. It is intriguing that the ViT backbone of CLIP is considered as a unified vision encoder in MLLM. Could this architecture produce different patterns and further improve performance?\n3. The definition of CFG is missing in Table 1.\n4. While the paper provides extensive interpretation of the experimental results, it lacks an in-depth analysis of the reasons behind the observed patterns in the results.\n5. The writing is somewhat verbose. For instance, in Subsection 3.6, the second sentence is redundant as it repeats the information in the first sentence.\n6. Some equations could be improved; for example, Equations 9 and 10 differ by only one symbol.\n7. There are a few typos, such as a missing period on line 82 and an incorrect number on line 360."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The reviewer would like to receive responses from the authors about the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- To the best of the reviewer's knowledge, this is the first study to introduce language models using diffusion models into REG and REC.\n- The proposed method has been evaluated using multiple datasets and multiple ablation studies, and has shown a certain degree of effectiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses referring expression generation (REG) and referring expression comprehension (REC). In particular, the paper proposes a method for REG that utilizes a language model with a diffusion model and experimental results for REC that augment the dataset with the REG method. The experiments are performed on five representative datasets, three RefCOCOs, Flickr30k, and Refclef, and show that the accuracy of the proposed method for REG is better than the existing methods and that the augmentation of the dataset by the proposed method contributes to multiple REC methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed method is composed of a straightforward combination of existing methods. The Cross-Attention and Token Selection Strategy that make up the proposed method, Vision-Text Condtion, are known to the community, and the Minimum Bayes Risk (MBR) and classifier-free guidance (CFG) that are ablated in Table 1 are not newly proposed in this paper. (In addition, the REG performance of VIE-DM w/o CFG is reported as an ablation study, but the author forgot to explain what CFG stands for, so it is only the reviewer's guess that CFG means classifier-free guidance.)\n- As mentioned in the Introduction, the existing methods compared in this paper adopt the transformer-LSTM or CNN-LSTM framework. In other words, the proposed method differs from other methods not only in that it formulates a language model using a diffusion model, but also in that it uses a transformer-based decoder. It is not clear how the combination of a transformer-based decoder and diffusion model outperforms only a transformer-based decoder. Without this comparison, it is not possible to show the effect of introducing a diffusion model into REG.\n- In line 416, it is claimed that “These results demonstrate the robust data diversity and quality of our VIE-DM.” However, it is misleading to make such a claim of diversity based on Table 1, which discusses the similarity to the ground truth using Meteor and CIDEr. The argument regarding Table 3 is more convincing, so this claim should have been made elsewhere.\n- As the authors acknowledge, Table 5 shows that augmenting the REC data using VIE-DM only leads to a limited improvement due to the accuracy of the synthesized expressions. Therefore, it is essential to show whether VIE-DM is superior to existing approaches. On the other hand, it is not clear how much accuracy is improved when the data set is augmented using methods other than VIE-DM. The idea of amplifying REC data using REG methods has already existed since [Mao+, CVPR 2016]. 
If it is not possible to show how much REC accuracy is improved when expressions are augmented using methods other than VIE-DM, it will not be possible to understand the extent to which this paper contributes to REC."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Introducing a diffusion model to REG is innovative. VIE-DM generates diverse, high-quality synonymous expressions that align with both the visual and textual context of target objects, enriching REC datasets.\n2. The experimental design is well-structured, including ablation studies. Extensive experiments on five datasets demonstrate significant improvements in REC and REG model performance, achieving state-of-the-art results.\n3. The paper is clearly written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the Vision-guided Expression Diffusion Model (VIE-DM) to address limitations in referring expression generation (REG) and comprehension (REC) tasks, particularly the scarcity and low diversity of image-expression pairs in existing datasets. The model includes a vision-text condition (VTC) module and a token selection mechanism to mitigate feature discrepancies between the visual and textual domains."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "No obvious disadvantages were seen. \nLike any research work, this paper likely has its own limitations, though they are not explicitly discussed. Including a section on potential limitations would provide a more balanced perspective."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why does performance on certain metrics improve after losing CFG in Table 1?\n2. What is the number of samples in each augmented dataset? This needs to be reported."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The method is described in detail, and the motivations are fairly well-founded."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Existing REC datasets often contain insufficient semantic pairs for training, hindering the REC model's generalization to unseen referring expressions. Additionally, REG methods, due to limited capacity, frequently struggle to bridge the visual and textual domains, resulting in low quality and diversity of generated expressions. In this work, the authors introduce diffusion models into the referring expression generation task, aligning visual features of varying granularity with noisy text. The experiemts are conduted on benchmark datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some recent representative works, such as CCL[1], are not compared, even though these works use similar ideas to enhance REC performance through REG. \n[1] Cycle-Consistency Learning for Captioning and Grounding. AAAI 2024.\n2. Failure cases are lacking; diffusion data generation is usually unstable, and the authors need to analyze this point.\n3. Statistics on model parameters, training time, and inference time are required."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "we propose a novel VIsion-guided Expression Diffusion Model (VIE-DM) for the REG task, where diverse synonymous expressions adhering to both image and text contexts of the target object are generated to augment REC datasets."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024generation,\ntitle={Generation and Comprehension Hand-in-Hand: Vision-guided Expression Diffusion for Boosting Referring Expression Generation and Comprehension},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1qbZekXGrp},\nnote={under review}\n}"
},
"abstract": {
"value": "Referring expression generation (REG) and comprehension (REC) are vital and complementary in joint visual and textual reasoning. Existing REC datasets typically contain insufficient image-expression pairs for training, hindering the generalization of REC models to unseen referring expressions. Moreover, REG methods frequently struggle to bridge the visual and textual domains due to the limited capacity, leading to low-quality and restricted diversity in expression generation. To address these issues, we propose a novel VIsion-guided Expression Diffusion Model (VIE-DM) for the REG task, where diverse synonymous expressions adhering to both image and text contexts of the target object are generated to augment REC datasets. VIE-DM consists of a vision-text condition (VTC) module and a transformer decoder. Our VTC and token selection design effectively addresses the feature discrepancy problem prevalent in existing REG methods. This enables us to generate high-quality, diverse synonymous expressions that can serve as augmented data for REC model learning. Extensive experiments on five datasets demonstrate the high quality and large diversity of our generated expressions. Furthermore, the augmented image-expression pairs consistently enhance the performance of existing REC models, achieving state-of-the-art results."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Referring expression generation",
"referring expression comprehension",
"vision-guided expression diffusion",
"vision-text condition"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/45c5791c10de97c08053dc17631354b5bf0d6b7f.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/864300c1ff7810d58330fd284d8e05d8e109e19a.pdf"
},
"title": {
"value": "Generation and Comprehension Hand-in-Hand: Vision-guided Expression Diffusion for Boosting Referring Expression Generation and Comprehension"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1qgZXeMTTU | Coreset Spectral Clustering | main | Active | spectral clustering;kernel k-means;coresets | learning on graphs and other geometries & topologies | 3;6;6;8;10 | 4;2;3;3;4 | 2;3;3;4;4 | 2;2;3;3;3 | 3;3;3;4;4 | 6.6 | 3.2 | 3.2 | 2.6 | 3.4 | 0.045835 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "-The authors say their speed up is from nk to nd where d is the average degree in some sense. In the abstract this is a bit confusing: but is this necessarily a speedup; what if k is relatively small but the average degree in the kernel matrix leads to more non-zero entries? Perhaps this is good to clarify early on as you do later in the main body cause the reader might be confused.\n\n-While reading the paper, many ideas used in the analysis are actually coming from (and are cited) from the prior work by Jiang et al. I was curious if the authors could elaborate on what ideas were the novel part of the paper?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-nice idea to rely on kernel sparsity that yields the first coreset construction for kernel spaces and leads to speed up which is especially useful for large graphs with many clusters.\n\n-the two main protagonists here which are spectral clustering and kernel k-means are studied often separately, and I view this approach of merging ideas/techniques interesting.\n\n-the coreset spectral clustering algorithm is interesting and gives a clean result statement: an \\alpha-approximation of the normalized cut problem on the coreset graph will in fact give an O(\\alpha)-approximation of the normalized cut problem on the original graph. To me this is a very useful and interesting statement as it can be used as a black box and lead to practical results as well."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper develops new tools in coreset construction merging ideas from two different problems: one is the coresets for k-means and kernel k-means clsutering and the other is spectral clustering, hence the name Coreset Spectral Clustering of the paper.\n\nThe main result is to give an approximation algorithm for the problem of normalized cut based on coresets. Specifically, they can approximately solve the problem on the coreset graph and prove that this is enough to get a reasonable approximation on the original input graph. The authors also perform experiments and demonstrate that their approach leads to asymptotically faster results on large real-world graphs with many clusters beating prior coreset kernel k-means approaches for sparse kernels.\n\nThe second result of the paper is to speed up the running time of the current state-of-the-art coreset algorithm for the problem of kernel k-means on sparse kernels, where the speed up depends on the average degree of the graph."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-novely: while the paper draws inspiration and combines cleverly prior works on normalized cut, kernel clustering and coresets, I wanted to point out that the current paper seems to heavily rely on ideas and techniques that were developed before. Of course the authors had to cleverly combine them in order to get the clean statement as their main result. I also read parts of the technical proofs in the appendix, and I believe that in terms of techniques the paper is a bit weak. Perhaps the authors could elaborate on what crucial ideas in terms of techniques were the novel aspects of this work. Specifically the analysis of Jiang et al. seems to be doing the heavy lifting in many parts of the paper, and conditioned on that paper, I believe the current technical contribution appears to be slightly less solid. This is my only concern about the paper, otherwise I do like the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "The result: Why are the derivation and experiments discussing only NCUT?\nThe equivalence of Dhillon04 was extended in Dhillon07 to other criteria,\nin particular RatioCut. It should also apply to the newer stochastic box model.\n\n\nExperimental results: why is there no comparison of the NCUT values\nthat were obtained in the experiments? The current evaluation is\nin terms of ARI, but this is not what the algorithms attempt to maximize."
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is very nicely written. It describes a result that appear interesting\nin theory and useful in practice."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper leverages the equivalence between kernel kmeans and spectral\nclustering to improve spectral clustering. As a secondary result they also\nimprove coreset construction for sparse matrices.\n\nThe equivalence between kernel kmeans and spectral clustering is well known.\nIt is, therefore, natural to expect improvements in kernel kmeans\nalgorithms to produce better spectral clustering. Results along this line\nwere recently described by Jiang24, performing kernel kmeans on\nweighted sampled data (coreset).\n\nThe paper argues that improving coresets do not necessarily lead to\nimproved spectral clustering because the kernel kmeans typically gets\nstuck in a local minimum. By contrast, spectral clustering computes\nan approximation to the global optimum, and does not gets stuck in local\nminima.\n\nUsing this key observation the authors propose a novel framework of going\nback and forth between \nthe graph and the points in high dimensional space that are represented\nby the coreset. This improves the speed, but not the quality of the clustering \n(as measured by NCUT). The paper shows that the reduction in quality\nis linear."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "An important side result is the fast construction of a coreset that can be used\nfor kernel k-means clustering. The improvement comes from a fast\n$D^2$ sampling technique. I believe that there are other, competitive\nfast sampling techniques and I was missing a comparison.\nHere is an example:\n\nChib and Greenberg, 1995, Understanding the Metropolis-Hastings algorithm,\nThe American Statistician.\n\n\nIn addition please see the questions below."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please address the concern that I raised in the weaknesses section."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The contribution of the paper is solid, with the main idea being combining the approaches of coreset construction and spectral clustering. The utilization of sparsity to improve the running time of clustering algorithm is also well-executed.\n- The presentation is overall excellent, with all of the contributions stated clearly. Schemes and easy-to-read pseudocode are very helpful with understanding the approach.\n- The experimental section is detailed and well-organised."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an algorithm coreset spectral clustering algorithm for k-means clustering. This is done by first converting the input graph into a k-means problem instance, constructing and $\\epsilon$-coreset for this instance, then solving the spectral clustering problem on the reduced graph. A second contribution is an algorithm for fast $D^2$-sampling utilized in coreset construction, which results in an coreset construction algorithm with running time $\\widetilde{O}(n d_{avg})$."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It is somewhat unclear how often it is desired to solve spectral clustering on sparse data, or whether settings of interest have $d_{avg} < k$. I would like the authors to add an overview on how clustering methods are used in the empirical research, for example social network analysis, in the introduction or related work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- CSC is optimized for sparse graphs, where the sparsity structure significantly reduces both computation and memory usage. By using a small, representative subset of data (the coreset), CSC scales well with data size and can handle graphs with millions of nodes and thousands of clusters. This scalability makes CSC suitable for large datasets in social networks, biological clustering, and sensor network analysis, where traditional methods would struggle.\n\n- Standard spectral clustering can become infeasible with large, dense similarity matrices due to the high demands on computation and memory. CSC addresses this by working with a sparse kernel matrix and clustering only on the coreset, significantly reducing matrix size and computational cost. This efficiency enables CSC to process large datasets on standard hardware, which would otherwise require extensive resources for traditional spectral clustering.\n\n- A smaller coreset speeds up computation, while a larger coreset captures more nuances in the data structure. This adaptability is useful for applications with specific accuracy or runtime needs, making CSC versatile across different types of data and clustering goals."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper tackles the challenges of clustering large, sparse datasets, where traditional spectral clustering methods can be computationally demanding. While spectral clustering is widely used for identifying non-linear cluster boundaries, its dependence on dense similarity matrices restricts scalability, particularly when dealing with numerous clusters. The authors introduce Coreset Spectral Clustering (CSC), a method that merges the efficiency of coreset sampling with the accuracy of spectral clustering, achieving a substantial speedup while maintaining clustering precision."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The accuracy of CSC’s clustering largely depends on the representativeness of the coreset. To achieve high-quality clusters, the coreset need to accurately capture key structural and distributional aspects of the dataset. In datasets with uneven distributions or subtle data patterns, it could be difficult to create a coreset that fully represents the original data, and even minor inaccuracies could impact clustering results.\n\n- CSC relies on an initial similarity or nearest-neighbor graph, and parameters such as the number of neighbors (k) or distance threshold (ϵ) can significantly affect clustering performance. Choosing suboptimal values for these parameters may lead to an inaccurate initial graph structure, impacting the quality of the final clusters."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In the experimental part, the ARI performance of yours is much better than the green baseline (which is the method of [Jiang et al. ML' 24]). But I think your result is mainly based on the green baseline and you improve their running time. So it makes sense that your method is faster. But why your ARI is so much better than the green baseline?\n2. In the experimental part, you mention that you use the nearest neighbor graphs of MNIST. How to construct such a graph on MNIST? Is it a nearest neighbor graphs based on Euclidean distance?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed technique of constructing a coreset is quite useful when k is large and the similarity is sufficiently sparse."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a refined algorithm of constructing a coreset for kernel k-means problem. They improves the time complexity from $\\tilde{O}(nk)$ [Jiang et al. ML' 24] to $\\tilde{O}(nd_{avg})$, where $d_{avg}$ is the average number of neighbors of a single vertex on the graph defined by the given similarity matrix. They also showed how to use their technique to improve spectral clustering and obtain a approximate solution for normalized cut problem. The experiments are designed to support their theoretical results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited contribution. The proposed method highly depends on the former work [Jiang et al. ML' 24]. And their claimed improvements seems trivial. Theorem 1 is also easy to obtain. \n2. This paper assumes that the similarity matrix is sparse, which means a vertex has only few neighbors. So when a vertex is sampled, only its neighbors ($d_{avg}$ neighbors on average) need to update the their distance to the sampled set. Therefore, the time complexity of $\\tilde{O}(nd_{avg})$ is straightforward.\n3. The experimental on the Appendix A seems not ideal. For example, in Figure 5,6,7, the proposed method does not obtain the best ARI; Figure 7 also shows that the green baseline is actually faster. And there is no explanation for that."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We combine the benefits of spectral clustering and coresets"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024coreset,\ntitle={Coreset Spectral Clustering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1qgZXeMTTU},\nnote={under review}\n}"
},
"abstract": {
"value": "Coresets have become an invaluable tool for solving $k$-means and kernel $k$-means clustering problems on large datasets with small numbers of clusters. On the other hand, spectral clustering works well on sparse graphs and has recently been extended to scale efficiently to large numbers of clusters. We exploit the connection between kernel $k$-means and the normalised cut problem to combine the benefits of both. Our main result is a coreset spectral clustering algorithm for graphs that clusters a coreset graph to infer a good labelling of the original graph. We prove that an $\\alpha$-approximation for the normalised cut problem on the coreset graph is an $O(\\alpha)$-approximation on the original. We also improve the running time of the state-of-the-art coreset algorithm for kernel $k$-means on sparse kernels, from $\\tilde{O}(nk)$ to $\\tilde{O}(n d_{avg})$, where $d_{avg}$ is the average number of non-zero entries in each row of the $n\\times n$ kernel matrix. Our experiments confirm our coreset algorithm is asymptotically faster on large real-world graphs with many clusters, and show that our clustering algorithm overcomes the main challenge faced by coreset kernel $k$-means on sparse kernels which is getting stuck in local optima."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"spectral clustering",
"kernel k-means",
"coresets"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c058d1397f0b75e495c63ae5d2e8bdac818ae939.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/dc2d4f38d95243a7906ce68e57daf6a7d7afa52c.zip"
},
"title": {
"value": "Coreset Spectral Clustering"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1qq1QJKM5q | More Experts Than Galaxies: Conditionally-Overlapping Experts with Biologically-Inspired Fixed Routing | main | Active | Deep learning;Mixture of Experts;Modularity;Sparsity;Conditional Computation | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;6;6 | 4;4;3 | 2;3;3 | 2;3;2 | 2;2;3 | 5 | 3.666667 | 2.666667 | 2.333333 | 2.333333 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How does this model perform on a continual learning benchmark, such as permuted MNIST or split-CIFAR-100? \n\nWhat are the additional costs to using COMET in terms of training time and memory usage?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well written and the core idea is explained clearly. \n\nThe authors demonstrate key properties of the COMET model, notably showing that similar inputs tend to activate overlapping experts, facilitated by the fixed gating mechanism.\n\nThe model is tested on a wide selection of benchmark tasks including computer vision, language modelling, and regression. \n\nThe authors demonstrate the benefit of using COMET, particularly at large model sizes.\n\nThe use of COMET requires no additional trainable parameters which is quite advantageous."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Conditionally Overlapping Mixture of ExperTs (COMET).\nCOMET uses biologically inspired, fixed random projections to generate binary masks that define subnetworks know as 'experts'.\nThe mask generation process is input-dependent, causing similar inputs to activate overlapping sets of experts.\nThe authors test the models on a range of benchmark tasks, finding that COMET performs well, particularly for large model sizes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In other works these gating functions can help alleviate catastrophic forgetting for tasks that are presented sequentially, but this is something that has not been tested in this paper. \n\nGiven that previous work has similarly employed networks to determine gating, I am not entirely convinced that the novelty here is sufficient. However, I acknowledge that unlike prior approaches, which relied on trainable gates, this method uses fixed random projections.\n\nThere will be additional computational costs to using COMET but there is not an in-depth analysis of these costs in the paper. An analysis of training/inference times and GPU memory usage between COMET and the standard models would strengthen the submission."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Is there any reason to believe this phenomenon does not already occur in large networks? \\cite{elhage2022toy} describe a situation where neural networks encode the phenomenon observed in \\cite{cheung2019superposition} during the course of training. Are there advantages of explicitly creating the superposition?\n\nFigure 4 shows a strange result where the smaller_model performs consistently as well as the standard_model even for fairly low p_k values. It's unclear to me why there would be a benefit for COMET if at the neurons=3000, the smaller_network at pk=.1 will perform as well as the standard_model. What is being gained here?\n\n@article{elhage2022toy,\n title={Toy models of superposition},\n author={Elhage, Nelson and Hume, Tristan and Olsson, Catherine and Schiefer, Nicholas and Henighan, Tom and Kravec, Shauna and Hatfield-Dodds, Zac and Lasenby, Robert and Drain, Dawn and Chen, Carol and others},\n journal={arXiv preprint arXiv:2209.10652},\n year={2022}\n}"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Unlike other methods, the proposed COMET method has no trainable gating functions (unlike standard Mixture of Experts) and avoids representation collapse.\n\nDoes not require explicit input/task IDs or pre-defined expert specialization.\n\nWorks across multiple architectures (MLPs, ViTs, GPT, MLP-Mixers).\n\nThe work is particularly similar to \\cite{cheung2019superposition}, especially with the use of a random projection matrix V to handle the decision to mask. The justifications for using random projections in \\cite{cheung2019superposition} seem to align well with the described capacity benefits of the COMET method in larger networks as compared to smaller networks. In particular, with a larger number of neurons, the probability of interference between masks rapidly decreases.\n\n@article{cheung2019superposition,\n title={Superposition of many models into one},\n author={Cheung, Brian and Terekhov, Alexander and Chen, Yubei and Agrawal, Pulkit and Olshausen, Bruno},\n journal={Advances in neural information processing systems},\n volume={32},\n year={2019}\n}"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces COMET (Conditionally Overlapping Mixture of ExperTs), a new method for creating sparse neural networks. The authors show using fixed, biologically-inspired routing can create more efficient and effective neural networks, particularly for larger models, while avoiding common problems in sparse architectures like representation collapse and poor knowledge transfer. The key insight is that COMET creates input-dependent sparsity without needing to learn the routing mechanism. COMET uses a fixed, biologically-inspired random projection combined with a k-winner-take-all operation to route inputs through the network, rather than using trainable gating functions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The arguments for the number of experts is based on the possible permutations of masks that can be created which gives an unrealistically large number of possible experts. But this does not account for interference issues and establishing a bound more grounded in reality would be very helpful. The theory work in \\cite{cheung2019superposition} should help better define these bounds.\n\nThere's a claim of \"improved generalization through enhanced forward transfer\", but it's unclear what experiments in this paper demonstrates better transfer learning."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Which components of COMET are inspired by the concept of biological random projection?\n\n2. How should the hyperparameter $k$ in Equation (3) be determined?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. COMET presents a novel routing method that replaces trainable gating functions with fixed, biologically inspired routing, which is rare in modular neural network approaches.\n\n2. The proposed method is tested across diverse architectures and tasks, such as image classification, language modeling, and regression, suggesting versatility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper, titled Conditionally Overlapping Mixture of Experts (COMET), proposes a method aimed at overcoming limitations in existing sparse neural network architectures. COMET introduces a modular, sparse structure with a biologically inspired fixed routing approach that eliminates the need for task IDs and trainable gating functions, commonly associated with representation collapse and redundancy. Instead, the authors implement a k-winner-take-all cap operation, enabling experts to overlap based on input similarity. This approach aims to improve generalization and facilitate faster learning, validated across various tasks and architectures, including MLPs, Vision Transformers, and GPT-based models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Section 3 lacks crucial methodological details necessary for a complete understanding of the proposed COMET approach. For instance, it is unclear which specific design elements in COMET were directly influenced by the concept of biological random projections. \n\n2. The paper’s writing lacks clarity, making it difficult to fully understand the design of COMET. I recommend including a preliminary section that outlines the foundational Mixture of Experts (MoE) framework, followed by a clear discussion on how COMET’s design diverges from and improves upon existing methods.\n\n3. While COMET is designed to improve modularity and interpretability, the authors do not demonstrate how the model’s interpretability has improved. More extensive interpretability metrics or qualitative evaluations would support the claimed benefits of COMET.\n\n4. The absence of code and essential implementation details significantly hampers reproducibility and raises concerns about the robustness of the results."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose Conditionally Overlapping Mixture of ExperTs (COMET), a general deep learning method that induces a modular, sparse architecture with an exponential number of overlapping experts"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024more,\ntitle={More Experts Than Galaxies: Conditionally-Overlapping Experts with Biologically-Inspired Fixed Routing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1qq1QJKM5q},\nnote={under review}\n}"
},
"abstract": {
"value": "The evolution of biological neural systems has led to both modularity and sparse coding, which enables efficiency in energy usage, and robustness across the diversity of tasks in the lifespan. In contrast, standard neural networks rely on dense, non-specialized architectures, where all model parameters are simultaneously updated to learn multiple tasks, leading to representation interference. Current sparse neural network approaches aim to alleviate this issue, but are often hindered by limitations such as 1) trainable gating functions that cause representation collapse; 2) non-overlapping experts that result in redundant computation and slow learning; and 3) reliance on explicit input or task IDs that impose significant constraints on flexibility and scalability. In this paper we propose Conditionally Overlapping Mixture of ExperTs (COMET), a general deep learning method that addresses these challenges by inducing a modular, sparse architecture with an exponential number of overlapping experts. COMET replaces the trainable gating function used in Sparse Mixture of Experts with a fixed, biologically inspired random projection applied to individual input representations. This design causes the degree of expert overlap to depend on input similarity, so that similar inputs tend to share more parameters. This facilitates positive knowledge transfer, resulting in faster learning and improved generalization. We demonstrate the effectiveness of COMET on a range of tasks, including image classification, language modeling, and regression, using several popular deep learning architectures"
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Deep learning",
"Mixture of Experts",
"Modularity",
"Sparsity",
"Conditional Computation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/214bf5d8f5df3f9b040fac442bfade9355e9748b.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "More Experts Than Galaxies: Conditionally-Overlapping Experts with Biologically-Inspired Fixed Routing"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1rg56KzwsS | FullDiffusion: Diffusion Models Without Time Truncation | main | Active | diffusion models;time truncation | generative models | 3;5;5;6 | 4;4;3;4 | 2;2;3;2 | 1;2;3;2 | 2;1;2;2 | 4.75 | 3.75 | 2.25 | 2 | 1.75 | -0.132453 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Some typos. For example, though at the beginning of line 193, and better at line 454.\n2. I do not understand how the x-prediction and v-prediction suffer from numerical instability whereas the parametrization introduced here does not. Are you referring to the division by $\\alpha_t$? If so, if you write out your parametrization in terms of $\\epsilon$ and $x_t$, you will also encounter the division by 0. The parametrization does not provide any training signal to the network when $t=1$ right? If this is the reason why the proposed method does not suffer, then the same could be done with x and v-prediction as well. I also suggest the authors look into EDM and flow matching's parametrization."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The overall story flow of the paper feels natural. The motivation for no time truncation, at both side of the boundaries.\n2. The paper has written a good background on diffusion models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Diffusion models are widely used for high-quality image generation by reversing a process that gradually adds noise to data. However, these models face numerical instability near the end of the time continuum, which often requires heuristic truncation—terminating the process early—to maintain stability during training and sampling. This time truncation disrupts the model's rigor and demands extra tuning. To address this, the proposed FullDiffusion framework introduces a modified noise predictor and a novel SDE solver, removing the need for truncation by ensuring stability in training with maximum likelihood and enabling full-time simulation. Experiments on CIFAR-10 and ImageNet-32 demonstrate improved performance in likelihood and FID, establishing FullDiffusion's effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The biggest problem of the paper is that both the theoretical and empirical settings, under which the paper is investigated, are out of date. I will detail my arguments below.\n2. Essentially, the paper proposes to fix two things about diffusion models: 1. the singularity of score function at $t=0$. 2. $\\alpha\\neq0$ at time 1. Both problems have been addressed in the field. First, we never want to evaluate the model at $t=0$ anyway, since both ODE and SDE will not modify $x$ if simulated at time $t=0$. The sampling is always done at time where the model can be properly trained. Second, $\\alpha\\sim0$ is often good enough in practice (the SOTA model EDM [1, 2] uses a rather low terminal noise level). Even if one really wants to have a zero SNR, there are countless works that have already proposed so: [3,4,5,6, ...]. In fact, the proposed formulation is a special case of flow matching, just differing in terms of the interpolation equation.\n3. Given the previous point. In order to demonstrate the effectiveness of this particular formulation, and the sampling technique, a more careful and thorough empirical comparison is needed. Currently, the mentioned, closely related baselines are not included in the paper. For example, is the interpolation $\\sqrt{1-t^2}$ and $t$ better than $1-t$ and $t$ in [3,4]? Even with the weak baseline, the improvements seem to be marginal, and the results are behind SOTA by quite a bit. I am not asking the authors to beat SOTA, but the pool of baselines needs to be expanded, especially in this case, where the theoretical difference is small.\n4. The writing of this paper can be improved. I suggest the author include a short description or intuitive understanding of the equations after derivating them. For example, what modification exactly is added to Equation 18 compared to Equation 16? 
I assume it is more than just a bigger batch size right?...\n\nIn all, I feel the paper lacks in terms of proper comparison with the prior works, both in theoretical analysis and empirical signals, and thus I cannot recommend acceptance at this point.\n\n\n[1] Karras et al. Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022.\n\n[2] Karras et al. Analyzing and Improving the Training Dynamics of Diffusion Models. CVPR 2024.\n\n[3] Lipman et al. Flow Matching for Generative Modeling. ICLR 2023.\n\n[4] Liu et al. Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. ICLR 2023.\n\n[5] Albergo et al. Building Normalizing Flows with Stochastic Interpolants. ICLR 2023.\n\n[6] Girdhar et al. Emu video: Factorizing text-to-video generation by explicit image conditioning. ECCV 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Line 149, I'm confused about the claim \"these coefficients diverge\". $f_t$ and $g_t$ are linear function of $t$ in the VP-SDE, why would they diverge? I think the only coefficient that blows up at $0$ is $g_t^2/\\sigma_t$. \n2. How does Eq. (18) reduce the variance exactly? Can you provide a formal analysis of the variance reduction properties?\n3. How does the design of strata affect the variance reduction?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The method is clear and accessible. \n2. The proposed method improves FID and NLL at the same time. This is interesting because previous works suggest that improving likelihood often leads to worse FID."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new approach to address the numerical stability issue in diffusion models. The authors propose a new noise schedule and parameterization of preconditioning to eliminate the need for time truncation when dealing with the numerical stability of training and inference. The authors demonstrate that their method eliminates the need for time truncation while maintaining performance on CIFAR10 and ImageNet32x32 datasets. The approach achieves comparable or better results than standard diffusion models without requiring time truncation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The contribution needs further clarification: \n\t1. As shown in B.3 in Karras et al. [1] and A.2 in Zhang et al. [2], the singularity issue at $t=0$ is fundamentally tied to the use of finite training samples. The target data distribution is a mixture of Dirac measures and its score blows up at training samples. So $\\mathcal{J}_{SM}$ is unbounded mathematically. It's inherent and cannot be solved by any parameterization alone. \n\t2. This paper primarily addresses the singularity issue arising from the parameterization of the neural network. There is a class of parameterization to achieve this and I think the authors should discuss that instead of only focusing on one specific case unless there is a strong reason to do so. \n\t4. The parameterization proposed in this paper essentially delegates the singularity to the neural network, which eventually leverages the regularization posed by the neural network design. As reported in Table 1, I believe this is also a valid approach but the benefits over the time truncation approach are not convincingly demonstrated both theoretically and empirically. \n2. The main experiments in Table 1 omit some relevant baselines such as i-DODE ([3]), soft-truncation ([4]). \n3. The manuscript requires several technical clarifications:\n\t1. Equations (10), (13), (16), and (18) should be explicit about which distribution you are taking expectation over. The current notation uses a single $\\mathbb{E}$ for three different expectations. \n\t2. The definition of D is missing, which first appears in Eq. (10).\n\t3. $\\mathcal{J}_{DSM}$ in Eq. (14) needs an expectation. \n\t4. Inconsistent notation: the integral over $t$ is written as $\\mathbb{E}_t$ in Eq. (12) and integral in Eq. (14). \n\t5. In Eq. (18), the second expectation should be removed and the equality should be an approximation. \n\n[1] : Karras, Tero, et al. 
\"Elucidating the design space of diffusion-based generative models.\" _Advances in neural information processing systems_ 35 (2022): 26565-26577.\n\n[2] : Zhang, Pengze, et al. \"Tackling the Singularities at the Endpoints of Time Intervals in Diffusion Models.\" _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2024.\n\n[3] : Zheng, Kaiwen, et al. \"Improved techniques for maximum likelihood estimation for diffusion odes.\" _International Conference on Machine Learning_. PMLR, 2023.\n\n[4] : Kim, Dongjun, et al. \"Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation.\" _International Conference on Machine Learning_. PMLR, 2022."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "n. a."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(1) The authors provided a link for their source code in the appendix. But it is empty when I try to study and re-run their code."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(1) A new form of the noise predictor in diffusion models is proposed in order for the LELB bound to be well defined at the bound points (i.e., t=0 and t=1). By doing so, time truncation can be avoided, which I think is nice. \n\n(2) One interesting result is that the FID scores of the ODE solver and SDE solvers are very close in the paper. This suggests that it might be because of the time truncation in the literature, that leads to the poor performance of ODE solver in comparison to SDE solver."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper mainly considers to remove time-truncation when performing training and sampling of diffusion models. The main contribution is to propose a new form of the estimated Gaussian noise. As a result, the corresponding LELB bound is nicely defined at the boundary point. A new semi-linear SDE solver is proposed accordingly."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) It is not clear to me how stratified sampling is implemented by reading Section 3.2. The authors only state that \"we propose to use stratified sampling for the time variable t for variance reduction.\" without providing implementation details. \n\n(2) Is Equation (18) the objective function to be minimized? If so, the authors should explicitly say it. The authors should also elaborate the training time and the GPU they used in their experiments. The link to the source code is empty. \n\n(3) It is not clear how many timesteps are used in Table 1. \n\n(4) The English language needs to be improved. There are quite a few typos in the paper, such as \"priliminary\", \"Althoguh\", \"after introduced by the original paper by X\", \"difinition\", and \"eliminate time truncation time during sampling\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "I am surprised to see that DDPM++ requires 1000 NFE with Euler solver to reach near optimal FID (figure 1b). iDDPM [5] shows that 100-300 NFE can achieves sub-optimal FID by changing the noise schedule. Since FullDiffusion uses a different noise schedule from DDPM++, I am curious about how much the sampling efficiency of FullDiffusion is contributed by the noise schedule.\n\n\n[5] Nichol, Alexander Quinn, and Prafulla Dhariwal. \"Improved denoising diffusion probabilistic models.\" ICML, 2021."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The reparameterization of network prediction and noise schedule is novel, which eliminates the singularity issue of time truncation\n2. The corresponding solvers are derived along with the new parameterization.\n3. FullDiffusion achieves improvements on both FID and NLL"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the time truncation parameter that causes the divergent score function in diffusion models. To remove the time truncation, the authors propose FullDiffusion by reparametrizing the network prediction and the noise schedule. Under this new parameterization, the authors accordingly propose a first-order solver and a second-order solver inspired by the semi-linear structure of the reverse SDE (DPM-solver). Results on Cifar10 and ImageNet32 show that FullDiffusion outperforms DDPM++ in terms of FID and likelihood."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My first major concern is that, the time truncation might be not a problem according to the good FID and NLL achieved by VDM [2] and SoftTruncation [3] (even better than FullDiffusion), these two models maintain the time truncation. Although people know that time truncation causes numerical instability, [1] and [2] proposed different time sampling methods to stabilize the training. Also, I do not think researchers tune the truncation parameter anymore since it is already found and often used as a fixed parameter.\n\n2. The key section 3.1 is ambiguous, e.g. why directly set $\\sigma_t=t$, $f_t=-t/(1-t^2)$ and what gives eq 15? I guess the authors want to eliminate the divergent coefficients and these parameterizations are derived from this goal? However, the reasoning, motivations, and derivation are missing in this section.\n\n3. In the abstract, the authors say 'our method eliminates numerical instability during training', but why is there still a big ELBO variance during training (see Figure 2a)? What is the motivation for using stratified sampling to reduce the training variance?\n\n4. This paper excludes the VE-SDE which is also widely used in the community.\n\n5. Benchmarking on only cifar10 and ImageNet32 is insufficient, I suggest the authors test the method on celeba64 and ImageNet64. Also, the authors should compare FullDiffusion with other diffusion models focusing on likelihood, e.g. VDM [2] and SoftTruncation [3]\n\n6. The FID improvement of FullDiffusion is limited, e.g. 5.42-->5.00, 2.55-->2.53. These improvements can even be derived by using different batches of generated samples.\n\n7. The literature review (section 4.2) is insufficient, lacks reviews of major papers, like [2] and [3]\n\nOthers:\n1) in eq 11, the notations D and H are used without definition.\n2) line 402, the velocity predictor looks wrong, according to [4]\n\n\n[1] Song, Yang, et al. 
\"Maximum likelihood training of score-based diffusion models.\" NIPS, 2021.\n\n[2] Kingma, Diederik, et al. \"Variational diffusion models.\" NIPS, 2021\n\n[3] Kim, Dongjun, et al. \"Soft truncation: A universal training technique of score-based diffusion model for high precision score estimation.\" ICML, 2022.\n\n[4] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. ICLR, 2022"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024fulldiffusion,\ntitle={FullDiffusion: Diffusion Models Without Time Truncation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1rg56KzwsS},\nnote={under review}\n}"
},
"abstract": {
"value": "Diffusion models are predominantly used for generative modeling, which synthesize samples by simulating the reverse process of a stochastic differential equation (SDE) that diffuses data into Gaussian noise.\nHowever, when simulating the reverse SDE, the SDE solver suffers from numerical instability near the time boundary; hence, in practice, the simulation is terminated before reaching the boundary point.\nThis heuristic time truncation hinders the rigorous formulation of diffusion models, and requires additional costs of hyperparameter tuning.\nMoreover, such numerical instability often occurs even in training, especially when using a maximum likelihood loss.\nTherefore, the current diffusion model heavily relies on the time truncation technique in both training and inference.\nIn this paper, we propose a method that completely eliminates the heuristic of time truncation.\nOur method eliminates numerical instability during maximum likelihood training by modifying the parameterization of the noise predictor and the noise schedule. We also propose a novel SDE solver that can simulate without time truncation by taking advantage of the semi-linear structure of the reverse SDE.\nThese improvements enable stable training and sampling of diffusion models without relying on time truncation.\nIn our experiments, we tested the effectiveness of our method on the CIFAR-10 and ImageNet-32 datasets by evaluating the test likelihood and the sample quality measured by the Fréchet inception distance (FID). \nWe observe that our method consistently improve performance in both test likelihood and the FID compared to the baseline model of DDPM++."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"diffusion models",
"time truncation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/570fd807a52df46669671d1d9844ee264f4d6ddd.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "FullDiffusion: Diffusion Models Without Time Truncation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1t1YSuBv3T | Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering | main | Active | Evidence-Enhanced;Hallucination Alleviation;Generative Question Answering | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;5;6 | 4;3;3 | 2;2;3 | 2;3;2 | 2;2;3 | 4.666667 | 3.333333 | 2.333333 | 2.333333 | 2.333333 | -0.944911 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the weaknesses section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper presents a comprehensive methodology and demonstrates a strong experimental setup. EATQA's effectiveness is validated across two benchmarks, MultiRC and QASPER, where it outperforms prior state-of-the-art models. The paper provides detailed comparisons with competitive LLMs, proving the reliability and effectiveness of the proposed method. Ablation studies further establish the significance of each component in the framework, such as the impact of removing evidence generation or query restoration on performance.\n2. The authors provide a clear exposition of EATQA’s architecture and its underlying principles. The paper is well-organized, with clear definitions of the three primary tasks (evidence generation, question answering, and question restoration). Figures, such as the model overview and template instructions, aid in visualizing the complex relationships within the triplet generation framework. Additionally, the equations and methodological breakdown make it accessible to readers familiar with GQA and hallucination mitigation research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes EATQA (Evidence-Enhanced Triplet Generation Framework), designed to reduce hallucinations in Generative Question Answering (GQA). EATQA leverages a structured approach by generating triplets of Question, Evidence, and Answer (QEA) and using these to reinforce logical consistency. The model is trained on three main tasks: evidence generation, question answering, and query restoration, which improve the alignment between evidence and answers. Tested on MultiRC and QASPER datasets, EATQA achieves state-of-the-art results, effectively reducing hallucination and enhancing answer fidelity by distilling knowledge directly from evidence during inference."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited innovation: The paper's proposed three training losses lack technical depth, and this multi-task approach has already been proposed and used in many scenarios. Although there are improvements on two benchmarks, the method does not provide new insights or thoughts for the readers.\n2. Insufficient baseline models: The discussion of baseline models for retrieval-enhanced methods in the paper is not comprehensive enough.\n3. Limited generalizability: The paper does not conduct experiments on a broader range of datasets, making it difficult to demonstrate the method's generalizability, especially in scenarios where large models are fine-tuned, such as in different types of multi-hop QA scenarios like NQ, TQ, StrategyQA, and MusiQA.\n4. Non-standard writing format: There are many citation format errors, images are not in vector format, and there are issues with the image formatting."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How does the method perform on datasets without gold evidence annotations?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The method is well-motivated and the paper is easy to follow. The experiments show the proposed method has great improvements."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes EATQA to address hallucination issues in GQA. It is an unified triplet generation approach that can capture logical relationships between question, evidence, and answer."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method is based on gold evidence annotations when training. It may limit its applicability to datasets without such annotations.\n\n2. The improvement margins on some baselines, e.g., CAD and RHO, are relatively modest.\n\n3. Is the computational costs and inference time comparison to baselines missing?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. What was the value for a hyperparameter \\alpha_{kl} and how did the authors fix it?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed triplet generation framework showed significant improvement on two widespread document-based GQA datasets, MultiRC and QASPER, yielding state-of-the-art performance on the datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposed an evidence-enhanced triplet generation framework, EATQA, to address a hallucination issue in generative question answering (GQA). The EATQA encourages the model to predict Answer (A), Question (Q), and Evidence (E), given QE, EA, and QA pairs, respectively. that is, all the combinations of ⟨Question, Evidence, Answer⟩. to understand their relationships. The paper applied it to LLama, that outperformed other LLM-based methods and hallucination mitigation approaches on two GQA benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. First, the paper is not written in good English and should receive a native-speaker check; parts of it are difficult to understand. The authors also used LaTeX cite commands incorrectly, which makes the draft harder to read. It would be better to check the whole draft carefully once more.\n\n2. While the proposed framework yields better performance on GQA tasks, the evaluation of hallucination alleviation is not thorough enough, which makes it difficult to judge whether the framework is actually effective at alleviating hallucination. The analysis in Sec. 5.4 does not directly evaluate the degree of hallucination alleviation, and no comparisons with previous related work were shown. It would be better to show directly and clearly how well the proposed framework alleviates hallucination, in comparison with related work.\n\n3. In the analysis in Sec. 5.3, no explanation is provided for the performance in Table 6. If this is an evaluation of generated evidence, how can reference evidence be obtained, given that evidence annotation was said to be unavailable in the datasets? It is also not described how the scores were calculated.\n\n4. The analysis in Sec. 5.2 seems to yield few useful findings. In my understanding, since the document length is proportional to the number of sentences, a single table might suffice in place of Tables 4 and 5.\n\n5. It would be better to clearly describe how the hyperparameters were fixed in the experiments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024evidenceenhanced,\ntitle={Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1t1YSuBv3T},\nnote={under review}\n}"
},
"abstract": {
"value": "To address hallucination in generative question answering (GQA), where the answer cannot be derived from the document, we propose a novel evidence-enhanced triplet generation framework, EATQA, which encourages the model to predict all combinations of the ⟨Question, Evidence, Answer⟩ triplet by flipping the source pair and the target label to understand their logical relationships, i.e., to predict the Answer (A), Question (Q), and Evidence (E) given the QE, EA, and QA pairs, respectively. Furthermore, we bridge the distribution gap to distill the knowledge from evidence at the inference stage. Our framework ensures that the model learns the logical relations between query, evidence, and answer, which simultaneously improves evidence generation and query answering. In this paper, we apply EATQA to LLama, and it outperforms other LLM-based methods and hallucination mitigation approaches on two challenging GQA benchmarks. Further analysis shows that our method not only retains prior knowledge within the LLM, but also mitigates hallucination and generates faithful answers."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Evidence-Enhanced",
"Hallucination Alleviation",
"Generative Question Answering"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/be88353adebeee784c7c6d80430806b7a22cc874.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Evidence-Enhanced Triplet Generation Framework for Hallucination Alleviation in Generative Question Answering"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1tBvzOYTLF | RevisEval: Improving LLM-as-a-Judge via Response-Adapted References | main | Active | large language models;evaluation;revision | generative models | 3;5;5;8 | 4;4;5;4 | 2;4;1;4 | 2;2;3;3 | 2;3;4;4 | 5.25 | 4.25 | 2.75 | 2.5 | 3.25 | -0.080845 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. While the overall paper is well written, mentioning what the numbers in each table mean and how they were calculated, either in the captions or in the text, would improve readability for a general reader. For example, mention that the values in Table 2 are the accuracy against human preferences.\n2. As mentioned in the weaknesses, please provide details of any experiments that were conducted to study the soundness of this approach for factual responses (where the generated response contains errors which get normalized in the adapted reference)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The paper motivates the problem very well by identifying the issues with current reference-based evaluation paradigms. The idea of dynamically generating contextually relevant references is creative and interesting. It aims to address very important and quite relevant aspects of using LLM as evaluators. \n2. Extensive experiments have been conducted across different tasks as well as various metrics have been evaluated. The authors also show the generalizability of their approach to different metrics.\n3. The paper also considers and accounts for the various biases present in LLM Evaluators and also considers the cost of conducting evaluations (which is often ignored in a lot of works)\n4. Many interesting insights have been reported by the authors, including using these contextually relevant references to improve the standard n-gram and model-based metrics."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an interesting method, “RevisEval”, which explores a new approach to reference-based evaluation by modifying references based on the responses to be evaluated. The authors show that this improves the reliability of LLM-based evaluators compared to using static references, hypothesizing that an effective reference must be closely relevant to the response being evaluated. The authors present many interesting observations and analyses across various standard NLG tasks as well as open-ended generative tasks, and evaluate various metrics (both standard and LLM-based). They also show that these adapted references can even boost the efficacy of standard metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While I agree with the motivation behind the paper, I am not sure about the soundness of the methodology used to generate the reference answers:\n1. By using the response itself to generate an \"adapted reference\", the evaluation might indirectly validate the response’s content and structure. This may lead to artificially inflated evaluations, as the evaluator is essentially comparing the response against a modified version of itself, which serves as the reference.\n2. If the response contains subtle errors, the adapted reference might effectively validate or normalize these errors. There is no study of whether the reviser indeed accounts for or corrects these errors.\n3. While this approach may work well for evaluations of standard NLG tasks as well as some open-ended tasks that focus on language generation capabilities, for evaluations that care about the factual accuracy of the responses (an area where LLMs are known to hallucinate), this simple revision may not be robust."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "* How would the proposed method work with multiple references? For open-ended text generation tasks, including MT, multiple references are often used.\n* What's the metric used in Table 2? Accuracy?\n* Figure 3 — it looks like which metrics are most effective (and closest to GPT-4 performance) varies based on the specific metrics used. Would you provide some general guidelines on which metric(s) are most effective in general when combined with RevisEval? Or is simply doing majority voting a good strategy?\n* In the last paragraph of Section 4.2 — \"significantly\" is used twice. Is it used in a statistical sense? If not, it is simply a very subjective adverb, and I would advise against using it in this context.\n* Same for the section title of 4.3 — what exactly do you mean by \"activating\"? I would rephrase it with something simpler, e.g., \"Improving\".\n* What are some examples of future work for RevisEval? The conclusion section only provides a summary of the findings."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "* Simple yet effective method — the core idea of the proposed method, RevisEval, is very simple—simply \"rewrite\" the response based on the human-written reference (and the rubric) and use it as a new reference. The method is also effective for many settings including LLM-as-a-Judge and traditional reference-based metrics. It is easy to imagine that the proposed method is used in evaluation of many NLG tasks going forward.\n* Good ablation studies — the paper provides a wide set of ablation studies to show the proposed method's effectiveness. It shows evaluation results on scoring tasks as well as pairwise preference benchmarks. I also liked the bias analysis (Section 4.5) as well as the detailed analysis of concrete examples (Section 5).\n\nOverall, the paper is overall well written and provides enough evidence that the proposed method is simple, effective, and widely applicable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Recently, LLM-as-a-Judge has been gaining popularity for evaluating language generation tasks, but still has a few reliability challenges compared to human evaluation. This paper proposes RevisEval, a text evaluation method that can be used for LLM-as-a-Judge methods as well as more traditional reference-based evaluation metrics, such as BLEU and BERTScore. The core of the method is to use LLMs to revise the response (system output) based on the human reference, called reponse-adapted references, which is then used as a new reference in the downstream evaluation, be it LLM-as-a-Judge or traditional evaluation metrics. Through experiments, the authors showed that 1) the proposed method RevisEval showed improved correlation with gold standard compared to reference-free and baseline reference-based evaluation methods, and 2) on preference tasks, RevisEval outperforms baselines including fine-tuned LLM-as-a-Judge models, 3) the proposed method reduces the positional bias compared to reference-free methods as well as conventional reference-based method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "No major weakness as far as I see. Here are some minor points:\n\n* Unclear naming—personally I find \"response-adapted references\" very confusing. It sounds like the method adapts references based on the response, but actually it's the other way around: they are really reference-adapted responses. I'm not sure there is a better way of describing it, though (I don't have any better ideas).\n\n* Unclear description of the experiment settings—the main body of the paper would benefit from more description of the benchmarks. The setup is based on TigerScore, but the paper provides very little information regarding the specific datasets used and their sizes. Importantly, I think the variety and the quality distribution of responses matter a lot for evaluating evaluation methods, and a few sentences about the quantity and quality of the benchmark datasets would be very helpful.\n\n* Future prospects—this is a bit hypothetical, but the very reason RevisEval works at all is that current LLMs are in general better at generation than discrimination, as the authors state in Section 4.4. Does this mean that in the future, if we have LLMs that are more powerful at discrimination, the proposed method would still be useful, since such future LLMs could simply \"guess better\" using the reference and the response?\n\n* Applicability—the paper already shows experimental results on a wide range of tasks and benchmarks, but I suspect they are all English tasks (the only exception is the source sentences of MT, which are in Chinese). It doesn't have to be done in this paper, but it would be valuable to test the effectiveness of RevisEval on a wider range of tasks (e.g., image captioning) and languages."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The proposed method is intuitive and reasonable, with a straightforward implementation that advances previous work using LLMs to generate references for evaluation. They also consider a comprehensive range of experimental setups, baseline methods, and evaluation benchmarks to verify the effectiveness of their method, resulting in solid experimental analyses."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The work proposes a simple and straightforward evaluation method that involves modifying and enhancing the output text to be evaluated, using it as the reference for further evaluation, motivated by the potentially unsatisfactory quality of traditional references. They experiment with various setups, including using strong and weak LLMs as revisors and employing both traditional evaluation metrics and LLM-based evaluators."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Given that previous studies have already utilized LLMs to generate higher-quality references as replacements for traditional references (Tang et al., 2024), the novelty and contribution of this method are somewhat diminished. I believe the authors could further strengthen the analysis by more comprehensively comparing the two approaches for generating references (generation as reference vs. revision as reference). Additionally, I suggest exploring the use of more refined response-adapted references, such as having the revisor focus on specific dimensions during evaluation, to allow for a richer and more diverse discussion.\n\nThe experiments in this work are thorough, but they may be somewhat distracting. First, the main experimental results in Tables 1 and 2 are presented somewhat inconsistently; for example, both tables include an \"Open-Source LLM-as-a-Judge\" part, but the types of methods involved seem different. In Table 1, it’s unclear whether \"Ref-Based\" refers to references generated by the corresponding LLMs or to the original references, which is important. Sections 4.3 and 4.4 may not be as critical and could be moved to the appendix, given the availability of stronger evaluation methods; this would allow space for more in-depth experiments and analysis.\n\n**Reference**\n\nNot All Metrics Are Guilty: Improving NLG Evaluation by Diversifying References (Tang et al., NAACL 2024)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. While the improvements look promising, there are some questions about the effectiveness of the proposed solution. Recent meta-evaluation works like RewardBench [1] show that reward models are much more powerful than LLM-as-a-Judges at proxying human responses. Does the proposed methodology offer a benefit over RMs?\n\n2. Automated evaluators are also widely used as a proxy for human preference in RLHF. The additional step of generating revisions makes the whole process slower and more expensive. Hence, while the performance may be promising, this seems to limit the usage of automated evaluators. Where do you expect this methodology to be used?\n\n3. Mandating a revision step before evaluation assumes that the revising model can refine the answer well. What if the question-response pair is too difficult for the model to revise? Will the methodology still be effective?\n\n\n[1] https://arxiv.org/abs/2403.13787"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper proposes a simple and straightforward solution to improve reference-based evaluation. The methodology is easy to implement and shows promising improvement in quality on different benchmarks. \n\n2. The methodology shows strong robustness, naturally controlling for style, and shows nice performance on adversarial benchmarks like LLM Bar, despite relatively small training."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a simple solution to enhance reference-based evaluation with LLM-as-a-Judges. Instead of using pre-made references, the work introduces a novel evaluation paradigm, \"Revise-and-Evaluate,\" in which an LLM revises the provided input to generate a reference answer. The authors note that this method is effective in creating a reference similar to the response in terms of style and other surface artifacts, effectively accounting for the quality of the answer only. The methodology can also be extended to classical evaluation metrics like BLEU, ROUGE, and more. The methodology is tested on diverse benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please see the questions section."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "we propose a novel evaluation paradigm, RevisEval, to generate response-adaptive references for evaluation by revising responses to be evaluated."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024reviseval,\ntitle={RevisEval: Improving {LLM}-as-a-Judge via Response-Adapted References},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1tBvzOYTLF},\nnote={under review}\n}"
},
"abstract": {
"value": "With significant efforts in recent studies, LLM-as-a-Judge has become a cost-effective alternative to human evaluation for assessing the text generation quality in a wide range of tasks. However, there still remains a reliability gap between LLM-as-a-Judge and human evaluation. One important reason is the lack of guided oracles in the evaluation process. Motivated by the role of reference pervasively used in classic text evaluation, we introduce RevisEval, a novel text generation evaluation paradigm via the response-adapted references. RevisEval is driven by the key observation that an ideal reference should maintain the necessary relevance to the response to be evaluated. Specifically, RevisEval leverages the text revision capabilities of large language models (LLMs) to adaptively revise the response, then treat the revised text as the reference (response-adapted reference) for the subsequent evaluation. Extensive experiments demonstrate that RevisEval outperforms traditional reference-free and reference-based evaluation paradigms that use LLM-as-a-Judge across NLG tasks and open-ended instruction-following tasks. More importantly, our response-adapted references can further boost the classical text metrics, e.g., BLEU and BERTScore, compared to traditional references and even rival the LLM-as-a-Judge. A detailed analysis is also conducted to confirm RevisEval's effectiveness in bias reduction, the impact of inference cost, and reference relevance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language models",
"evaluation",
"revision"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/91ef60b9c6681d0fa613e40647ed45548a55d988.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "RevisEval: Improving LLM-as-a-Judge via Response-Adapted References"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1tZLONFMjm | GAOKAO-Eval: Does High Scores Truly Reflect Strong Capabilities in LLMs? | main | Active | Large Language Model;Benchmark | datasets and benchmarks | 3;3;5;5 | 3;4;3;3 | 2;2;3;2 | 1;2;2;3 | 1;2;2;3 | 4 | 3.25 | 2.25 | 2 | 2 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Would you please address the concerns in the Weaknesses section?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "•\tThe proposed benchmark highlights the data-leakage issues of previous benchmarks. The annual update of the GAOKAO makes it possible to evaluate LLM performance without tedious manual data collection.\n•\tThe paper evaluates a few popular LLMs on the proposed benchmark.\n•\tThe paper finds that there is a performance mismatch between humans and LLMs on GAOKAO tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In order to reveal the limitations of current benchmarks in evaluating human-aligned capabilities, this paper proposes a benchmark based on China’s college entrance exam and evaluates LLMs released before the benchmark data. The paper finds that LLMs show high variability on questions of similar difficulty and that there is a performance mismatch between LLMs and human annotators."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "•\tThe paper lacks clarity:\no\tHow were the human results obtained? What were the grading guidelines? How were the tasks distributed? How was the human evaluation process validated?\no\tThe paper uses the Rasch model to simulate human performance. However, it lacks clarification of why GAOKAO performance can be simulated by the Rasch model. The actual human performance distribution might be similar to the LLMs’.\no\tLine 274 mentions the difficulty of questions. How exactly does the hybrid approach combine human annotations and LLM scores?\n•\tThe paper claims that o1’s reasoning-as-difficulty tokens can mitigate the mismatch between human and LLM performance on the benchmark. However, the paper lacks experiments on the performance distribution of o1 on the benchmark, and it remains unknown how this distribution aligns with the actual human performance distribution, which is also missing from the paper.\n•\tThe paper contains a few grammar errors and typos: Line 23: ‘consistant’, ‘difficultiess’, Line 26: ‘we’, ‘phenomenon’ should be plural, Line 459: ‘cabilities’, Line 527: ‘discoverys’, and more.\n•\tThe motivation of this paper is questionable. Previous benchmarks such as GAOKAO-MM and GAOKAO-Bench were proposed to evaluate the comprehensive capabilities of LLMs. However, this paper takes a different stance: that human-aligned LLMs should have a similar performance distribution to humans. Wouldn’t LLM research aim to build better LLMs that score higher on tasks where humans perform poorly? Unlike improving safety and reducing toxicity through human alignment, aligning to human capability in reasoning tasks might not be a good idea."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "Details on grader recruitment, data privacy, grader anonymity, workload, and compensation etc. are absent."
},
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why was the Rasch model chosen over other psychometric models, and how does it specifically suit LLM evaluation?\n2. Can the observed phenomena in GAOKAO-Eval (e.g., high variance in similar-difficulty questions) be verified with non-Gaokao-based tests?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "• Introduces a comprehensive evaluation benchmark using Gaokao exams that updates every year with minimal/no data leakage.\n\n• Explores scoring consistency and variance with respect to question difficulty.\n\n• Attempts to model scoring behavior using cognitive psychology (Rasch model)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces GAOKAO-Eval, a new benchmark based on China’s 2024 Gaokao exams to assess large language models (LLMs) in a “closed-book” manner, mitigating issues like data leakage. It claims that high scores in existing benchmarks do not necessarily reflect human-aligned capabilities, presenting two main phenomena: “semi difficulty-invariant scoring” and “high variance on similarly difficult questions.” The authors use the Rasch model to analyze scoring patterns and propose “reasoning-as-difficulty” tokens as a potential alignment method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "• The Rasch model is commonly used in human testing. But it is unclear if the Rasch model is the best fit for modeling LLM behavior, especially without fully exploring/discussing alternative psychometric models.\n\n• Some descriptions seem exaggerated. GAOKAO-Eval primarily assesses knowledge-based aspects of LLM performance, focusing on subject knowledge and question-answering within a constrained exam format. This scope limits its comprehensiveness as a benchmark for LLM capabilities, which is inconsistent with what is described in Section 2 Paragraph 1.\n\n• The process of human involvement is not clear. The study involves 54 human graders without disclosing ethical considerations, which raises potential concerns."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "see above"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Understanding the capabilities of the LLMs is a very relevant and timely topic. I appreciate the author’s effort to curate such a valuable dataset that aims to test various abilities of the models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
      "value": "This paper aims to study if high scores truly reflect human-aligned capabilities in LLMs. To this end, the authors propose a new eval dataset called GAOKAO-Eval, comprising different question types, subjects, difficulty levels, etc. Evaluation on this dataset shows that the trained model WQX and GPT-4o have much better performance than other models like Qwen, Mixtral, etc. The authors conduct different experiments to show the mismatch between LLM capabilities and the expected human-aligned abilities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
      "value": "I think the paper can be significantly improved and revised to clearly articulate the experiments, results, and insights. \n\n1.\tThe paper’s general message that LLMs’ performance varies across similar question types and that there is anomalous consistency across difficulty levels is well-studied in the literature. It would be beneficial if the authors focus on their dataset to showcase how models perform across different subjects and difficulty levels, highlighting what types of problems they perform well on versus those where they fail, and providing potential reasons why. Currently, results are aggregated to show performance variations across models on different difficulty levels.\n\n2.\tI found it very difficult to interpret the results, as none of the figures provide a clear explanation of the experiment, insight, or key takeaway. For example, in Fig. 4, you show overall performance across models by subject, but do not clarify what the takeaway is from this figure. Does it imply that WQX and GPT-4o perform the best on this dataset? What is the overall accuracy on this dataset? It’s unclear what the models' performance is on the entire dataset.\n\n3.\tSimilarly, Fig. 5 lacks an explanation of how human ratings and LLM-based judgments were incorporated into ELO. The graph only shows the difficulty level for 11 questions. What does aligning difficulty level with expert judgments mean? Why are only GPT-4o results shown? What does the difficulty of extracted questions signify?\n\n4.\tIn Fig. 6, why is the IRT fit across all model results instead of fitting it at each model level to show, for example, whether GPT-4o outputs across difficulty levels align with human abilities? This result is unclear. \n\n5.\tFig. 7a has a grey area—what does this represent? How is difficulty determined by humans or models? The phrase “across models” is also unclear regarding what this graph is meant to demonstrate.\n\n6.\tIn line 357, you mention, “our difficulty ratings are well-aligned with human perception and accurately reflect the human-aligned capabilities of LLMs.” How did you arrive at this conclusion?\n\n7.\tWhat is the takeaway or insight from Fig. 8? \n\n8.\tWhere is Eq. 3 applied?\n\n9.\tFigure 11 requires more detail. What does incorporating o1 tokens mean? o1 provides the steps and final answer but not the backend exploration or raw tokens, so what is meant by this?\n\n10.\tThe explanations and insights for Figures 11a, b, c, and d are poorly articulated. \n\n11.\tWhy not compare with other open-source multimodal models like LLaVA-OneVision and LLaVA-NeXT, which have been shown to be more powerful on multimodal data?\n\n12.\tHow were the human raters selected? Details of inter-rater agreement, etc., should be provided.\n\nApart from the above, the key questions for me are:\n\n1.\tGiven that variants of GAOKAO-Bench and GAOKAO-MM already exist, what is the true novelty of this dataset? While the authors mention it is secure and non-leaky, the other two datasets are as well. What differentiates the construction of this dataset compared to the other two, establishing it as a key contribution?\n\n2.\tIf the novelty does not lie in the dataset itself, then the key contributions should focus on the insights derived from the data to deepen our understanding of LLM capabilities. Unfortunately, the paper does not fully address this aspect, as the authors primarily report aggregate numbers without clearly presenting key takeaways beyond the general message that LLM performance does not align with human abilities. I would like to see some key insights or takeaways derived from the experiments that are generalizable and hold broader significance for the community."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "54 high school teachers were involved in grading subjective questions. It is unclear whether the study received IRB approval."
},
"flag_for_ethics_review": {
"value": [
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Would “human-like reasoning” still be important if the LLM already achieves “human-like performance”? \n2. What if 4o were fine-tuned on the same (or a subset of the) corpus? \n3. What is the key message or finding conveyed by including the WQX model in the results? \n4. Would CoT or other reasoning techniques help reduce the inconsistency?\n5. After reading through the paper, I still feel unclear about the title: why can’t a high score truly reflect LLM capabilities? If high scores aren’t reliable indicators, how can you conclude that WQX improves over InternLM in the paper based on an increase in accuracy?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
      "value": "1. A new LLM benchmark with no data leakage is always in demand in the community to provide an objective reflection of LLM performance; however, GAOKAO-Eval itself seems to be only a temporary workaround, as it is likely to be included in the corpus of more recent LLMs.\n2. The effort in the evaluation is non-trivial, including a thorough comparison of multiple LLMs, a new WQX model specialized for the GAOKAO task, human grading, etc. \n3. The authors report several interesting findings, including the inconsistency of LLMs w.r.t. question difficulty, grading, etc. They also examined the relationship between o1 reasoning tokens and performance consistency. These findings could guide the development of more aligned LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
      "value": "The authors introduce GAOKAO-Eval, a comprehensive benchmark based on China’s National College Entrance Examination (Gaokao), and conduct closed-book evaluations on LLMs released before Gaokao. This could (partially) address the data leakage issues (only for the models that are released before GAOKAO).\n\nThe main contributions of the paper lie in the findings and insights after applying the benchmark to different LLMs. Their findings reveal that even after controlling for data leakage, high scores still fail to truly reflect human-aligned capabilities. The authors introduce the Rasch model from cognitive psychology, and identify two key issues: 1) anomalously consistent performance across various question difficulties, and 2) high variance in performance on questions of similar difficulty. \n\nFinally, the authors recruit human teachers to grade the LLM responses. The grading is inconsistent, and the models show recurring mistake patterns. \n\nThe study suggests that reasoning-based approaches like o1 can help mitigate these discrepancies and highlights the need for more LLM-aligned difficulty assessments in future benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. What does “human-like reasoning” mean? The term is used in several places but lacks a clear definition. More importantly, would “human-like reasoning” still be important if the LLM already achieves “human-like performance”? Addressing these questions could better motivate the research.\n2. The performance of the new model is only marginally better than 4o (in “Science Total” and “Art Total”), even after being trained with an extensive GAOKAO-related corpus. What if 4o were fine-tuned on the same (or a subset of the) corpus? Additionally, what is the key message or finding conveyed by including the WQX model in the results? The necessity is unclear.\n3. o1’s reasoning ability is mentioned and the finding looks promising; however, the internal reasoning process of o1 is opaque to users and the impact of CoT or other reasoning techniques on white-box models is not explored. Would CoT help reduce the inconsistency?\n\nMinor:\n1. line 23: \"anomalous consistant performance across various question difficultiess\" should be \"consistent\" and \"difficulties\".\n2. line 25: \"we find\": \"w\" should be capitalized."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
      "value": "Introduces a comprehensive, annually updated benchmark and reveals the finding that high scores on GAOKAO do not reflect human-aligned capabilities in LLMs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024gaokaoeval,\ntitle={{GAOKAO}-Eval: Does High Scores Truly Reflect Strong Capabilities in {LLM}s?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1tZLONFMjm},\nnote={under review}\n}"
},
"abstract": {
      "value": "Large Language Models (LLMs) are commonly evaluated using human-crafted benchmarks, under the premise that higher scores implicitly reflect stronger human-like performance. However, there is growing concern that LLMs may “game” these benchmarks due to data leakage, achieving high scores while struggling with tasks straightforward for humans. \nTo substantively address the problem, we create GAOKAO-Eval, a comprehensive benchmark based on China's National College Entrance Examination (Gaokao), and conduct closed-book evaluations for representative models released prior to Gaokao.\nContrary to prevailing consensus, even when addressing data leakage and comprehensiveness, GAOKAO-Eval reveals that high scores still fail to truly reflect human-aligned capabilities. To better understand this mismatch, we introduce the Rasch model from cognitive psychology to analyze LLM scoring patterns and identify two key discrepancies: 1) anomalously consistent performance across various question difficulties, and 2) high variance in performance on questions of similar difficulty. In addition, we identify inconsistent grading of LLM-generated answers among teachers and recurring mistake patterns. We find that these phenomena are well-grounded in the motivations behind OpenAI o1, and that o1's reasoning-as-difficulty tokens can mitigate the mismatch. These results show that GAOKAO-Eval can reveal limitations in LLM capabilities not captured by current benchmarks and highlight the need for more LLM-aligned difficulty analysis."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Model",
"Benchmark"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d83b31250eeb9271bf8a57f386961473582699b7.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/47d71d5a4efd3fb99ef802cabf5bd04ab07d1d43.zip"
},
"title": {
"value": "GAOKAO-Eval: Does High Scores Truly Reflect Strong Capabilities in LLMs?"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1uLW9eYNJB | MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards | main | Active | LoRA;parameter efficiency;parameter sharing;instruction tuning;NLP | alignment, fairness, safety, privacy, and societal considerations | 5;6;6;8 | 3;4;5;3 | 2;3;4;4 | 2;2;3;3 | 3;3;3;3 | 6.25 | 3.75 | 3.25 | 2.5 | 3 | -0.207514 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In Section 3.3, is $I_a^k \\in \\mathbb{R}^r$ or $\\mathbb{N}^r$\n- What do the 4/8, 16/32 (or \"increasing the rank to 4 or 8\") in Table 2 mean?\n- Many implementation details are missing - what's the pool size, how many shards, breakdown of private & public segments, etc."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The method is more general than LoRA, making LoRA a special case when there is no global pool.\n- The authors provide ablation study for each of the differentiation strategies (except subset selection), showing the efficacy of each strategy.\n- Overall, I find the finding about sharing & differentiation makes sense and the motivation is clear. Each differentiation strategy is proposed to keep the number of parameters unchanged but increase level of differentiation between each layer."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates a more lightweight solution than LoRA in order to serve a large number of finetuned models at the same time. Based on a finding that excessive sharing may hinder model performance, the authors believe that differentiation is necessary to reverse the detrimental effects of pure sharing. The paper proposes Mixture of Shards (MoS) that incorporates both inter-layer and intra-layer sharing with four differentiation strategies.\n\n- Subset selection: randomly choose r pairs of vectors at the beginning for each layer\n- Pair dissociation: separate each pair into two pools, where each vector in a pair will be sampled independently\n- Vector sharding: shard the global pool into n parts and concatenate sampled vectors from each shard\n- Shard privatization: reserve a private segment for exclusive use for each matrix"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
      "value": "Overall, the paper is well written. There are some minor details that can be improved.\n- Figure 2 could be more accurate and follow the main text more closely. There is no mention of the router \"R\" in the main text. Notations like $A^{pub}$, $A^{pri}$, $B$, $I$, $m_{ij}$ can be used to make it clearer.\n- Index(.) could be replaced by a better notation since .index(.) can be understood as the index of some element in an array."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Have you considered how MoS might interact with techniques like quantization, pruning, or dropout? Many practical deployments of large models use these methods in conjunction to manage resource constraints, and understanding how MoS might complement them would add value. If feasible, a brief experimental analysis or discussion on this integration would enhance the paper’s relevance for real-world applications.\n\nCould you discuss potential limitations of MoS, such as scenarios where the method may underperform or require additional tuning? For example, does MoS have any specific limitations when applied to domains with high variability in representation needs across layers? A discussion on this would offer a more balanced perspective, helping readers assess the suitability of MoS in various contexts.\n\nHas MoS been evaluated for its impact on inference latency, especially in multi-model serving scenarios? \n\nAre there limitations to the size or complexity of models that MoS can handle effectively? For example, do the benefits of MoS start to diminish for models larger than LLaMA2-13B, or do you anticipate any challenges in scaling it to models with trillions of parameters?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
      "value": "The paper provides solid technical grounding for the Mixture of Shards (MoS) method, with each component—inter-layer and intra-layer sharing, as well as the differentiation strategies—clearly motivated. The Mixture of Shards (MoS) approach is a novel, well-motivated response to the growing need for efficient fine-tuning techniques for large models. By blending inter-layer and intra-layer sharing with lightweight differentiation strategies, the paper introduces a resource-efficient method that extends beyond existing parameter-sharing methods like LoRA, VeRA, and PRoLoRA. This innovative combination of techniques makes MoS a practically valuable approach. \n\nThe experimental design is comprehensive and addresses key aspects of parameter efficiency, memory usage, and model performance across a range of NLP benchmarks (e.g., MMLU, GSM8K, TyDi QA). The thorough ablation study underscores the necessity of each differentiation strategy (subset selection, pair dissociation, vector sharding, and shard privatization) and supports the paper’s claims about MoS’s efficiency. The paper also includes scalability tests, demonstrating MoS’s robustness on larger models, such as LLaMA2-13B, reinforcing its applicability to current large model architectures.\n\nMoS integrates four nearly cost-free differentiation strategies—subset selection, pair dissociation, vector sharding, and shard privatization—to counteract the performance limitations of pure parameter sharing. These strategies are carefully designed to enhance the diversity and exclusivity of shared parameters, which contributes to the robustness and performance of the method. \n\nThe paper includes rigorous experimentation across diverse NLP benchmarks, such as MMLU (Massive Multitask Language Understanding, for factual knowledge), TyDi QA (multilingual question-answering), GSM8K (for mathematical reasoning), BBH (Big-Bench-Hard, for multi-step reasoning), and HumanEval. These benchmarks test the model on factual knowledge, multilingual capabilities, mathematical reasoning, general reasoning, and coding. The results demonstrate MoS’s parameter efficiency and effectiveness compared to baseline methods, making a strong case for its practical utility. The parameter savings—approximately eightfold compared to standard LoRA—are significant, supporting the method’s scalability. This reduction substantially alleviates the memory burden, enabling more efficient model customization and serving without sacrificing performance, which is particularly valuable in settings requiring multiple concurrent models.\n\nThe paper is well-structured, with a logical flow that introduces the problem, presents the MoS solution, and discusses experimental results comprehensively. The clarity of the writing is generally good, though the differentiation strategies could benefit from additional diagrams or illustrations to aid in understanding for a wider audience."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
      "value": "The paper introduces a novel fine-tuning method called **Mixture of Shards (MoS)**, which aims to significantly improve parameter efficiency in adapting large language models for customized applications. As large language models (LLMs) continue to scale, there is a growing need for parameter-efficient fine-tuning techniques to manage the high GPU memory overhead associated with serving multiple customized models simultaneously. Traditional approaches, such as Low-Rank Adaptation (LoRA), reduce resource consumption by updating pretrained weights with trainable low-rank matrices, but they still encounter scalability and memory limitations when applied to large models and extensive user customization. MoS offers a solution that retains the advantages of LoRA while achieving greater parameter efficiency through innovative parameter sharing and differentiation mechanisms.\n\nThe central concept behind MoS is to combine **inter-layer and intra-layer parameter sharing** in a single framework. This sharing is further enhanced by four lightweight differentiation strategies designed to counteract potential performance degradation from pure parameter sharing. These strategies include **subset selection**, **pair dissociation**, **vector sharding**, and **shard privatization**, each providing unique ways to increase the diversity and exclusivity of shared parameters across layers. By using a **Mixture-of-Experts (MoE)-like routing mechanism**, MoS selects and concatenates specific shards from a global parameter pool, thereby achieving efficient memory usage while maintaining high model performance.\n\nIn terms of experimental validation, the paper presents extensive evaluations on various NLP tasks, including factual knowledge (MMLU), multilingual question-answering (TyDi QA), mathematical reasoning (GSM8K), multi-step reasoning (BBH), and coding (HumanEval). The experiments demonstrate that MoS outperforms LoRA and other baseline methods in parameter efficiency, particularly under limited parameter budgets. MoS achieves approximately eightfold parameter savings compared to standard LoRA configurations, making it a promising approach for scenarios requiring numerous custom models.\n\nAn ablation study further examines the importance of each differentiation strategy, showing that components like pair dissociation and shard privatization provide substantial gains in efficiency, while vector sharding offers incremental improvements. The study reinforces the necessity of each differentiation strategy in achieving the performance and efficiency benefits observed with MoS. Additionally, a scalability analysis using the larger LLaMA2-13B model demonstrates that MoS maintains its advantages on a larger scale, further underscoring its robustness and suitability for high-capacity models.\n\nThe paper positions MoS as an important step forward in parameter-efficient fine-tuning. MoS’s compatibility with LoRA-based infrastructure and its ability to serve multiple customized models simultaneously without substantial memory overhead make it practical for real-world deployment. The findings provide insights into the trade-offs and design considerations of parameter sharing, offering a valuable resource for researchers and practitioners working on efficient model adaptation techniques. The paper’s detailed methodology, comprehensive experimentation, and focus on parameter efficiency contribute meaningfully to the broader research area of resource-efficient machine learning, addressing critical scalability issues as the field advances."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While MoS is evaluated on a range of NLP tasks, the paper does not sufficiently analyze the method’s performance across various model architectures or specific task categories (e.g., multilingual tasks, code generation) where parameter efficiency and differentiation strategies could have different impacts. A breakdown showing how MoS performs on individual tasks, especially ones that are highly memory-intensive, would offer a clearer picture of its advantages and limitations across diverse NLP applications. \n\nThe ablation study is a strong point but could be enhanced by further exploring each differentiation strategy’s scaling potential. For instance, while the study confirms the individual benefits of subset selection, pair dissociation, vector sharding, and shard privatization, it doesn’t analyze the interactions or scalability of these strategies as model or task complexity increases. Additional experiments showing the performance impact of these strategies in larger configurations or different combinations would make the study more informative for readers looking to fine-tune MoS to specific needs.\n\nThe paper would benefit from a section discussing the potential limitations of MoS in specific scenarios. For instance, the effectiveness of MoS might be reduced when applied to tasks with low data diversity or high variance in representational needs across layers. Discussing scenarios where MoS might underperform or require adaptation would provide a more balanced view and help users assess when MoS is a suitable choice."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. For the experiments conducted with two runs using seeds 0 and 1, could you provide the individual performance results for each run? Additionally, were any further experiments conducted with different seeds to assess the robustness of the results?\n2. How does the random initialization impact the performance of MoS? Given the reliance on randomness, are there specific initialization settings or hyperparameters that consistently yield better results?\n3. What criteria, if any, were used to decide the number of shards in the global pool, and how sensitive is the model’s performance to this choice?\n4. Were there any specific cases where certain differentiation strategies (e.g., subset selection, pair dissociation) proved more beneficial than others?\n5. How does the computational overhead of MoS compare to traditional LoRA during training and inference, especially with regard to memory usage and GPU hours?\n6. Since MoS integrates multiple strategies, are there any known trade-offs between parameter savings and performance across tasks?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. MoS combines subset selection, pair dissociation, vector sharding, and shard privatization to reduce parameters while maintaining performance.\n2. Demonstrates an eightfold parameter reduction compared to traditional LoRA with empirical support.\n3. Provides insights into the contributions of each differentiation strategy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
      "value": "This paper introduces Mixture of Shards (MoS), a sharded adaptation of LoRA designed to achieve greater parameter efficiency by leveraging parameter sharing across layers and within layers. MoS not only reduces the number of parameters required compared to traditional LoRA but also mitigates the potential performance degradation associated with excessive sharing. This is achieved through four strategies: subset selection, pair dissociation, vector sharding, and shard privatization, which ensure that each shared parameter can adapt to specific model requirements. MoS demonstrates a further reduction in trainable parameter usage, allowing more scalable deployment of LoRA-based models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the paper introduces subset selection, it lacks criteria for choosing subsets; the selection is randomly initialized and fixed throughout training.\n2. The MoS approach is primarily a combination of various techniques rather than a cohesive, unified method.\n3. The MoS approach introduces significant randomness, making it challenging to determine if the improvements result from the design or from random variations. A test of significance could strengthen these claims.\n4. The paper includes limited ablation studies for MoS, making it difficult to isolate and understand the contributions of each individual strategy in the overall design."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Questions:\n\n-What’s the reason behind choosing the specific differentiation strategies? How was each expected to impact performance, and was the decision based on empirical results?\n\n-How exactly are vectors selected from the global sharing pool?\n\n-Does the MOS method affect finetuning time? (given the added complexity of MOS)\n\n-Do you plan to release the code? This is a complex framework, and code would be very useful\n\n-Can you elaborate on the statement that differentiation “reverses the detrimental effects of sharing”? Is there any theoretical support for MOS’s design?\n\n-Why was standard deviation not provided for the averages?\n\n-“Differentiation” is typically associated with gradient computation in deep learning, which might cause confusion. I’d consider a different name."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "-The paper has solid motivation.\n\n-Authors propose an intuitively sound idea of using a combination of inter layer and intra layer sharing with MoE-like routing.\n\n-The authors claim that MOS is the first method to use an MoE-like mechanism for parameter-efficient fine-tuning in a single-task LoRA.\n\n-The comparisons with LoRA, VeRA, and PRoLoRA are relevant baselines for MOS performance.\n\n-I like the design of the initial experiment (Table 1) - it's good to back up and motivate the method (though I have some comments mentioned in the weaknesses)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Mixture of Shards (MOS), a LoRA-based method designed to reduce trainable parameters while maintaining performance for LLMs. MOS combines inter and intra layer sharing mechanisms with MoE-like routing system for parameter selection. It also introduces four “differentiation strategies”: subset selection, pair dissociation, vector sharding, and shard privatization (to add diversity and prevent performance degradation from parameter sharing). The authors claim that MOS achieves about an 8x reduction in parameters compared to LoRA while retaining competitive performance.\n\nMOS method proposes leveraging both VeRA-like inter-layer parameter sharing and PRoLoRA-like intra-layer parameter sharing. It is proposing “Global Sharing Scheme” where each adapted layer across the Transformer creates its low-rank matrices (A and B) using shards from a globally shared pool selected by MoE-like routing.\n\n“Differentiation Strategies” used in MOS:\n\n-Subset selection - selects a subset of vector pairs per transformer block\n\n-Pair dissociation - separates vector pairs into two different pools to create unique combinations for each block.\n\n-Vector sharding - breaks down vectors into smaller shards, which are then concatenated.\n\n-Shard privatization - divides the global pool into public and private sections."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I think that the idea behind MOS is interesting and worth exploring, but it needs more rigorous experiments and analysis, along with a clearer explanation of the method and design choices. Without the code, reproducibility is challenging. \n\nI would like to understand why we would select this method over simpler methods like VeRA that provide good results. Does this method offer enough benefits to justify its complexity? The complexity should be justified, and any impact on finetuning overhead should be mentioned.\n\nWeaknesses:\n\n-The motivation for the specific “differentiation” strategies could be clearer. The authors mention that these strategies help maintain representational power, but this is very high-level and lacks theoretical support.\n\n-The MoE-like routing mechanism for parameter selection isn’t clearly explained, making it hard to reproduce. What exactly is the routing algorithm? Were other approaches tested?\n\n-The paper only evaluates MOS on instruction-following tasks.\n\n-The comparison with VeRA isn’t entirely fair, as MOS uses more parameters than VeRA. I understand that VeRA can have practical limitations in increasing parameters, but could we reduce the MOS parameter count to match VeRA?\n\n-Standard deviations are not provided.\n\n-The initial experiment (Table 1) is interesting, but the conclusion about random scaling doesn’t seem fully justified - this strategy shows very minimal improvement and might not be statistically significant. For subset selection, could more models and seeds be tested to confirm the results?\n\n-Could we add some additional models? Even smaller ones could help validate MOS’s performance. The current results are limited to LLaMA2-7B and LLaMA2-13B, with minimal gains for the latter, which may not justify MOS complexity."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce a more parameter-efficient finetuning method named MoS, and demonstrate its remarkably higher parameter efficiency and other advantages over peer methods, with the hope of establishing it as a resource-friendly alternative to LoRA."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mos,\ntitle={MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1uLW9eYNJB},\nnote={under review}\n}"
},
"abstract": {
"value": "The rapid scaling of large language models necessitates more lightweight finetuning methods to reduce the explosive GPU memory overhead when numerous customized models are served simultaneously.\nTargeting more parameter-efficient low-rank adaptation (LoRA), parameter sharing presents a promising solution. Empirically, our research into high-level sharing principles highlights the indispensable role of differentiation in reversing the detrimental effects of pure sharing.\nGuided by this finding, we propose Mixture of Shards (MoS), incorporating both inter-layer and intra-layer sharing schemes, and integrating four nearly cost-free differentiation strategies, namely subset selection, pair dissociation, vector sharding, and shard privatization. Briefly, it selects a designated number of shards from global pools with a Mixture-of-Experts (MoE)-like routing mechanism before sequentially concatenating them to low-rank matrices.\nHence, it retains all the advantages of LoRA while offering enhanced parameter efficiency, and effectively circumvents the drawbacks of peer parameter-sharing methods.\nOur empirical experiments demonstrate approximately $8\\times$ parameter savings in a standard LoRA setting. The ablation study confirms the significance of each component.\nOur insights into parameter sharing and MoS method may illuminate future developments of more parameter-efficient finetuning methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LoRA",
"parameter efficiency",
"parameter sharing",
"instruction tuning",
"NLP"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2b3210cd854ecf6fad4f4f76ab41c24d00094e77.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9971c9d5319d71046c926a6be8e4f65f6623d1ba.zip"
},
"title": {
"value": "MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1upXwlEW8y | Prompt Optimization with Logged Bandit Data | main | Active | off-policy evaluation;prompt tuning;large language models;contextual bandits | causal reasoning | 3;5;8;8 | 3;4;2;4 | 2;2;3;4 | 2;2;3;4 | 2;1;3;3 | 6 | 3.25 | 2.75 | 2.75 | 2.25 | -0.142134 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "no"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Can the author describe what's the main insight for the thoerms in this paper? and how they are reflected in the performance of the new approach? There seems to have some disconnection between the theoretical section and empirical verification.\n\n2. How does your method performs in normal prompt optimization setting? like [1]?\n\n\n[1] https://arxiv.org/abs/2306.03082"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The idea of learning a policy to generate good prompts is new to me.\n2. The proposed method clearly addressed the weakness of IS."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a new policy gradient-based prompt optimization. The goal is to learn a policy that is able to generate prompt with good response (as in good reward). This paper proposed a new DSO that is better than traditional policy gradient and IS based method. Some experimental results provided by this paper show that the new method is able to outperform others."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experimental session is the major weakness of this paper. This paper only contain a synthetic experiment and a single model experiment on a single dataset with simulated reward function. Experimental results on more datasets and models will make the paper more convincing.\n\n2. The following work should be discussed in the related work since they study prompt optimization with human feedback by learning a reward function and hence related.\n\nhttps://arxiv.org/abs/2402.00396\nhttps://arxiv.org/abs/2405.17346\n\nAn similar line of work on prompt optimization should also be discussed:\n\nhttps://arxiv.org/abs/2306.03082\nhttps://arxiv.org/abs/2310.02905\nhttps://arxiv.org/pdf/2402.09723"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "**Figure 6 Interpretation**:\nIt seems that each bar in Figure 6 represents the results across 5 random seeds. Given the variation across seeds, can we still conclude that the proposed method (DSO) consistently outperforms the regression baseline? The performance between DSO and regression appears similar when accounting for this variability.\n\n**Minor comments**\n* Line 391: $\\sigma_o$ should be $\\sigma_s$?\n* Line 989: MSE loss should be $\\sum_{i=1}^{n} (r_i - \\hat{q}(x_i, a_i))^2$ instead of $\\sum_{i=1}^{n} (r_i - \\hat{q}(x_i, a_i))$.\n* Line 1075: $\\nabla_{\\theta} \\pi_{\\theta}$ should be $\\nabla_{\\theta} \\log \\pi_{\\theta}$?\n* In Section 3.1, the classification of \"conventional approaches\" into \"regression-based methods\" and \"importance sampling (IS)\" feels somewhat unclear. It may be more intuitive to categorize these as \"reward predictor-based approaches\" and \"reward predictor-free approaches.\" This distinction clarifies that IS methods directly use observed rewards, whereas regression-based methods estimate rewards across all actions."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Using similarity in the generated sentence space to control the bias-variance tradeoff through importance weights is an interesting approach.\n* The paper evaluates the proposed method on two types of tasks, synthetic and LLM-based tasks, demonstrating applicability in varied settings.\n* Theoretical analysis provides insights into the characteristics of DSO, although some detailed proofs could not be fully verified by the reviewer."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses prompting policy optimization for large language model (LLM) pipelines by leveraging logged user feedback, such as clicks, to generate personalized sentences. The authors propose a novel method called Direct Sentence Off-policy gradient (DSO), which uses similarity between generated sentences to estimate the policy gradient. While this approach relies on importance sampling, it can reduce the variance of importance weights by treating them in a sentence space rather than the prompt space. Experiments on a synthetic task and an LLM-based task for personalized movie descriptions are shown to claim the effectiveness of the proposed DSO method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Lack of clarity in algorithmic steps**:\nThe specific steps for implementing the algorithm are unclear. It seems that gradient estimation would require sampling from both the prompt policy and the LLM. If this understanding is correct, how many samples would need to be generated per data point? Should this match the $m$ samples used to estimate $\\pi_0(\\phi(s_i)|x_i)$?\n\n**Notation abuse and lack of clarity in definitions**:\nThis paper has some notation abuse, which leads to ambiguity. For example, the authors introduce $\\pi_\\theta(a|x, s)$ or $\\pi_\\theta(a|x, \\phi(s))$ as a conditional distribution over prompts given the generated sentence $s$ in Section 4 and Appendix D.1. However, this is problematic because $\\pi_\\theta$ is originally defined as a prompt selection policy and should not depend on $s$, which the LLM generates after selecting $a$. Additionally, while the expressions are somewhat interpretable, there is a lack of consistency in function arguments throughout the paper. For instance, $\\pi_\\theta(s|x)$ is used without explanation as $\\sum_a \\pi_\\theta(a|x) p_{LLM}(s|x, a)$. To improve clarity, the authors should avoid redefining $\\pi_\\theta$ with different inputs and instead provide explicit auxiliary definitions where needed, along with a rationale for introducing these conditional probabilities.\n\n**Unpractical setting in Full-LLM Experiment with MovieLens**:\nThe LLM-based experiment in Section 7 lacks realistic user personalization. As shown in Figures 10 and 12, the prompt policy reduces user information to a single word (from a set of only 1000 words) before feeding it to the LLM. This simplistic representation raises concerns about whether the Full-LLM experiment setup can effectively capture real-world personalization. Without a richer prompt (e.g., short sentences) to convey nuanced user information, it is unclear if this approach offers any advantage over simply passing user attributes directly to the LLM. 
Consequently, this setup might be better categorized as a toy task rather than a realistic evaluation of the proposed method's applicability in real-world tasks.\n\n**Concerns regarding the formulation of baseline approaches**:\nThe problem formulation in this work is novel; however, applying existing methods, particularly the regression approach, seems overly naive for this setup. Since the LLM that generates $s$ is available in this setup, it would be more appropriate for the reward predictor to take $(x, s)$ as input instead of $(x, a)$. Otherwise, the reward predictor would have to learn the LLM's inherent randomness (noise), which seems inefficient. Using $(x, s)$ would allow the reward predictor to avoid this redundancy and better capture the generated sentence features. A Nadaraya-Watson kernel regression (using the same kernel as in DSO) or a neural model like DistilBERT could be employed as the reward predictor to improve adaptability. In connection with the above, in the numerical experiments, using $(x, a)$ as the reward predictor's input in the regression approach may be unfair as a baseline comparison against DSO. DSO leverages (multiple) generated sentence(s) $s’$ for each context $x$ sampled from $\\pi_\\theta$ and the LLM. Thus, any observed performance gap between DSO and the regression approach may simply be due to this difference in formulation rather than any inherent advantage of DSO.\n\n**Organization of the paper**:\nThe structure of the paper could be improved. For instance, details of the synthetic experiment setting and Section 4.2 (not cited in the main text) could be moved to the appendix, as these sections may be of lower priority for understanding the main contributions. Shifting these sections would allow more space for core elements like detailed algorithmic steps, problem setup, and full LLM experiment details in the main text."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How well the sythetic environments represent the real case? I note that there are some gaps between the sythetic environments and the target task. For example, reward is real-valued in synthetic case but it is binary in the real case (click or not); the policy is parameterized by an estimated reward function in the sythetic case."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The algorithm DSO motivated by utilizing the information behind the sentence embedding is generally sound.\n- The theoretically anslysis highlights the benefit of such algorithimic designs by indicating the source of bias and variance of such algorithms.\n- The introduction of the OfflinePrompts benchmark suite is a valuable resource for the research community, facilitating further development and testing of off-policy learning methods for language generation"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents Direct Sentence Off-policy gradient (DSO) for optimizing large language model (LLM) pipelines using logged user feedback such as clicks. DSO addresses the challenges of high variance and bias in policy gradient estimation by leveraging the similarity among generated sentences. The paper provides theoretical analysis on the source of bias and variance reduction of DSO. Experiments on both synthetic environment and a proposed benchmark (OfflinePrompts based on MovieLens-10M) demonstrate the effectiveness of this method. OfflinePrompts is a new benchmark suite,to demonstrate DSO's effectiveness in generating personalized movie descriptions. This is an additional contribution of the paper by providing a practical solution for leveraging naturally logged feedback for prompt policy optimization in language generation tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The experiments for real-world validation is insufficient. (Indeed, we lack good benchmarks for this task.) How well does the real-world performance align with the score/reward in the simulated environment (OfflinePrompts)? I found Figure 11 in the appendix indicates the positive correlation between the simulated rewards and the click feedback from users. Is there other statistics (such as the accuracy)? I am curious on the click rate improvement using the policy trained by DSO in real-world settings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I am not an expert on this, but I suspect that increasing personalization can also have potentially harmful social consequences (e.g. by reinforcing bubbles). On the other hand, I don't see an immediately greater risk than for other personalization methods that already exist and are widely accepted. So, I guess, it's fine."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The method is well-motivated and the theoretical analysis supports the desired variance reduction. Intuition for the analysis is provided. \n* Ablations w.r.t to differences in the setting (dataset size, number of actions, reward noise) and w.r.t to the hyperparameters (kernel type, kernel bandwidth) of the method are carried out.\n* Plan to open-source a benchmark for offline prompt policy learning"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new method for offline prompt policy learning for LLMs. The main challenge in this setting is the distribution shift between the logged data and the target data. Importance sampling can correct the distribution shift but only at the cost of potentially very high variance. The key idea behind the new method is to exploit similarity relations between sentences to reduce the variance. The bias-variance trade-off of the new method is analyzed theoretically and the method is tested on synthetic data and a LLM movie description task."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Figure 6: there are 5 bars for each method. I was/am a bit confused about what the difference between these bars is. For now, I assume these are the results from the 5 random seeds, ordered by performance. But I think it would be good to have a label for this or mention it in the Figure caption. \n* Literature on contextual bandits/kernelized bandits is left out.\n* The performance gain (in particular compared to regression) seems much stronger in the synthetic setting than in the full-LLM experiment."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper proposes a new OPL method for prompt-guided language generation, which leverages the similarity among sentences."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024prompt,\ntitle={Prompt Optimization with Logged Bandit Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1upXwlEW8y},\nnote={under review}\n}"
},
"abstract": {
"value": "We study how to use naturally available user feedback, such as clicks, to optimize large language model (LLM) pipelines for generating personalized sentences using prompts. Naive approaches, which estimate the policy gradient in the prompt space, suffer either from variance caused by the large action space of prompts or bias caused by inaccurate reward predictions. To circumvent these challenges, we propose *Direct Sentence Off-policy gradient* (DSO), which estimates the policy gradient by leveraging similarity among generated sentences, substantially reducing variance while suppressing the bias. Empirical results on our newly established suite of benchmarks, called *OfflinePrompts*, demonstrate the effectiveness of the proposed approach in generating personalized descriptions for movie recommendations, particularly when the number of candidate prompts is large."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"off-policy evaluation",
"prompt tuning",
"large language models",
"contextual bandits"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f680aeec09de2430569c5123e98181627a65bddb.pdf"
},
"presentation": null,
"primary_area": {
"value": "causal reasoning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Prompt Optimization with Logged Bandit Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1v7SRWsYve | MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation | main | Active | model merging;transfer learning;multitask learning;task arithmetic;multi-objective optimization | other topics in machine learning (i.e., none of the above) | 5;5;8 | 4;4;3 | 3;3;3 | 2;3;3 | 3;2;2 | 6 | 3.666667 | 3 | 2.666667 | 2.333333 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper does a good job explaining the motivating the idea of using a Pareto frontier when evaluating model merging and a good job explaining their win-rate metric.\n\nThe paper gives a good overview of the quadratic approximation of the Pareto frontier."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "When merging models (generally finetuned on different tasks), many techniques boil down to a weighted sum (generally of \"task vectors\", the difference between the finetuned model and the pre-trained model) include _per-model_ scaling parameters. This creates an exponential number of settings and makes it intractable to try all the different possible merges.\n\nNormally, merging methods are evaluated based on their average performance across many tasks, but they point out that this setting ignores the idea that a user may care more about performance on some subset of tasks than others. To capture this, they introduce the metric of the \"win rate\" how often a model on one method's Pareto frontier outperforms the models on another methods frontier.\n\nThey find that by sampling several _per-model_ scaling hyperparameters, they can use a quadratic approximation to create a better Pareto frontier with less computational resources."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The Pareto frontier based metric they use (win rate) is explained well, but during the comparisons to other common merging methods, it would have been nice to see another experiment that used their approach to set merging hyperparameters for those methods to see if greater average performance could be achieved. For example, comparing the avg performance of TIES with the hyperparameters from the original paper vs parameters found by their method.\n\nOften merging methods are also evaluated on if they retain the ability to generalize to new tasks, it would be nice to see some experiments to test the generalization abilities of models merged with hyperparameters found using their method.\n\nThey include some talk of using Bayesian optimization for the sampling of hyperparameters and of using nested model merging, but their discussion (intro, methods, results, etc.) for these are so sparse they should probably be cut.\n\nMAP is already a very common acronym for Maximum a Posteriori estimation. This collision will hurt adoption of their approach and is distracting as you need to keep reminding yourself is something else when you see MAP in their paper.\n\nFigure 5 is designed to demonstrate the exponential growth of having _per-model_ hyperparameters. This growth is explained well enough in the paper that such a large figure is not the most effective use of space.\n\nNit: Lots of places where references appear to be part of the text, where they shouldn't be, i.e., it is Author (year) instead of (Author, year)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In Fig 2 it is not immediately clear to me why the brute force approach of finding the best multitask scaling factor performs worst, also since you call it gold standard. Could you please explain this a bit further? What does the direct search look for exactly? Is it just over Task Arithmetic scaling factors, and if so, what grid is used?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The idea of focusing on trade-offs between tasks for which models are merged and Pareto fronts instead of only a single model merging solution is interesting and a useful reframing of model merging.\n\n- The method is derived from sound theory and can reduce the cost of model merging.\n\n- The method can be used as a plug-in addition to Task-Arithmetic-based merging schemes."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new model merging method that aims to approximate the pareto front of the performance of various model merging scaling factors by a quadratic approximation of the metric that is used for performance evaluation.\nBy not requiring a full search over possible scaling factors, the amount of computation that is needed is drastically reduced.\nThe authors show that this approach is favorable especially for a larger number of tasks, where the number of possible scaling factor combinations increases exponentially."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper overall is not easy to follow. Many details are left to the reader and there is not always a clear flow in writing, requiring the reader to jump back-and-forth. In particular, the following points could be improved:\n\n- Section 2.3: it is not immediately clear why these norms are calculated, because the fact that the method uses taylor approximations is only introduced at the end of it but even then it is unclear how it ties in with the bigger picture, esp. how closeness of parameters may be related to a taylor approximation of the evaluation metric. This could be clarified. In particular, it is directly showing empirical evidence that Assumption 1 may be valid but this only comes in the section afterwards.\n\n- Section 3.1 (the main description of the method) is not written very well. For example, case 2 and 3: why is the \"warping\" by a sigmoid benificial and why does a softplus help in Case 3? Many details are left for the reader to figure out. Also, it is mentioned that you optimize Eq.5 in L252 but that you do it with gradient descent is loosely thrown in in L283. Overall, Eq.5 could be discussed more, too.\n\n- The nested MAP (nMAP) is only described in Fig. 4 of the main paper and I can not seem to find any description of bMAP at all. Could you please clarify this? While I agree that how nested merging is done is very intuitive a better description would be helpful.\n\n- It would be helpful to discuss related works more, in particular, Rame et al. 2023, who also seek to use Task-Arithmetic-based merging for Pareto fronts of multiple objectives"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The paper lacks comparative experiments with other established MOOP algorithms, such as MOEA/D and NSGA-II. Including these comparisons would enhance the evaluation of both solution quality and computational efficiency, providing a clearer context for assessing the performance of the proposed method. Additionally, brute force may not be the most appropriate baseline for this type of analysis and could be replaced by a simple MOOP method like MOEA/D.\n\n2. The experiments related to large language models are somewhat limited. Typically, mainstream model fusion effectiveness is tested on benchmarks like math and code tasks, as seen in recent work such as DARE (arXiv: 2311.03099). Including comparisons on these types of benchmarks would lend stronger support to the method’s effectiveness relative to established model fusion approaches.\n\n3. The paper would benefit from additional results or comparative analysis with other state-of-the-art model merging methods, such as Adamerging (ICLR 2024) and DELLA-Merging (arXiv: 2406.11617v1). Adding these would help situate the proposed method within the current landscape and highlight any unique strengths or trade-offs."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This is the first algorithm to estimate the Pareto front for task-vector-based model merging without relying on gradient descent, which is often computationally expensive.\n \n2. The Nested MAP variant reduces computational complexity, making it suitable for large-scale problems."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Model Merging with Amortized Pareto Front (MAP), a low-compute algorithm that merges multiple single-task models into a multitask model by efficiently identifying a Pareto set of scaling coefficients. MAP uses quadratic surrogate models to reduce evaluation costs while providing flexible solutions to balance competing task objectives."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation for applying Multi-Objective Optimization Problems (MOOP) in model merging needs further clarification. While this work represents a direct application of MOOP in this area, it lacks an in-depth explanation of why MOOP would be advantageous over traditional gradient descent-based methods. \n\n2. To enhance the clarity and impact of the paper, consider including a direct comparison with gradient descent-based optimization. Specifically, the authors could discuss MOOP’s potential benefits in terms of computational efficiency, ability to handle non-differentiable objectives, flexibility in exploring trade-offs, and its capacity to fully explore the Pareto front, which gradient-based methods may not achieve. This comparison would help elucidate the unique value of MOOP for model merging.\n\n3, The paper would benefit from a more thorough comparative analysis with recent relevant works, particularly \"Knowledge Fusion by Evolving Weights of Language Models\" and \"It's Morphing Time: Unleashing the Potential of Multiple LLMs via Multi-objective Optimization.\" Both studies propose innovative methods for model merging with evolutionary approaches.\nA direct comparison with these methods could clarify the specific advancements and trade-offs associated with the MAP approach, such as variations in fusion strategies, optimization techniques, or performance across diverse benchmarks. Discussing how MAP aligns or diverges in terms of methodology, effectiveness, or scope will provide readers with a more complete understanding of its contribution to the field."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We provide a computation-efficient algorithm for finding the Pareto front representing the trade-offs during model merging caused by conflicting objectives between different tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024map,\ntitle={{MAP}: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1v7SRWsYve},\nnote={under review}\n}"
},
"abstract": {
"value": "Model merging has emerged as an effective approach to combine multiple single-task models into a multitask model. This process typically involves computing a weighted average of the model parameters without any additional training. Existing model-merging methods focus on enhancing average task accuracy. However, interference and conflicts between the objectives of different tasks can lead to trade-offs during the merging process. In real-world applications, a set of solutions with various trade-offs can be more informative, helping practitioners make decisions based on diverse preferences. In this paper, we introduce a novel and low-compute algorithm, \\textbf{Model Merging with Amortized Pareto Front (MAP)}. MAP efficiently identifies a Pareto set of scaling coefficients for merging multiple models, reflecting the trade-offs involved. It amortizes the substantial computational cost of evaluations needed to estimate the Pareto front by using quadratic approximation surrogate models derived from a pre-selected set of scaling coefficients. Experimental results on vision and natural language processing tasks demonstrate that MAP can accurately identify the Pareto front, providing practitioners with flexible solutions to balance competing task objectives. We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"model merging",
"transfer learning",
"multitask learning",
"task arithmetic",
"multi-objective optimization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/76a2ff5e787303814e84b72a8c83c14de87253ac.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1vggIT5vvj | Cross-Attention Head Position Patterns Can Align with Human Visual Concepts in Text-to-Image Generative Models | main | Active | text-to-image diffusion model;diffusion model;text-to-image generative model;cross-attention | generative models | 5;6;6;8 | 4;5;5;4 | 3;2;3;3 | 3;2;2;3 | 2;3;3;3 | 6.25 | 4.5 | 2.75 | 2.5 | 2.75 | -0.229416 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "There have been recent works that show the <SOT> and <EOT> CAs capture different concepts. I would be interested to see if the authors found anything interesting regarding HRV and these tokens. I am also curious as to how the weakening and strengthening would work on more complex images that share entangled objects and concepts. For instance, what would weakening of \"melting\" look like for \"a plastic car melting\". I think this would be an interesting experiment since adjective and verb concepts are entangled with an object in a given image and HRV might to better in these cases than the counterparts."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The motivation of the paper is clear and build on a well studied problem of understanding the role of cross-attention and what they learn in editing T2I models. The experiments are visually appealing and tell the story of the paper well, especially the weakening of HRVs that shows weakening based on the most and least relevant concepts / heads. The authors show that using HRVs to edit images works better than SDEdit,P2P, etc. They also show improvement over Attend and Excite for the problem of catastrophic forgetting in T2I models.\n\n While in the weaknesses, I do mention my thoughts on the originality of this work, I believe using previous findings around CAs and targeting different heads and their roles in generating different concepts would be interesting to the community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes Head Relevance Vectors (HRVs). HRs are an extension of the findings from previous works such as Hertz et al.'s P2P where cross attention maps were used to better understand t2i models and to edit images via prompts. HRV proposes using multiple concept words and concatenating them into a concept embedding matrix K which can then be applied to different heads of the cross-attention and by doing so, disentangle the different heads based on the concept they seem to be focusing on. The authors show this disentanglement of heads based on the concepts learned improved editing of images."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I would argue that the work, while interesting, does not have new insight compared to what previous works such as P2P and Diffusion Self-Guidance have already already shown in regards to the role of cross-attentions. However, this work does take a step towards using those findings to narrow down on head-level manipulation of concept vectors. It goes without saying that T2I models could benefit from more comprehensive evaluation on larger set of generated images / human evaluation. However, I do understand the challenges this poses as well."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could you give more details about why there are some irrelevant concepts after a certain point of ordered weakening (Fig 9)?\n2. Could you give more details about how the \"h\" is chosen/computed in the method of HRV updates?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Good motivation and a clear idea\n2. Comprehensive quantitative and qualitative comparisons with many other solutions\n3. The experiments, settings, and other details are mainly clearly explained"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method of constructing so called \"HRV\" vectors, which align with visual concepts. The authors leverage cross-attention layers of Stable Diffusion model to learn those vectors for predefined concepts. The proposed method helps to solve three known issues of the image synthesis task."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It requires fixing a set of concepts beforehand for every HRV construction. Does not have a study of how the HRV matrix will be changed when some concepts are changed or replaced after the construction.\n2. Manual settings, choice, and configuration are required for every concept (case) during inference (Sec 5.1, Fig 5). \n3. Lack of failed cases, there are no details about the limitations of this method.\n4. Even though there is a section for bigger / novel models (SDXL), all experiments, studies, and comparisons are based on SD v1. New models might eliminate many issues the proposed method tries to solve."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I wonder the interpolation between different strengths of a head. For example, interpolating material=[-2, 2]?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper provides a new perspective in understanding the features in text-to-image generation: different heads.\n2. Qualitative examples (Figure 3a) and CLIP similarities (Figure3b) along weakening MoRHF and LeRHF clearly show the effect of weakening different heads.\n3. The appendix provides extensive qualitative results to remove doubt for cherry-picked results.\n4. The proposed method is useful for three applications: 1) correcting mis-interpretation of words in t2i generation, 2) boosting prompt-to-prompt, 3) reducing neglect in multi-object generation.\n5. Discussions resolve natural questions: extension to SDXL and effect across different timesteps."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tries to understand cross-attention (CA) layers *regarding attention heads*.\n* The authors introduce N head relevance vectors (HRV) for N visual concepts.\n* The strength of an attention head to the HRVs represent the relevance of the head to the concept.\n\nAbove properties are interpreted by *ordered weakening analysis*.\n* Sequentially weaken the activations of CA heads to observe weakened visual concepts.\n\nBoosting and reducing the strength of different heads control the strength of visual concepts. It helps three applications: 1) correcting mis-interpretation of words in t2i generation, 2) boosting prompt-to-prompt, 3) reducing neglect in multi-object generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper should provide principles of the proposed approaches.\n * L224 Why should we count each visual concept having the largest value to update the HRVs?\n * This is the most critical weakness for not giving a higher rating. I think the perspective is worth noticing but a solid paper should provide why/how it works.\n * Answering this question with theoretical justifications or intuition would strengthen the paper.\n2. HRV should be described more clearly.\n * L205 a concatenation of token embeddings // concat along which axis? I guess the result of concatenation is $N \\times (d + H)$. Then the query Q does not match the cross-attention operation because $Q\\in R^d$. Am I missing something?\n * L210 K1, ..., KN should be denoted in Figure 2.\n * Adding equations and proper notations would help readers to understand the operation.\n3. Human evaluation should be explained in more detail. Appendix C.2 is not enough. Adding a table with Number of participants, Number and types of questions, Number of questions per participant, and Any quality control measures used would strengthen the user study.\n\nMisc.: Related works -> Related work"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The idea of correlating visual concepts with diffusion models is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper mainly focuses on the explainability of text-to-image diffusion model. The authors propose a new metric based on the cross-attention heads in the diffusion UNet to illustrate the correlation between each attention head and visual concepts. Based on the proposed Head Relevance Vectors, the authors further propose several applications including solving polysemous words problems and image editing."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I suggest the authors add textual description of the proposed HRV instead of directly showing Fig. 2 and Fig. 4 for better understanding.\n2. I wonder why <SOT> and many <EOT> are required during update of HRV?\n3. It would be better to used SDXL or some more recent models such as SD3 as primary model, given that SD1.5 is kind of outdated.\n4. It would be better to add a random weakening baseline in Fig.3.\n5. In Sec.5.1 the authors show that by utilizing HRV the SD can generate more proper concepts. I wonder if this method can be compared with using classifier guidance, where the model is encouraged to align the generated image with wanted concepts in terms of CLIP score."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024crossattention,\ntitle={Cross-Attention Head Position Patterns Can Align with Human Visual Concepts in Text-to-Image Generative Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1vggIT5vvj},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent text-to-image diffusion models leverage cross-attention layers, which have been effectively utilized to enhance a range of visual generative tasks. However, our understanding of cross-attention layers remains somewhat limited. In this study, we present a method for constructing Head Relevance Vectors (HRVs) that align with useful visual concepts. An HRV for a given visual concept is a vector with a length equal to the total number of cross-attention heads, where each element represents the importance of the corresponding head for the given visual concept. We develop and employ an ordered weakening analysis to demonstrate the effectiveness of HRVs as interpretable features. To demonstrate the utility of HRVs, we propose concept strengthening and concept adjusting methods and apply them to enhance three visual generative tasks. We show that misinterpretations of polysemous words in image generation can be corrected in most cases, five challenging attributes in image editing can be successfully modified, and catastrophic neglect in multi-concept generation can be mitigated. Overall, our work provides an advancement in understanding cross-attention layers and introduces new approaches for fine-controlling these layers at the head level."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"text-to-image diffusion model",
"diffusion model",
"text-to-image generative model",
"cross-attention"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4933e5cd4bbfe3ea054938216e90eaf24e22aed6.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/4a002335790b23ae99b4ac09af34f10f78851fb2.zip"
},
"title": {
"value": "Cross-Attention Head Position Patterns Can Align with Human Visual Concepts in Text-to-Image Generative Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1vjMuNJ2Ik | Stable Diffusion Feature Extraction for Sketching with One Example | main | Active | Diffusion Model;Stable Diffusion;Domain Adaptation;Sketch Extraction;Single Shot | applications to computer vision, audio, language, and other modalities | 3;3;5;5;5;5 | 5;4;4;3;4;3 | 2;2;3;2;3;2 | 2;1;2;2;2;2 | 2;2;2;2;2;2 | 4.333333 | 3.833333 | 2.333333 | 1.833333 | 2 | -0.685994 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1)\tIn the feature selection stage, is the clustering performed for every training image from all its LxT features or clustering from all training images? While there should be K cluster centers, what is the definition of the feature gate G*?\n2)\tWhy set lt=10 in Eq 1? \n3)\tWhat is the difference between the first PCA figure and the second one in Fig 2? Features of two training images? \n4)\tIn Fig. 6, it seems that every sketch contains a person/character in the bottom row, showing different content from the top row. Without showing the corresponding reference image, it is not clear how the content of the input source image is preserved.\n5)\tIn Table 2 and Table 4, what model is used? DiffSketch or the DistilledDiffSketch? \n6)\tThe training time and inference time are not clear to me. The training time of DiffSketch is about 3 hours by sampling 1000 times in CDST. The average inference time for DiffSketch is 4.74s. What is the input of DiffSketch during inference? Based on my understanding, DiffSketch requires the SD features to generate a pair of image and sketch. It cannot directly transfer an image to a desired sketch style. \n7)\tWhat is the distance in Table 3? Distance between which features? For example, each image has 13 feature cluster centers and what are the distances? Distance between 13 cluster centers and all features? If so, the Euclidean distance is definitely smaller than random sampling or equal-time sampling. This distance does not present much information. \n8)\tWhen compared with other methods, which model is used? If DiffSketch_distilled is used for comparison, 30k image-sketch pairs are required to train this model. It takes 4.74 seconds to generate each pair using DiffSketch, so it takes about 150k seconds (50 hours) to generate 30k sketch pairs to train DiffSketch_distilled. It is not fair to just say the proposed method makes inferences in 0.014s with just a single training example in Fig. 1. 
\n9)\tSome works have been officially published. For example, Luo’s work Diffusion hyperfeatures: Searching through time and space for semantic correspondence in Neurips 2023."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed two-level aggregation (SD features+VAE) makes full use of SD UNet features and VAE features to capture both overall structure and high-frequence details in generating high-quality images."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces DiffSketch, a novel method for generating sketches from text or images, using only a single drawing example for training. \n1.\tThe proposed method explores the features of various layers and timesteps from a pretrained stable diffusion model. The proposed sketch generator aggregates the selected features for the SD model and a pretrained VAE decoder and generates a pair of image and sketch. \n2.\tTo train the sketch generator G_sketch, a triplet, consisting of the diffusion feature, a generated image, and a manually drawn sketch for the image, is required. The training loss follows the definition of Mind-the-gap [Zhu et al 2022]. A novel sampling scheme, condition diffusion sampling for training (CDST), is proposed to ensure the diversity of training samples.\n3.\tWhile training the G_sketch from a single pair of generated image and drawn sketch requires high computation and memory cost, this work further trains a distilled version, DiffSketch_distilled, using the image-to-image translation framework with 30k generated pairs generated using DiffSketch."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe framework is not flexible for practical use. A manually drawn sketch is required for a generated image with the diffusion features to train the sketch generator. However, this is not easy to obtain. In the experiments, the authors use three sketch styles that can be automatically generated for quantitative evaluation. However, existing sketch pairs cannot be used for training.\n2.\tThe ablation study shows that the two-level aggregation (SD features+VAE) and L1 loss are the most effective designs. The proposed CDST and SD feature selection bring weak improvement. \n3.\tThis paper should compare with the sota Style Injection in Diffusion works [Chuang-CVPR 2024]. BTW, the work “ Jiwoo Chung, Sangeek Hyun, and Jae-Pil Heo. Style injection in diffusion: A training-free approach for adapting large-scale diffusion models for style transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8795–8805, 2024a.” is cited twice (same as Chuang-2024b). In the supplementary results, the authors only compare with DiffStyle, not the results from Chuang-CVPR 2024. \n4.\tThe organization and writing of this paper could be improved in many ways. For example, the section organization is confusing. For example, Sec. 3 and Sec. 4 could be reorganized since Sec. 3.2 and 3.3 describe the detailed process of G_sketch and are not related to Sec. 3.1. Sec. 4.1 is the same as Sec. 3.2 and 3.3."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I wonder the performance of this method on some real sketch dataset, such as Sketchy dataset.\n\n[1] The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies, TOG 2016."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper introduces a novel method of using features from a pretrained Stable Diffusion model for sketch generation, which is a fresh perspective for this task.\n2. By requiring only one reference sketch for training, the proposed method addresses the common issue of limited sketch datasets.\n3. The authors provide thorough analysis and justification for their feature selection and aggregation process."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel method called DiffSketch for generating sketch-style images from natural images based on a reference sketch style. The key innovation lies in utilizing features from a pretrained Stable Diffusion model to perform sketch generation with only one example sketch for training, addressing the challenge of data scarcity in sketch datasets. The method involves selecting representative features from multiple timesteps of the diffusion process and aggregating them to train a sketch generator that can generalize to various images. Additionally, the authors introduce a distillation process to streamline the model for efficient inference."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The target sketches used in this work are not real human-drawn sketches, and the resulting sketches differ significantly from those drawn by humans. This raises questions about the applicability of the method to authentic sketch generation.\n2. The experiments primarily compare with style transfer works and a few sketch extraction methods, lacking comparison with relevant works like DiffSketcher, CLIPasso, and Clipascene.\n3. The evaluation is conducted on edge extraction datasets, which may not fully represent the diversity of real-world sketches. Testing on datasets with real human sketches, such as TU-Berlin or Sketchy datasets, could provide a more comprehensive assessment."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "i) boundaries or edgemaps can be extracted from images, but sketches can only be hand-drawn or generated, not extracted from images.\nii)In this paper, the definitions of \"personalized sketch extraction\" and \"sketch style\" require further clarification. The article treats various contour extraction techniques as different styles, which, however, is significantly different from the true concept of individual style.\niii) The BSDS500 dataset is meticulously constructed for edge detection and does not include any sketches. Although the edges are carefully annotated boundaries collected from multiple users, there remains a significant difference when compared to hand-drawn sketches. Hand-drawn sketches are characterized by their unique abstraction and morphological variations, setting them apart from precise edge annotations, which poses one of the main challenges in the field of image-to-sketch generation (image2sketch). Therefore, how do the experimental results on the BSDS500 dataset demonstrate the superior performance of the proposed method in the realm of sketch generation?\niv)The paper claims to possess the ability akin to one-shot learning, but the specific details of this capability do not seem to be clearly articulated within the text."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "In the exploration of the extraction or generation of sketches from images or text, researchers often face the challenge of insufficient paired data (sketch-text pair or sketch-image pair). This paper ingeniously utilizes the existing text2img generation models, effectively reducing the dependence on large-scale datasets and achieving the capability of generating sketches with just a single sample. In order to more accurately evaluate the effectiveness of the generated sketches, this paper proposes a new set of evaluation criteria. Moreover, the paper conducts a comparative analysis with many existing methods. Through extensive experimental validation, the method proposed in this paper demonstrates its superiority and efficiency in multiple aspects."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This article focuses on the technology of image or text-driven sketch extraction and generation based on one example. The research proposes a feature selector that can accurately screen the most discriminative features from the SD model. Subsequently, through a carefully designed feature aggregator, the organic integration of multi-level features is achieved. On this basis, a feature decoder is used to generate the corresponding sketches. The article further delves into the impact of features at different timesteps on the sketch generation process and innovatively proposes a set of new evaluation criteria, providing strong theoretical support for research in the field of sketch generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper slightly lacks in terms of technological innovation and fails to contribute new perspectives or insights to the field of sketch generation. Additionally, the experimental design in the paper seems to lack the persuasive power to fully demonstrate its arguments. Although concepts such as \"personalized sketch extraction\" and \"sketch style\" are mentioned in the text, the experiments do not delve into the deep exploration of these areas. Furthermore, the paper seems to be somewhat confused in distinguishing between boundaries extracted from images and hand-drawn sketches, failing to make a clear distinction between the two. Finally, the logical structure of the article seems to require further refinement and optimization."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "[+] The paper makes novel observations regarding Stable Diffusion (SD) with proper justification for hyperparameter selection and makes efficient use of inherent bias within SD for one shot style transfer between sketch and images. Especially regarding (i) the choice of number of clusters, as well as the observations across different timesteps, (ii) the kind of features extracted by UNet and VAE decoder.\n\n[+] The paper makes astute observations regarding the limitation of CLIP for sampling scheme and addresses them"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an algorithm for on shot style transfer given an input image to a sketch style. It makes crucial information regarding pre-trained knowledge within Stable Diffusion and its biases and leverages them to their advantage. Further the paper addresses limitations in their proposed approach and proposes efficient techniques to overcome them, as in their novel sampling technique. Lastly the work compares with state of the art sketch based style transfer algorithms and show that the proposed algorithm provided substantial improvement."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "[-] Ablative study section of this paper is very weak. It is missing ablation studies of the different losses used and provides only for the L1 loss. However, according to the claims made in the paper, all of the proposed losses are very important. Thus, it is necessary to quantitatively and qualitatively judge their contribution in the final output.\n\n[-] Seeing the importance of L1 loss via ablation studies a hyperparameter search for weight of L1 (and other losses) seems crucial to make the most out of the proposed method.\n\n[-] Readability is hindered by the quality of sentence constructions throughout the entire paper. The entire paper should be revisited for better English and sentence construction\n\n[-] The different sections in the paper are organized very poorly and the reader often has to move multiple sections to understand working of a particular concept described within the paper.\n\n[-] One of my major concerns is that – it is not at all clear how the distillation is happening in Sec. 4.4 to get the \"DiffSketch_{distilled}\" model. It briefly says about Pix2PixHD model, without any kinds of detail on the distillation. This section is extremely vague.\n\n[-] In ablation study, \"one timestep\" gives competitive performance to the proposed method and as per [A], timestep has a huge impact on the performance of the model so including comparison with results at a range of steps would be useful in verifying the robustness of the model.\n\n[-] Through as per Table 4 the proposed algorithm works well, the dataset used for validation is very small, combined with the algorithm requiring the user to draw a sketch severely limits the algorithm's capabilities and its adaption for current sketch based datasets.\n\n[-] The major contribution of the paper is the feature combination of SD and VAE. It would have been great to see a quantitative comparison of SD+VAE and SD only feature extraction.\n\nReference:\nA. 
Denoising Diffusion Implicit Models, ICLR 2021."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The objective also involves CLIPsim, can this be studied with ablative? \n\nAlso the images in the PDF file is very compressed, making it difficult to evaluate the quality."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The high-level insight seems reasonable – manipulating the distribution of network features during diffusion process is a reasonable choice for achieving this sketching visual effect.\n\nSupplemental material with both quantitative and qualitative data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a diffusion based method to convert images to line drawings. The style of desired line drawing can be specified by a reference image"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although the high-level insight seems okay, the details of the method is extremely difficult to understand for me. I spent one afternoon to understand section 3.1 and 3.2 and still have no idea how this works. The “feature selection” and “aggregation” somewhat also links to “Open vocabulary panoptic segmentation with text-to-image diffusion models” and “Diffusion hyper features: Searching through time and space for semantic correspondence” but those previous “aggregation” are some sparse point or sparse mask for diffusion features. To me these does not explain what is the idea behind the stylization.\n\nThe “sketch generator” in 4.1 seems a distilled model trained from stable diffusion. To me it seems the stylization comes from the training of the model on the reference, not from some “diffusion feature aggression”?\n\nAlso it is not clear why we need to modify the VAE. The results in this paper do not look difficult to process for any existing SD VAEs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Questions on unclear details and poor presentations:\n\n1.\tLine 173, `Fig 2` should be `Fig. 2`\n2.\tWhat does the feature gate $G*$ mean?\n3.\tIn Fig. 2, there are 12 curves. Why there are 12 curves? 12 corresponding to what is not very clear to me.\n4.\tEq. (2). There is no definition of $v_{i,n}$.\n5.\tLine 236, $CH$ should be $\\text{CH}$\n6.\tFigure 4, $U_{md}$ should be $U_m$\n7.\tLine 313, what prompt C is used?\n8.\tLine 293, `$I_{source}$ and $I_sketch$` should be `$I_sketch$ and $I_{source}$`\n9.\tLine 325, avoid using $S$ since $S$ has been used in Eq. (6) and has different meanings\n10.\tLine 338 and Line 350, the regularization is not given. How to employ regularization?\n11.\tLine 354, why not using more test data to perform FID evaluation?\n\nAbout experimental results\n\n1.\tPlease provide the scores of Equal Feature in Table 2\n2.\tThe authors show good performance on HED and XDoG, but w/o CDST has better performance on anim. However, HED and XDoG are less similar to the real human-drawn sketch styles. While anim looks more like human-drawn sketches. Does this mean the propose CDST is not suitable for the human-drawn sketches? \n3.\tFigure 7, please include the real human-drawn sketches for visual comparison.\n4.\tThe results in the supp. such as Figure 22 show that the proposed method fails to imitate the Artist 1’s style as Semi-Ref2sketch. The proposed method fails to generate clean and sparse sketches. Please explain this limitation."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**Originality** The main idea of analyzing the diffusion features to select and aggregate valid features makes sound to me. In addition, the proposed diffusion-based sampling scheme to generate diverse examples is interesting to me."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a sketch extraction method with only one style example. The main idea of this paper is to train a sketch generator to predict the sketch images from the photo images generated by a fixed pre-trained diffusion model. The sketch generator integrates features from the diffusion model and the VAE decoder, and is trained on one image-sketch pair through the CLIP-based directional loss. After training, a pix2pix-based framework is trained to distill the abundant paired data generated by the diffusion model and the sketch generator for fast inference. The contributions lie in the idea to use diffusion models to generate paired data to solve the one-shot training problem with a special sampling strategy to ensure the diversity of the generated data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Poor presentation**. The details of this paper are generally hard to follow. This paper contains many submodules and process. At least, a summarized algorithm could help the reader to understand the full process. \n\nI found this paper is not self-contained. Many parts need to refer to the Appendix to help understand. See [Questions] for the details.\n\nThe reference format is poor. All the reference uses \\citet, making it difficult to tell the main text and reference apart. Should use \\citep instead! (`large datasets Seo et al. (2023)` -> ` large datasets (Seo et al. 2023)`.)\n\n**Limited applications** This paper claims that the `method is tailored specifically for sketch generation`. However, I didn’t see any designs that only work for sketches. XDoG looks just like the binarized image rather than sketch image. And if the user draws a stylish image rather than a sketch of the input image, this method can still train. In the original paper of CLIP-based directional loss, the StyleGAN can be trained for various style editing tasks in addition to the sketch style. This paper only shows applications on sketches, which is limited. \n\nIn addition, the authors use HED, XDoG as two style types. These two types of sketch extraction are known, which has little value to invest how to imitate the sketch style. Why we have to train such complicated pipeline to imitate the simple HED and XDoG sketches? More complicated human-drawn sketches are what we truly want."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Selecting features and controlling conditions of the diffusion model for sketch extraction"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024stable,\ntitle={Stable Diffusion Feature Extraction for Sketching with One Example},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1vjMuNJ2Ik},\nnote={under review}\n}"
},
"abstract": {
"value": "Sketching is both a fundamental artistic expression and a crucial aspect of art. The significance of sketching has increased alongside the development of sketch-based generative and editing models. \nTo enable individuals to use these sketch-based generative models effectively, personalizing sketch extraction is crucial. In response, we introduce $\\text{DiffSketch}$, a novel method capable of generating various geometrically aligned sketches from text or images, using a single manual drawing for training the style. Our method exploits rich information available in features from a pretrained Stable Diffusion model to achieve effective domain adaptation. To further streamline the process of sketch extraction, we further refine our approach by distilling the knowledge from the trained generator into the image-to-sketch network, which is termed as $\\text{DiffSketch}_{distilled}$. Through a series of comparisons, we verify that our method not only outperforms existing state-of-the-art sketch extraction methods but also surpasses diffusion-based stylization methods in the task of extracting sketches."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Diffusion Model",
"Stable Diffusion",
"Domain Adaptation",
"Sketch Extraction",
"Single Shot"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/153fbcefce5be60c7248f3f5fd80d0ad5c9dc201.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Stable Diffusion Feature Extraction for Sketching with One Example"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1vrpdV9U3i | Variational Search Distributions | main | Active | black box optimization;Bayesian optimization;variational inference;generative models;level set estimation | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | 3;6;6;6 | 5;3;2;3 | 4;4;3;3 | 2;3;3;3 | 3;3;2;2 | 5.25 | 3.25 | 3.5 | 2.75 | 2.5 | -0.927173 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The following questions are sincere:\n\n- Who is the audience for this paper? \n\n- What questions is this paper answering?\n\n- What does the variational inference framing get us in the end? Access to a set of tools for theoretical analysis?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "This paper demonstrates a clarity of thought and composition that is commendable, I particularly enjoyed the related work section.\n\nLikewise I do not have any major concerns regarding the technical soundness of the results presented.\n\nAs a good conceptual introduction to the topic, I think this draft could be useful to researchers new to the topic with some revisions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper casts sequential black-box optimization as a variational inference (i.e. amortized optimization) problem, and uses this perspective to unify a collection of different black-box optimization algorithms under a common theoretical framework and presents some proof of concept results on easy sequence optimization tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have two general impressions of this paper. \n\nFirst, it seems like the authors have not really chosen a direction for the paper. There are at least three different directions here, A) a unifying view of sequential black box optimization algorithms, B) a practical algorithm for sequential BBO, and C) theoretical analysis of convergence rates of a particular sequential BBO algorithm under strong assumptions. I would suggest you pick no more than two directions, preferably one. I actually think this particular subfield could really benefit from a more holistic perspective of the work that has been done, as I constantly see minor variations of these algorithms in my social media feed and review stack with no apparent awareness of the relationships between them. From what I can tell from this draft, it seems that A and C likely play more to your strengths.\n\n\nSecond, the authors seem blissfully unaware of a substantial body of work on this topic. To be quite candid, the paper reads like it was written circa September 2021. This is not mere rhetoric. The most recent baseline the authors consider was published at ICML 2021. It is also odd that two of the baselines you did include, DbAS and CbAS, are not even designed for the sequential setting. As a very active researcher in this exact area, I struggle to understand who this paper is for and how the authors pictured their place in the broader dialogue on this topic. I am sure you worked very hard on this paper and I commend your effort, but I honestly believe the best advice I can give you is to talk to more people working on this topic, preferably from outside your immediate academic circle. While it is difficult to hear this feedback, one of the functions of peer review is to reveal \"unknown unknowns\". I want to be sure this review is constructive, so I will provide some key references if you are serious about diving into this topic. 
You should also consider making use of tools like [Connected Papers](https://www.connectedpapers.com/) to improve your literature review process and avoid this situation in the future. \n\nYou can start with [A survey and benchmark of high-dimensional Bayesian optimization of discrete sequences](https://arxiv.org/abs/2406.04739). This work is the most up-to-date complete survey on the topic I have seen, and the benchmarking rigor is notably good. This paper is associated with two repositories, [poli](https://github.com/MachineLearningLifeScience/poli) and [poli-baselines](https://github.com/MachineLearningLifeScience/poli-baselines). The former contains a suite of test functions that are much more up to date than the combinatorially complete landscapes considered in this paper, and the latter contains a suite of baseline solvers. You may even want to consider contributing your method as a solver to poli-baselines at some point.\n\nSome key axes of variation to consider: \n\nHow is the optimization problem solved? Most fall into one of three categories, directed evolution (which you seem to be familiar with based on your inclusion of AdaLead and PEX), generative search with explicit guidance, e.g. [2, 3, 4, 5, 6], and generative search with implicit guidance [7, 8], which can also be seen as a kind of amortized search. I could cite more papers but I believe I have made my point. Algorithms also differ in their handling of constraints, and their approach to managing the feedback covariate shift induced by online active data collection by an agent. \n\nIn particular I will draw your attention to [a tutorial for LaMBO-2](https://github.com/prescient-design/cortex/blob/main/tutorials/4_guided_diffusion.ipynb) if you want to start considering more up to date baselines, however I would recommend using the solver interface provided in poli-baselines for actual experiments. 
You may also be interested in Ehrlich functions if you would like a convenient test function that is much more difficult to solve than small combinatorially complete landscapes but still easy to work with [9]. Ehrlich functions are available in [a small standalone package](https://github.com/prescient-design/holo-bench) or [as part of the poli package](https://machinelearninglifescience.github.io/poli-docs/using_poli/objective_repository/ehrlich_functions.html).\n\nWhile I'm sure this is not the outcome you hoped for, science is a dialogue, and good science requires awareness of what is happening outside your academic niche. Hopefully my feedback is clear and actionable enough to benefit this work and your progression as a scientist.\n\nReferences\n\n- [1] González-Duque, M., Michael, R., Bartels, S., Zainchkovskyy, Y., Hauberg, S., & Boomsma, W. (2024). A survey and benchmark of high-dimensional Bayesian optimization of discrete sequences. arXiv preprint arXiv:2406.04739.\n- [2] Tripp, A., Daxberger, E., & Hernández-Lobato, J. M. (2020). Sample-efficient optimization in the latent space of deep generative models via weighted retraining. Advances in Neural Information Processing Systems, 33, 11259-11272.\n- [3] Stanton, S., Maddox, W., Gruver, N., Maffettone, P., Delaney, E., Greenside, P., & Wilson, A. G. (2022, June). Accelerating bayesian optimization for biological sequence design with denoising autoencoders. In International Conference on Machine Learning (pp. 20459-20478). PMLR.\n- [4] Gruver, N., Stanton, S., Frey, N., Rudner, T. G., Hotzel, I., Lafrance-Vanasse, J., ... & Wilson, A. G. (2024). Protein design with guided discrete diffusion. Advances in neural information processing systems, 36.\n- [5] Maus, N., Jones, H., Moore, J., Kusner, M. J., Bradshaw, J., & Gardner, J. (2022). Local latent space bayesian optimization over structured inputs. 
Advances in neural information processing systems, 35, 34505-34518.\n- [6] Maus, N., Wu, K., Eriksson, D., & Gardner, J. (2022). Discovering many diverse solutions with bayesian optimization. arXiv preprint arXiv:2210.10953.\n- [7] Tagasovska, N., Gligorijević, V., Cho, K., & Loukas, A. (2024). Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient. arXiv preprint arXiv:2405.18075.\n- [8] Chen, A., Stanton, S. D., Alberstein, R. G., Watkins, A. M., Bonneau, R., Gligorijević, V., ... & Frey, N. C. (2024). LLMs are Highly-Constrained Biophysical Sequence Optimizers. arXiv preprint arXiv:2410.22296.\n- [9] Stanton, S., Alberstein, R., Frey, N., Watkins, A., & Cho, K. (2024). Closed-Form Test Functions for Biophysical Sequence Optimization Algorithms. arXiv preprint arXiv:2407.00236."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* What's 'x' in the title of Figure 1?\n* What are the limitations of this approach?\n* How is diversity within a batch enforced? \n* The reverse KLD is known to result in mode collapse. Why wasn't this an issue?\n* Which variation reduction method did you use for the gradient estimator?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The problem is important as it has applications in pharmaceutical drugs/enzyme design.\n* The paper paper is well written and the method is sound\n* Experimental results on high dimensional datasets demonstrate superiority of the approach"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a black box varoiational inference approachfor discrete designs generation. The authors derive asymptotic convergence rates for learning the true conditional generative distribution of designs. Compelling results on high dimensional sequence-design problems are demonstrated."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The method lacks novelty, it's based on putting together blocks that have already been proposed in the litterature\n* The paper clarity can be improved with an overview plot of the method"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How robust are the results to the selection of the threshold $\\tau$ and the batch size $B$?\n- While the reviewer is not familiar with the field, could the authors give some intuitions about the difference between VSD and active learning approaches like Bayesian optimization, and why VSD is better?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper formulates the batch active search problem in the variational inference framework and provides theoretical guarantees to the learned distribution based on the sequentially attained data.\n- Experimental results on real-world biological datasets demonstrate the practical use of the algorithm and its effectiveness to solve the problem."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper develops the variational search distribution method to solve the active search problem in biological design. VSD estimates the super level-set distribution in a sequential manner by generating batches of data points for sequential experiments. Empirical results on optimizing protein fitness in several datasets showcase the effectiveness of VSD."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The precision of VSD and most other methods is decreasing with more rounds in TrpB and TFBIND8 datasets while the recall values are in general low. However, an ideal method should achieve a better estimation of the ground truth super level-set distribution as more samples are collected. This may be due to the initial training set size being too large or the fitness landscape being easy to model. How do the models perform with a smaller initial training set size?\n- How is VSD compared with the simple and commonly used directed evolution method?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.Since your algorithm heavily relies on another model ($p(x | D)$), I would be highly interested in better understanding the influence of a good prior on your variational distribution.\n2. Regarding the GFP experiments, do you sample already existing sequences ? What is the influence of the relative poor performance of the oracle on ood data on the interpretation of the results ?\n3. How can you explain that only a very simple prior such as a mean field performs on average better ? It seems quite logical for GFP for instance where a wild type exists, however it is less intuitive for datasets without wild type.\n\nTypo: the recall and precision have the same expression."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is clear, well-written, and aligns with well-established benchmarks in the field, such as CBAS (Brookes et al.). \nThe model is supported by convergence analysis and an extensive set of well-handled experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present a novel variational method for learning to sample from rarely observed events, aiming to minimize a distance between the distribution of interest, namely $p(x∣y>t)$, and its parametric variational counterpart $q(x|\\phi)$. The problem is reformulated to leverage the “whole dataset,” not just rarely observed events, and is expressed as Equation (5), which comprises two terms: $log p (y>t∣x)$ and the negative KL divergence between$q(x|\\phi)$ and $p(x)$. The authors' final proposal is to estimate $p(y>t∣x)$ using a parametric function instead of a simple PI estimate. The variational distribution is optimized by a REINFORCE gradient estimator."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the model description is clear, the model comprise a parametric distribution $p(x|D_0)$ which might be the biggest model shortcoming originating from the model own formulation. \n\nIts major impact is that it reweights the gradient estimates of $q(x|\\phi)$. Intuitively, how would that compare simply to the iterative strategy of Cbas ?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We develop variational search distributions (VSD), a method for finding discrete, combinatorial designs of a rare desired class in a batch sequential manner with a fixed experimental budget."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024variational,\ntitle={Variational Search Distributions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1vrpdV9U3i},\nnote={under review}\n}"
},
"abstract": {
"value": "We develop variational search distributions (VSD), a method for finding discrete, combinatorial designs of a rare desired class in a batch sequential manner with a fixed experimental budget. We formalize the requirements and desiderata for this problem and formulate a solution via variational inference. In particular, VSD uses off-the-shelf gradient based optimization routines, can learn powerful generative models for designs, and can take advantage of scalable predictive models. We derive asymptotic convergence rates for learning the true conditional generative distribution of designs with certain configurations of our method. After illustrating the generative model on images, we empirically demonstrate that VSD can outperform existing baseline methods on a set of real sequence-design problems in various biological systems."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"black box optimization",
"Bayesian optimization",
"variational inference",
"generative models",
"level set estimation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3061468e416bd31a56c4c351ced6dfc8c8451ded.pdf"
},
"presentation": null,
"primary_area": {
"value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a0e83343743da5f8f10c1061ec9be54d794ef19d.zip"
},
"title": {
"value": "Variational Search Distributions"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1wRXUROlzY | Evaluating and Improving Subspace Inference in Bayesian Deep Learning | main | Active | Subspace inference;Bayesian neural networks;Uncertainty quantification | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | 3;5;6 | 4;3;4 | 2;3;3 | 2;2;3 | 2;3;3 | 4.666667 | 3.666667 | 2.666667 | 2.333333 | 2.666667 | -0.188982 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses for questions and improvements."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The authors provide a full explanation of technical detail, including all architecture details and training process."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors proposed a novel method of constructing low-dimensional subspace (BA) that allows the aggregate entire weights trajectory during SGD rather than the trajectory tail. The authors argue that their method is not only as computationally effective as the Tail Trajectory (TT) method but improves inference quality by better capturing the subspace landscape. Then the authors provide a simple method to evaluate the quality of constructed subspaces based on the Bayes Factor. They apply the proposed estimator and show that the subspaces constructed from the BA trajectory outperform those from TT. At last, the authors propose a new method of Bayesian Inference in the newly constructed subspace. They combine Importance Sampling with the randomized quasi-Monte Carlo method to get an estimator with a better convergence rate.\n\nThe authors provide multiple experiments to assess the quality of the proposed subspace as well as the RQMC-IS method and argue that those algorithms achieve higher test accuracy"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty of the proposed algorithm is arguable. The only difference between the proposed Algorithm 1 and Algorithm 2 from [1] is that the latter uses the last point in each block (that is defined by the appropriate choice of hyperparameter c: moment update frequency) when Algorithm 1 uses the mean point in the block.\n2. Figure 2 seems to be misleading because both BA and TT subspaces are constructed from the SGD points after some initial warm-up process that is the same in the proposed paper and in [1]. For example, for Synthetic Regression, both methods use the last 1000 points, while for the CIFAR dataset, they use the last 140 points.\n3. The conclusion that the proposed BA and RQMC-IS have a better quality is based mostly on Tables 4, 5, 7, 9, 10, 12, 13, 16, and 17, where most of the numbers in each line are within the standard deviation of one another. From my point of view, the provided results don't show any statistically significant difference between BA and TT or between RQMC-IS and ESS/VI. I advise authors to provide a more detailed analysis of the metrics and show that there is a definitive difference between methods. For example, one can provide a similar Figure as Figure 4 from [1].\n4. Some of the tables provide incomplete comparisons between methods. For example, TT (RQMC) is missing from Tables 7 and 8. Also, there is no comparison between RQMC and SNIS.\n5. There is some inconsistency between the results from the paper and the prior work. For example, Figure 4 is used to argue that compared to the TT subspace, the BA subspace reflects higher uncertainty in data-sparse regions and higher confidence in data-rich regions. However, Figure 4 (middle) should be the same as Figure 3 (ESS, PCA Subspace) from [1], where TT captures uncertainty the same way BA does. Is there any difference in the experiment setup that caused this difference?\n6. 
The proposed sampler RQMC-IS seems to require evaluating $p(D|z)$ using all the training data, and it cannot be estimated using mini-batches, which makes this method practically useless for large neural networks and large datasets. At the same time, VI can be performed as doubly stochastic VI using mini-batches, which drastically improves speed. What type of VI did the authors use in their paper? Have they considered a comparison with SVI [2] or other scalable methods?\n\nMinor typos:\nLines 207, 242: subapce -> subspace\n\n[1] Pavel Izmailov, Wesley J Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Subspace inference for bayesian deep learning. In Uncertainty in Artificial Intelligence, pp. 1169–1179. PMLR, 2020.\n[2] Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Is it appropriate to interpret the proposed subspace construction method as a kind of downsampling scheme to the FT method?
\n\nThere is some recent literature missing from this work, e.g., SWA [1], TWA [2], and DLDR [3]. In [1], the algorithm can be regarded as a special case of TT. In [2], the subspace can be flexibly constructed by utilizing checkpoints from the head stage, the tail stage, and even the fine-tuning stage of training, the latter of which means that a downsampling scheme can, to some extent, be applied to FT. In [3], some checkpoints are sampled from the trajectory, commonly in the stage where decent performance is already attained (informally speaking, a downsampling of the “almost FT”).
\n\nPlease provide more explanation of the novelty and properties of the proposed methods, and also more discussion w.r.t. the existing work, i.e.,
\n\n- explain how the block-averaging (BA) method differs from or improves upon SWA, TWA, and DLDR.\n\n- provide a more thorough comparison table or discussion that highlights the key differences and potential advantages of BA over these existing methods.\n\n- clarify the novel aspects of BA that go beyond simple downsampling, if any.\n\n
[1] P. Izmailov, D. Podoprikhin, T. Garipov, D. Vetrov, and A. G. Wilson, Averaging weights leads to wider optima and better generalization, UAI, 2018.
\n\n[2] T. Li, Z. Huang, Q. Tao, Y. Wu, and X. Huang, Trainable weight averaging: Efficient training by optimizing historical solutions, ICLR, 2023.
\n\n[3] T. Li, L. Tan, Z. Huang, Q. Tao, Y. Liu, and X. Huang, Low dimensional trajectory hypothesis is true: DNNs can be trained in tiny subspaces, IEEE TPAMI, 2022.
\n\n\n\n2. If the answer to question 1 is yes, the results in Table 1 are obvious, so I don’t see the importance of devoting so much space to Table 1.
\n\nNonetheless, it is really interesting to see the results in Figure 3, showing the significance of the proposed evaluation metric on the condition that the testing data evidence ratios are informative comparison baselines.
It is proposed as an evaluation metric for the quality of the constructed subspace. However, during the construction of, or tuning towards obtaining, the subspace, I didn’t notice how such a metric is utilized; e.g., we could leverage such a metric to determine the dimensions $k$ or $M$, etc. From the existing results, I only saw that this metric is used to show that BA is better than TT, which is less informative, as we can anyway use inference performance metrics for comparison.
\n\nPlease elaborate on the utility of the proposed metric, i.e.,
\n- demonstrate how their proposed metric (Bayes factor and evidence ratio) can be used to guide hyperparameter selection, such as choosing optimal values for $k$ or $M$.\n\n- discuss potential applications of these metrics beyond just comparing BA to TT, such as in model selection or uncertainty quantification tasks, or possibly provide examples of using these metrics during the subspace construction process to iteratively improve subspace quality.\n\n- explain how these metrics offer insights that traditional performance metrics may not capture.\n\n\n3. Some minor aspects: the computational cost of obtaining this metric could also be explained in more detail; in Table 2, I didn’t see much advantage of BA over TT and FT; rather than comparing performance and efficiency jointly in a single table, I would suggest having some comparisons focused specifically on efficiency.\n\n- a more detailed analysis of the computational costs associated with calculating the proposed evaluation metrics.\n\n- some separate demonstration that focuses solely on comparing the computational efficiency of BA, TT, and FT methods.\n\n- discuss potential reasons for the lack of a distinct performance advantage of BA in Table 2; are there any trade-offs involved?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The problems addressed are interesting.\n\n2. The paper is easy to read and the proposed method is quite relevant to many researches in DL optimization.\n\n3. The proposed evaluation metric appears to be the most interesting aspect to me, personally."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a bayesian subspace inference method, focusing on three aspects, i.e., subspace construction, subspace evaluation, and inference efficiency. Correspondingly, the block-averaging scheme, bayes factor construction, and some importance sampling techniques are leveraged. In general, the problem studied in this work is interesting, however, the current content does not show significantly appealing advantages. Thus, the following comments in the review system are raised. I would be willing to raise my scores if my concerns or possible misunderstandings are well addressed in the rebuttal."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weakness\n1. The BA scheme seems not sufficiently novel, as I saw it somewhat like downsampling to the full trajectory, which is closely related to some existing works (see questions below).\n\n2. Despite the importance and efficacy of the proposed evaluation metric, it feels hard to find the practical use and values in helping the optimization or inferencee process, rather than simply being a metric that shows one method outperforms one another, because we can also just compare the inference results with other metrics to compare (see questions below).\n\n3. In the current numerical experiments, the evidence to the inference efficiency is insufficient in comprehensive evaluations, as it is claimed as one of the main contribution of this work (see questions below)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is clearly written and very easy to follow. The idea of using separated samples to reconstruct the subspace is straight-forward but effective. The combination of block-averaging (BA) subspace reconstruction and quasi-MC is simple but seems to be effective empirically."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed 3 modifications to the existing subspace Bayesian inference methods, targeting at (1) subspace construction; (2) direct subspace quality evaluation; and (3) subspace sampling. Specifically, the author improves the tail trajectory subspace construction (only take the last few samples) by taking spread samples across the entire trajectory, covering broader range of dynamics. The author also propose direct evaluation of subspace quality based on the Bayes factor (evidence likelihood ratio). Last but not least, the author also proposed to use important sampling or quasi-MC for subspace sampling. Empirically, the proposed method is tested on UCI and image classification, demonstrating the improved performance compared to the existing one."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The overall methodology seems to be a straightforward combination of three existing (with minor modifications) approaches. To demonstrates its benefits, it would be great to cite/perform comparisons with other Bayesian inference techniques. \n\nAnother questions I have is during the subspace reconstruction, do you need to flatten the matrix from $mxn$ to $d$? Doesn't this destroy the structural information stored in the original weight matrix? For example, matrix multiplication Wx represents each row of W is dot product with x, this is a kind of structural information stored within the matrix. If you flatten this, you loose such info. Is this because flatten W and then perform SVD can give you a much lower low-dimensional $z$, compared to performing SVD on the matrix W? \n\nWhat if you keep the matrix structure, but perform SVD on the matrix trajectory to get USV^T, and treat S as your Z? \n\nFor the image classification and UCI, I am curious about the full trajectory performance. Why don't you report it?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We enhance subspace inference in Bayesian deep learning with better subspace construction, evaluation metrics, and efficient inference techniques, improving both accuracy and computational efficiency."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024evaluating,\ntitle={Evaluating and Improving Subspace Inference in Bayesian Deep Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1wRXUROlzY},\nnote={under review}\n}"
},
"abstract": {
"value": "Bayesian neural networks incorporate Bayesian inference over model weights to account for uncertainty in weight estimation and predictions. Since full Bayesian inference methods are computationally expensive and suffer from high dimensionality, subspace inference has emerged as an appealing class of methods for approximate inference, where inference is restricted to a lower-dimensional weight subspace. Despite their benefits, existing subspace inference methods have notable pitfalls in terms of subspace construction, subspace evaluation, and inference efficiency. \nIn this work, we conduct a comprehensive analysis of current subspace inference techniques and address all the aforementioned issues. \nFirst, we propose a block-averaging construction strategy that improves subspace quality by better resembling subspaces built from the full stochastic gradient descent trajectory. Second, to directly evaluate subspace quality, we propose novel metrics based on the Bayes factor and prior predictive, focusing on both goodness-of-fit and generalization abilities. Finally, we enhance inference within the subspace by leveraging importance sampling and quasi-Monte Carlo methods, significantly reducing computational overhead. Our experimental results demonstrate that the proposed methods not only improve computational efficiency but also achieve better accuracy and uncertainty quantification compared to existing subspace inference methods on CIFAR and UCI datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Subspace inference",
"Bayesian neural networks",
"Uncertainty quantification"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/dcbd1a2ad94dcbc1f19b0722b2e10b2493423d9c.pdf"
},
"presentation": null,
"primary_area": {
"value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/f33d9f5ed9f54e71d66506b4b86045d9801141e8.zip"
},
"title": {
"value": "Evaluating and Improving Subspace Inference in Bayesian Deep Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1waeKNeQzG | Style-Coherent Multi-Modality Image Fusion | main | Active | Multi-modality;Image Fusion;Style-based Learning;Self-supervised Learning | applications to computer vision, audio, language, and other modalities | 5;5;5;6 | 4;4;4;4 | 3;3;3;3 | 3;2;3;3 | 3;3;3;3 | 5.25 | 4 | 3 | 2.75 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What’s advantageous of using information entropy-based clipping to constrain α?\n\n2. What is the rationale for designing the fusion form in equation 8?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The decoupling of style and content based on frequency domain analysis is creatively applied to image fusion, and the stylistic features of the source image are preserved and enhanced considering the characteristics of image fusion.\n\n2. Adaptive reconstruction loss with good generalization is proposed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel style-coherent multi-modality fusion model, based on frequency analysis, which creatively decouples the style and content of modality for image fusion. This work argues that styles represent modality-specific differences in texture, saturation, and resolution, so the SNF module adaptively performs style preservation and enhancement during the fusion process, and SAF aligns cross-modal fused features to a designated modality, ensuring stylistic consistency. In addition, distinguishing from the traditional methods that directly supervise with source data, this work employs an adaptive reconstruction loss function."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This method has a preference for one modality in the fusion process, but how to choose the preference for two modalities whose dominance is not clear?\n\n2. Poor interpretability of SAF modules and Losses."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See \"Weaknesses\""
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The style-coherent approach is applied to multimodal fusion field and valid to be effective.\n\n2.The paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach to Multi-Modality Image Fusion (MMIF) that addresses the issue of style discrepancies (e.g., saturation, resolution) in existing methods, which can obscure important features. The proposed model includes a style-normalized fusion module for more effective feature merging and a style-alignment fusion module to ensure consistency across modalities. An adaptive reconstruction loss enhances information preservation during the fusion process. Experimental results show that this method outperforms existing approaches, indicating strong potential for various image processing applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the adaptive reconstruction loss is a key part of the proposed approach, the paper provides limited analysis on its impact compared to other loss functions. Further ablation studies focusing specifically on this component could strengthen the claims."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Please provide further ablation experiments on Equation 8, such as thoroughly analyzing the impact of different alignment strategies on the preservation of modal information and fusion effectiveness. Additionally, the paper should use visualization or quantitative methods to discuss and analyze the feature representation capability of the potential fusion space after alignment, as this is a core innovation of the study.\n2. Related works need to be considered. I suggest that the authors include a more in-depth analysis of how their method compares to existing approaches in the introduction and related work sections. Additionally, since the paper extracts modal heterogeneity based on a decomposition approach, it is important to conduct experimental comparisons with previous related works. I suggest that the authors begin the discussion by referencing some earlier multimodal fusion papers based on decomposition approaches, such as DRF [2], DIDFuse [4], and LRRNet [5].\n3. Please provide a sensitivity analysis experiment on the impact of different $\\beta$ values on the fusion results. Additionally, an ablation experiment on the max operation in Equation 11 should also be conducted.\n4. Please clarify the exact experimental settings used for comparison in the fifth Weakness and illustrate any differences from the settings used in CDDFuse [1].\n\n[4] \"DIDFuse: Deep image decomposition for infrared and visible image fusion,\" Proceedings of the International Joint Conference on Artificial Intelligence, 2020.\n\n[5] \"LRRNet: A novel representation learning guided fusion network for infrared and visible images.\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper aligns heterogeneous modal features to a shared and unified fusion space instead of directly fusing them, which is reasonable to reduce the differences between modalities.\n2. The performance of this paper seems better compared to some related SOTA works."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents the Style-coherent Content Fusion Model (SCFNet) for Multi-Modality Image Fusion (MMIF), addressing the challenges posed by significant style discrepancies between images from different sensors. The proposed model utilizes a dual-branch encoder architecture, incorporating a Fourier Prior Embedded (FPE) block, a Style-Normalized Fusion (SNF) module, and a Style-Alignment Fusion (SAF) module to enhance content representation and align features across modalities. An adaptive reconstruction loss function is introduced to improve content supervision and detail retention during fusion."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The core of this paper is to align heterogeneous modal features to a shared latent fusion space to reduce inter-modal differences, which is reflected in Equation 8. However, there is a lack of theoretical analysis and further experimental validation regarding the rationale behind the design of the modal alignment method and its varying impacts. In particular, it needs to be analyzed through experiments whether aligning infrared features to the visible light domain will lead to the loss of certain infrared detail information, requiring an examination of the information retention during the alignment process.\n2. Currently, there is a substantial amount of research focusing on both modal consistency and modal heterogeneity, such as CDDFuse [1], which employs fusion methods to address modal differences. Earlier, DRF [2] utilized a style transfer approach by separating scene and attribute representations for image fusion. The paper lacks a thorough comparative analysis of this work with existing similar studies and do not provide enough experimental comparisons with other similar methods.\n3. As to the method, this paper mainly combines and improves existing approaches in the design of key components. For example, the design of the FPE and SNF modules draws on previous works, which, while beneficial for the research, offers limited contributions in terms of innovation.\n4. This paper proposes adaptive reconstruction loss as one of the innovations, with the Equations 9-11, but the rationale and effectiveness lack explanation and validation. First, how should the hyperparameter $\\beta$ in Equation 9 be set, as it is very important. Second, in Equation 11, why use $\\max(R(V), R(I))$ — is $\\max$ the optimal choice? \n5. In the experiment section, the training set used by the authors follows the settings from [1] and [3]. 
In Table 2, the results of the CDD method on the TNO Dataset are consistent with [1], so the results of the same comparison methods, TarD and DeF, on the TNO Dataset should also match those in [1]. Why is there a discrepancy here?\n\n[1] \"CDDFuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.\n\n[2] \"DRF: Disentangled representation for visible and infrared image fusion.\" IEEE Transactions on Instrumentation and Measurement, 2021.\n\n[3] \"Equivariant multi-modality image fusion.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ Multi-modal image fusion is a fundamental step in many applications.\n+ The proposed approach of separating style and content is sound and promising.\n+ The results are convincing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors study the problem of fusing images from multiple modalities for various downstream tasks. To this end, the authors propose a deep network to handle the discrepancies between different modalities. In this network, the authors first split the amplitude and the phase in the frequency domain, leveraging the observation that style (modality-specific details) are preserved in the amplitude where other details (content) are represented in the phase component. The network first style-normalizes features from both modalities and then uses a learnable alignment to obtain a unified representation in the visible domain.\n\nThe results on several benchmarks suggest significant improvements over the state of the art."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I sense that the authors specifically avoided the use of the term disentanglement. In the disentanglement literature, people did introduce different methods for disentangling content and style for various applications. I believe positioning the paper with that literature would have been valuable. A quick Google search reveals some studies using disentanglement for some multimodal tasks, though with non-visual modalities.\n\n2. Figure 1: I am not convinced with the visual results provided. The only difference I see is that the proposed method produces slightly sharper reconstructions. This does not necessarily entail that the style discrepancy is a major issue. The example in Figure 7 is more convincing.\n\n3. Many crucial details are left unclear.\n\n3.1. Figure 2: It is not clear how Merging is different from Summation or Concatenation. The figure/caption should state what FPE stands for.\n\n3.2. \"the twin encoder branches share the same structure and parameters.\" => This should be justified a bit.\n\n3.3. \"the degree of style modification is gradually adjusted by introducing learnable parameters\" => How do we ensure that this is gradual if the parameters are learnable.\n\n3.4. Eq 7: Not clear why pooling is required here or why there is a need for a spatial squeeze operation. Moreover, it is not justified why maxpool has to be combined with avgpool.\n\n3.5. Eq 8: What's being performed here is not explained properly.\n\n3.6. Entropy-aware alpha: Not clear why providing bounds on a variable enforces information entropy.\n\n3.7. Eq 11: This overall loss formulation should have been explained in more detail. There are several unclear bits. Why do we use Max(R(V), R(I)) for similarity loss? Why is there no (f(V, I)-I) or (f(I, I)-I) term in the equation?\n\n4. 
The paper should provide analysis on alpha and the parameters in Eqs 5 and 6.\n\nMinor comments:\n- \"Fourier Prior Embedded\" => \"Fourier Prior Embedding\".\n- \"we perform a content replacement\" => \"we replace content\".\n- Eq 9: I suppose it is better to use X instead of I."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024stylecoherent,\ntitle={Style-Coherent Multi-Modality Image Fusion},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1waeKNeQzG},\nnote={under review}\n}"
},
"abstract": {
"value": "Multi-modality image fusion (MMIF) integrates heterogeneous images from diverse sensors. However, existing MMIF methods often overlook significant style discrepancies, such as saturation and resolution differences between modalities, resulting in overly smooth features in certain modalities. This tendency causes models to misjudge and disregard potentially crucial content. To address this issue, this paper proposes a novel style-coherent multi-modality fusion model that adeptly merges heterogeneous styled features from various modalities. Specifically, the proposed style-normalized fusion module progressively supplements the complete content structure by merging style-normalized features during cross-modal feature extraction. Meanwhile, a style-alignment fusion module is developed to align different feature representations across modalities, ensuring consistency. Additionally, to better preserve information and emphasize critical patterns during fusion, an adaptive reconstruction loss is applied to multi-modal images transformed into a unified image domain, enforcing mapping to a consistent modality representation. Extensive experiments validate that our method outperforms existing approaches on multiple MMIF tasks and exhibits greater potential to facilitate downstream applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multi-modality",
"Image Fusion",
"Style-based Learning",
"Self-supervised Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e34c82b267b9f92ec4cc373a162f28bca9e1b67d.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/094e810742cb19c49d43d0855ea85574a0a54fc2.pdf"
},
"title": {
"value": "Style-Coherent Multi-Modality Image Fusion"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1x1gGg49jr | SurFhead: Affine Rig Blending for Geometrically Accurate 2D Gaussian Surfel Head Avatars | main | Active | dynamic head avatars;rigging;inverse-graphics | applications to computer vision, audio, language, and other modalities | 6;6;6;6 | 3;2;5;4 | 3;3;2;3 | 3;3;3;3 | 3;3;2;3 | 6 | 3.5 | 2.75 | 3 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See [Weaknesses]."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* This paper introduces better deformation modeling techniques for Gaussian surfels. Compared to existing works, the proposed method is more reasonable and can handle more extreme deformations such as stretching and shearing. The proposed technique could be useful in other related research topics beyond head avatar modeling. \n\n* The proposed method is able to reconstruct fine geometric details, outperforming existing baselines by a large margin. \n\n* The paper is overall well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a method for learning head avatars based on 2D Gaussian splatting. To make the Gaussian surfels better handle the stretch and shear deformation under extreme poses and facial expressions, this paper introduces affine transformation derived from Jacobian deformation gradient of the surface. Normal orientations are calculated accordingly. Moreover, the authors propose Jacobian blend skinning to interpolate these affine transformations to ensure surface smoothness. Results show that the proposed method is able to reconstruct drivable head avatars with high-quality geometry."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Missing comparison against Gaussian Head Avatar (GHA) [Xu et al. 2023], which is a state-of-the-art head avatar method in terms of image synthesis quality. Although the authors have already compared with SplattingAvatar and GaussianAvatar, I think an additional comparison against GHA is also necessary because GHA demonstrates high-resolution image synthesis with the assistance of a super-resolution module. \n\n* It would be better if the authors report the training time and rendering speed. One important advantage of Gaussian splatting is its efficiency. I wonder whether the proposed techniques (such as Jacobian blend skinning) hinders this advantage or not. \n\n* It is not clear how the proposed method performs for subjects wearing eye-glasses. NeRSemble dataset contains cases with eye-glasses, but they are suspiciously skipped in the experiments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* It is unclear to me how Jacobian blend skinning improves upon linear blending skinning. It seems to me that $J_b$ introduces spatial smoothness to Gaussians' deformation matrices. Is the vertices of the mesh still transformed by linear blending skinning before the Jacobian blend skinning is applied? Also, could the author clarify Fig. 2(b)? What are the meanings of the green and yellow lines and the weights? How do they connect to the deformation of the triangle meshes?\n* The paper mainly focuses on improving the geometry. Since 2D Gaussian splatting is known to produce better geometry than 3D Gaussians, how much does 2D Gaussians help improve the geometry, compared to the components proposed in the paper?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* It motivates each contribution properly.\n* The use of Jacobian deformation and Jacobian blend skinning in the context of head modeling looks novel to me.\n* The qualitative results and normal similarity evaluation demonstrate better geometry compared to baselines.\n* The effectiveness of each component is well studied."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims at better geometry estimation in head modeling. It replaces the 3D Gaussian splatting with 2D Gaussian splatting to better model the surface. Moreover, it addresses three issues in existing works with three novel components: 1) To compensate for the incorrect deformation of 2D Gaussians due to the triangle's shear and stretch, it proposes the Jacobian deformation; 2) To mitigate the discontinuities in adjacent triangles, it improves linear blend skinning with Jacobian blend skinning; 3) To resolve hollow illusion in eyeballs, it replaces Spherical Harmonics with Anisotropic Spherical Gaussians. It demonstrates that it outperforms the state-of-the-art regarding normal similarity and remains comparable in rendering quality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* It is unclear how the deformation gradient $J$ in Sec 2.2 and the blended Jacobian $J_b$ in Sec 2.3 connect. In Line 258 to 261, it mentions that it replaces the original deformation in GaussianAvatar with a new deformation $J_b$, and $J$ appears only as a parameter of JBS in Eq. 2. What is the relationship between $J$ and $U_i, P_i$ in Eq. 2? Which transformation, $J$ or $J_b$, is used in the final method? \n* The paper measures the normal similarity between ground-truth normals and rendered normals from 2D Gaussians. However, since this work claims to achieve a better geometry, evaluating metrics that apply to meshes, such as Chamfer distance or normals rendered from the mesh is more informative when judging the geometry. Although previous methods use normals rendered from 2D Gaussians as proof of geometry, the link between it and the mesh quality still looks vague to me.\n* As the rendering quality is only on par with state-of-the-art, e.g., GaussianAvatar, the comparison of training and rendering speed is missing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "As elaborated in the \"weaknesses\", the geometric accuracy of the experimental results exhibits considerable variation, and we hope that the authors can further enhance their analysis and discussion on this crucial point.\n\nIn terms of experimental design, the NeRSemble dataset, while containing accurate 3D mesh models, lacks sufficient detail, and obtaining such precise models can often be challenging in practical applications. In this regard, we eagerly inquire whether the proposed method heavily relies on such relatively accurate 3D mesh models, and how it would perform in their absence. To validate this, the authors could consider using videos they have shot or sourced from the internet as input, employing monocular facial 3D reconstruction algorithms (such as DECA) to obtain mesh sequences, or directly bypassing the use of 3D mesh models altogether. We are anticipating and curious about the results of such experimental setups. We encourage the authors to actively explore and propose potential strategies for adapting their proposed approach to scenarios where detailed 3D mesh models are unavailable."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper introduces SurFhead, a novel model within the Gaussian Splatting framework that captures geometrically accurate head deformations. This representation utilizes intricate affine rigging combined with Gaussians and their normals, solely based on RGB videos, which is a significant advancement in achieving realistic and detailed head avatars. The proposed Jacobian Blend Skinning (JBS) Algorithm is technically sound. The paper tackles the problem of the hollow illusion in the cornea, where a concave surface appears convex due to the prioritization of photometric losses during training. \n\nThe methods presented in the paper are demonstrated to achieve superior results across a variety of subjects, including real and synthetic data. They excel in challenging scenarios such as sharp reflections on convex eyeballs, fine geometric details, and exaggerated deformations, showcasing the robustness and effectiveness of the proposed approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper contributes a novel representation for geometrically accurate head avatars within the 3D Gaussian Splatting framework, a method for natural interpolation of affine transformations across adjacent deformations, and enhancements to the realism of corneal representations in head avatars. These contributions advance the state of the art in personalized head avatar construction and have the potential to improve various applications in computer graphics, virtual reality, and beyond. The key contributions include:\n\n* Introduction of SurFhead Model: The paper introduces SurFhead, the first geometrically accurate head avatar model within the Gaussian Splatting framework. This model is designed to capture the deformation of head geometry using intricate affine rigging that combines Gaussians and their normals solely from RGB videos.\n\n* Jacobian Blend Skinning Algorithm: To address the issue of discontinuities between adjacent triangles in head deformations, the paper proposes the Jacobian Blend Skinning (JBS) algorithm. This algorithm blends adjacent transformations while avoiding geometric distortions by linearizing the non-linear matrix interpolation space, leveraging classical matrix animation techniques and geometrically smooth polar decomposition.\n\n* Enhancement of Corneal Convexity and Specularity: The paper addresses the hollow illusion in the cornea by regularizing corneal convexity and enhancing specularity using computationally efficient Anisotropic Spherical Gaussians (ASGs). This improvement ensures a more realistic representation of the cornea in the head avatar."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The method is built upon the foundation of 2D Gaussian Splatting, and I believe that the proposed method's success in recovering a superior surface geometry owes much to this solid groundwork. Indeed, the authors introduced several improvements on this basis, such as intricate affine rigging, but I consider these innovations to be more incremental improvements rather than groundbreaking advancements.\n\nThe geometric accuracy of the experimental results appears to exhibit significant variation: some achieve hair-level geometric detail, while others fail to recover the structure of the hair. Consequently, whether this variation stems from instability in the algorithm or differences in the data quality of various training datasets arises. I hope the author can provide more analysis and discussion on this issue."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The authors have thoroughly discussed the potential ethics impact of detailed head avatars."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) Related Works: I think some related works of static reconstruction could be further discussed, such as H3DS [1], deformable model-driven approaches [2], and Implicit Neural Deformation methods [3]. These methods leverage 3D points from SfM, multi-scans, or multi-view data from various identities to enhance reconstruction under sparse-view conditions, although they primarily focus on static human face or head geometry reconstruction.\n\nRef:\n\n[1] Ramon, Eduard, et al. \"H3d-net: Few-shot high-fidelity 3d head reconstruction.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n[2] Xu, Baixin, et al. \"Deformable model-driven neural rendering for high-fidelity 3D reconstruction of human heads under low-view settings.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\n[3] Li, Moran, et al. \"Implicit Neural Deformation for Sparse‐View Face Reconstruction.\" Computer Graphics Forum. Vol. 41. No. 7. 2022."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) Jacobian Blend Skinning (JBS): The use of Jacobian Blend Skinning (JBS) enables natural interpolation of affine transformations across adjacent deformations, effectively reducing discontinuities in transitions.\n\n2) Cornea Opacity Constraint: To address the specular highlights in the eyeball region, the method constrains the corneal regions to remain opaque by regularizing the opacity of the respective Gaussians."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method that handles stretch and shear transforms essential for detailed deformations of geometry utilizing intricate deformations driven by the affine Jacobian gradient instead of similarity transformation and corresponding normal adjustments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Detail Representation: In Figure 5 (bottom row), there seems to be a lack of finer details, such as wrinkles. Adding more visual comparisons or details on addressing such high-frequency features could strengthen the analysis.\n\n2) Rendering Speed and FPS: Given that methods like 3DGS/2DGS achieve real-time rendering, the speed of deformable-driven methods may be a limitation for applications requiring real-time animation. Could you report the FPS compared to other methods to clarify performance in time-sensitive scenarios?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024surfhead,\ntitle={SurFhead: Affine Rig Blending for Geometrically Accurate 2D Gaussian Surfel Head Avatars},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1x1gGg49jr},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in head avatar rendering using Gaussian primitives have achieved significantly high-fidelity results. Although precise head geometry is crucial for applications like mesh reconstruction and relighting, current methods struggle to capture intricate geometric details and render unseen poses due to their reliance on similarity transformations, which cannot handle stretch and shear transforms essential for detailed deformations of geometry. To address this, we propose SurFhead, a novel method that reconstructs riggable head geometry from RGB videos using 2D Gaussian surfels, which offer well-defined geometric properties, such as precise depth from fixed ray intersections and normals derived from their surface orientation, making them advantageous over 3D counterparts. SurFhead ensures high-fidelity rendering of both normals and images, even in extreme poses, by leveraging classical mesh-based deformation transfer and affine transformation interpolation. SurFhead introduces precise geometric deformation and blends surfels through polar decomposition of transformations, including those affecting normals. Our key contribution lies in bridging classical graphics techniques, such as mesh-based deformation, with modern Gaussian primitives, achieving state-of-the-art geometry reconstruction and rendering quality. Unlike previous avatar rendering approaches, SurFhead enables efficient reconstruction driven by Gaussian primitives while preserving high-fidelity geometry."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"dynamic head avatars",
"rigging",
"inverse-graphics"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ee0ff58107f30c034739ea3db4c3ecf14bbeaf4d.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/620dc41e7a9e55e6ff3a71b4e80f2ea1196d8924.zip"
},
"title": {
"value": "SurFhead: Affine Rig Blending for Geometrically Accurate 2D Gaussian Surfel Head Avatars"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1xG3MN1RRW | SparseVLM: Visual Token Sparsification for Efficient Vision Language Models Inference | main | Active | Sparsification;Vision Language Model;Efficiency | applications to computer vision, audio, language, and other modalities | 3;5;5;6;6 | 5;4;5;5;4 | 2;3;2;3;2 | 2;2;2;3;3 | 2;3;3;3;3 | 5 | 4.6 | 2.4 | 2.4 | 2.8 | -0.372678 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Suggestion: Instead of using the rank, an alternative approach (which is not prone to the numerical issues that I discussed above), is to prune based on the (relative) singular values of $\\mathbf{P}$: \n 1) First compute the singular values of $\\mathbf{P}$, assume that these are returned in decreasing order.\n 2) Divide each value by the total sum of singular values (a.k.a. the \"energy\" of the matrix). Let's call this vector $E$, i.e. the relative energy of each singular value.\n 3) Prune $N - k$ tokens, where $k$ is the smallest value such that $\\sum_{i=1}^k E_i \\geq \\lambda$.\n- Could you please add the memory used by the baseline in Table 4?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The proposed method is compared against two recent and popular baselines: ToMe and FastV.\n- The paper is well structured and the method is well explained (modulo some typos / ambiguous equations, see weaknesses).\n- The method does not require any type of fine-tuning, so it can be used on top of different VLMs, which broadens its potential adoption.\n- For the same number of retained tokens, Table 1 and Table 2 show that the proposed method represents a huge accuracy improvement respect the baselines.\n- The paper ablates the use of token reconstruction in Table 3, which shows that the proposed improvement significantly improves over the core SparseVLM method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces SparseVLM, a method to accelerate vision-language models (VLMs) by prunning vision tokens incrementally over layers, based on its significance for (a subset) the text tokens. The set of (significant) visual tokens to keep is computed from the self-attention scores in each layer, and the set of relevant text tokens is computed just once, using the dot-product between the text and image tokens after being embedded to the same size. This tries to reduce the computational overhead of the method, achieving real wallclock time speed-ups, for different prunning levels. The authors also propose to aggregate and reconstruct some tokens, to prevent completely losing the information of the tokens that are decided to prune. \nThe paper presents results in different image and video understanding benchmarks, and compares the proposed method against two recent baselines (ToME and FastV). The results show that the proposed method improves over these baselines across different prunning thresholds, and achieves significant memory, FLOP and runtime reduction with roughly the same accuracy, when compared to FastV."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The results in Table 1 and Table 2 do not reflect the reduction in neither FLOP, nor wallclock runtime. Only Table 4 offers some results comparing the proposed method only against FastV. However it's not clear to which benchmark(s) the reported accuracy corresponds to. Also, the baseline storage memory is missing (although it can be calculated from the remaining values). I would suggest that the authors report a similar figure to Figure 4 with two plots showing the avg accuracy w.r.t. a) letency or total runtime and b) FLOPs. This would represent much better the cost-vs-quality tradeoffs, for SparseVLM applied on both LLaVA and MGM. This is crucial to increase the rating of the paper. If space is limited, I would suggest reporting values for individual datasets in the appendix, and report only the average in the main text (unless there are any major outliers).\n- Table 2 does not even represent the speed-vs-accuracy trade-off, nor the \"token budget\"-vs-accuracy, since only a single token budget of 135 is represented. Also, this value is not the same used in any of the image results reported in Table 1. Which begs the question: why was this particular value used? Please, provide figures as described above.\n- It's not 100% clear how $\\mathbf{P}$ in section 3.2 is calculated. According to eq. (1) and (2), $\\mathbf{P}$ is a subset of rows and columns of the attention matrix (after the softmax), but lines 183-184 refer the \"logits\" (i.e. $\\frac{\\mathbf{Q}\\mathbf{K}^\\top}{\\sqrt{D}}$). It's also not clear if the attention matrix is re-normalized after the selected visual tokens are dropped from the keys or not.\n- Notation in eq. (7) is ambiguous. The index $j$ in the sum isn't used anywhere. 
Also, notice that the size of $\\mathbf{H}_v \\mathbf{H}_q^\\top$ is $L_v \\times L_q$, which is inconsistent with the sum over $j$, assuming $j$ denotes a column index, since $\\mathbf{R}_i$ supposed to be the average over visual tokens for the $i$-th query token. This is a small mistake that can be fixed by using $\\mathbf{H}_q \\mathbf{H}_v^\\top$, to match the dimension order of $\\mathbf{P}$ is $L_v \\times L_q$ (i.e. text $\\times$ vision).\n- The choice of the relevant text token select threshold $m = \\text{Mean}(\\mathbf{R})$ isn't justified. Why this threshold and not something else? E.g. the text tokens in the with the highest $R$ score such that the sum of \n- The number of vision tokens to prune is based on the rank of $\\textbf{P}$, this can be problematic due to numerical precision. For instance, suppose that $n = L_t = L_v$, what happens if we get that half of the the singular values of P are $10^{-5}$ and the rest are $10^5$? The rank would be technically $n$, but is it really or do we get $10^{-5}$ rather than 0 due to numerical errors?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Due to the weakness of this paper, I tend to be borderline negative about this paper. See weakness section for details of my concerns and questions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed SparseLVM framework, including rank-based strategy and token recycling, is reasonable.\n\n2. The paper is clear to read. It is easy for the audience to follow the sophisticated designs in SparseLVM.\n\n3. Experiments are performed on both image and video benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces SparseLVM, a training-free method to prune redundant visual tokens in LVLMs. SparseLVM leverages visual-relevant text tokens to rate the significance of vision tokens within the self-attention matrix, leading to the progressive pruning of irrelevant visual tokens. Specifically, SparseLVM proposes a rank-based strategy to adaptively determine the sparsification ratio for each layer and a token recycling method that compresses pruned tokens into center tokens. SparseLVM reduces the number of tokens with less performance drop than ToMe and FastV."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed SparseLVM is not practical for two reasons. \n\n- First, it is not compatible with FlashAttn, which is a standard solution for accelerating the calculation of self-attention. In SparseLVM, the attention matrix must be explicitly obtained to select redundant visual tokens in **each layer** of LVLMs. However, FlashAttn does not support attaining the explicit attention matrix. Without compatibility with FlashAttn, SparseLVM will be limited in its efficiency. The SparseLVM should be compared with the original LVLMs with FlashAttn. \n- Second, although the performance drop of SparseLVM is less than ToMe and FastV, it is still considerably large. More explanations and discussions are necessary.\n\n2. Some important ablation studies are not shown.\n\n- For verifying efficiency, the SparseLVM should be compared with the original LVLMs with FlashAttn. \n- For verifying effectiveness, the SparseLVM should report more results on high-resolution image understanding benchmarks, such as DocVQA, InfoVQA, AI2D, etc, as in leading LVLMs [1].\n\n\n3. Some details of SparseLVM are not clearly introduced. \n- What is the value of m in equation (6), lambda in equation (8), and tau in equation (9)? How does SparseLVM determine them?\n- After Visual Token Recycling, how does SparseLVM insert these recycled tokens into the preserved tokens? It seems that these recycled tokens have the risk of spatial relationship between different image tokens. \n\n\n\n[1] Qwen2-VL: Enhancing Vision-Language Model’s Perception of the World at Any Resolution"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness for details."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed Text-Guided Visual Token Pruning is novel, which introduce text-aware guidance for visual token sparsification. The experiments showing this approach outperforms text-agnostic methods like FastV by 14.8% on LLaVA when retaining only 64 tokens, which validate the effectiveness of using text tokens as \"raters\" for visual importance.\n\n2. The proposed method is training-free, which is easy to deploy. \n\n3. The paper introduces a rank-based strategy to adaptively determine the sparsification ratio for each layer, which saves the number of hyperparameters and reduces the engineering effort.\n\n4. Instead of directly pruning tokens, the proposed method merges them into compact representations. Ablation studies show this recycling mechanism improves accuracy from 1.5% to 17.7% on POPE when pruning to 64 tokens, demonstrating significant information preservation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces SparseVLM, a training-free token optimization that improves the efficiency of Vision-Language Models (VLMs) by reducing visual token. The method improve the efficency of VLM by three steps: 1) first identify text tokens strongly correlated with visual signals via cross-attention, and then 2) measure the contribution of visual tokens to the selected visual-relevant text tokens (raters), and finally 3) adaptively prune the insignificant vision token. Experiments show the LLaVA equipped with SparseVLM reduces 61%∼67%\nFLOPs with a compression ratio of 78% while maintaining 93% of the accuracy. The proposed method consistently outperforms the\nexisting state-of-the-art method FastV by 7.7%∼14.8% on LLaVA, 10.2%∼21.6% on MiniGemini, and 34.4% on VideoLLaVA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed method requires the attention scores to select the visual tokens to be pruned. This would not be compatible with FlashAttention, and may require significantly more memory and possibly extra latency. The authors are encourage to do comparison with baselines powered with FlashAttention and show the result. My concerns is that without using FlashAttention the proposed method could cost much more memory, and make it harder or infeasible to be deployed. Specifically, author should show the peak memory consumption and latency comparision between proposed method vs Baseline with FlashAttention.\n\n2. The experimental evaluation lacks comparison with latest token reduction methods that outperform FastV. Notably absent are Token summarization[https://arxiv.org/abs/2410.14072 ], Progressive Token Pruning[https://arxiv.org/abs/2301.13741]- all of which have better performance comparing to FastV in different tasks. Including these state-of-the-art baselines is essential for a comprehensive evaluation.\n\n3. The experimental focuses on a single VLM architecture: LLaVA, which limiting evidence of the method's broader applicability. Testing across other VLM architectures like Flamingo would better demonstrate generalizability, particularly with different visual and textual feature fusion mechanisms."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How to deal with RoPE for the sparsified visual tokens?\n2. In Equation 7, why was it chosen to use the features from the visual encoder and text embeddings to select raters? Does this lead to the method performing poorly on problems that require logical reasoning, such as the performance on MMMU、Reasoning-related subset of MMBnech?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well-written, showcasing a clear and articulate presentation of ideas.\n2. The paper is simple and easy to follow.\n3. The training-free token optimization mechanism is more universal and can be better adapted to various VLM models compared to methods that require training."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an efficient training-free token optimization mechanism dubbed SparseVLM without extra parameters or fine-tuning costs.\n\nThe contributions of this paper are summaried as follows:\n1. The paper introduces a sparsification framework dubbed SparseVLM for vision-language models. \n2. The paper first assigns visual-relevant text tokens as raters, adaptively prunes VLMs with the rank of the attention logits, and recycles partial tokens.\n3. Consistently outperforms the FastV."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. One motivation of the paper is that visual tokens should be sparsified adaptively based on the question prompt. This prompt-aware sparsification, while preserving the original model's performance as much as possible, causes the VLM to lose its ability for multi-turn conversations.\n2. The method in the paper requires explicitly obtaining the attention map, but in many inference acceleration frameworks, the attention map is not accessible, such as in FlashAttention. In Table 4, is the baseline using the standard attention mechanism? If compared with FlashAttention, does it still have a speed advantage?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Table 1, for the experimental results under the settings \"Retain 192/128/64 Tokens,\" what exactly do these settings mean? For FastV, does this mean that only this number of image tokens is retained across all layers?\n2. 5.1 section,\"3 settings (using all tokens, only text tokens, and only text raters we select)\",explain the settings in detail.\n3. Are you doing the pruning and recycling process in the prefilling stage? \"we introduce a rank-based strategy to adaptively determine the sparsification ratio for each layer\". If as said like this, do we prune and recycle at each layer in the prefilling stage to keep 192/128/64 tokens in experiment? Please give a clear explanation of your sparsification process, which is not stated in the paper."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper presents SparseVLM, a training-free mechanism designed to improve the efficiency of vision-language models (VLMs) by optimizing the handling of visual tokens.\n2. The paper is well-written and clearly presents the proposed framework. The authors provide detailed descriptions of their methodology.\n3. Considering the recycling of deleted image tokens is an effective method to alleviate performance degradation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces SparseVLM, an efficient, training-free optimization method for visual tokens in vision-language models (VLMs). Recognizing that visual tokens in VLMs often introduce high computational costs due to their low information density, SparseVLM selectively prunes redundant tokens without needing additional training data or parameters. By using the visual-relevant text tokens (from the self-attention matrix) to rate the importance of visual tokens, SparseVLM identifies and prunes unnecessary tokens progressively. A rank-based strategy is used to determine the pruning ratio per layer, while a token recycling method condenses pruned tokens into compact forms, maintaining essential visual information."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The primary focus on efficiency must maintain performance; otherwise, efficiency becomes meaningless. In Table 1, even the setting with the least acceleration, \"Retain 192 Tokens,\" exhibits substantial performance drops across multiple benchmarks. Specifically, GQA drops by 4.3%, POPE by 2.3%, VQAv2 by 2.9%, MMB by 2.2%, and TextVQA by 2.1%, which are unacceptable losses.\n2. In Section 5.1, why was the unusual number of 142 image tokens chosen for the experiment? Additionally, if the goal is to demonstrate the effectiveness of the \"text rater,\" it would be insufficient to test only one efficiency setting. A range of settings retaining different proportions of image tokens should be used to substantiate its effectiveness across varying conditions.\n3. In the section \"Sparsification Level Adaptation\", N is calculated to determine the number of tokens deleted in each layer for adaptive purposes. However, in the later experimental sections, the number of retained image tokens (e.g., 192) is specified directly. If the result of N in a decoder layer is 0, how can you specify retained image tokens to 192? Isn’t this contradictory? \n4. Rank(P) is a rather unusual way to compute visual redundancy. P represents a part of the attention map, but it is unclear why the linear correlation among attention vectors would relate to visual redundancy. Is there any supporting evidence for this, such as a reference to a paper?\n5. Figure 1 shows that the patches selected by fastv are identical under different questions, which is unreasonable. Since fastv relies on the attention between text and image (this can be found in the source code), the selected patches should not be exactly the same. You may check for any errors in the process.\n6. 
The paper mentions, \"We reuse the self-attention matrix of visual-text tokens directly from the decoder layers without extra training parameters for sparsification.\" However, if the method requires outputting the self-attention matrix, it can not use FlashAttention, which would significantly impact inference speed.\n7. In Table 1, it would be helpful to include efficiency evaluation like FLOPs and latency directly alongside performance scores on the benchmarks to facilitate comparison, the number of retained image tokens is not sufficient to evaluate efficiency.\n8. One contribution claims, \"it is the first attempt to explore the potential of text-aware guidance for efficient inference of VLMs.\" This is inaccurate, as the \"fastv\" approach also prunes image tokens based on text tokens’ attention to image tokens.\n9. The description of the ToMe method in the Related Work section is inaccurate. \"For example, ToMe (Bolya et al., 2022) prunes according to the relevance between visual tokens and text and merges both modalities through the BSM algorithm.\"\n10. In the introduction, the calculation of the number of image tokens seems incorrect. The claim, \"For instance, a 672 × 672 image in LLaVA (Liu et al., 2024) yields 2304 vision tokens that span over half of the context length,\" does not align with the correct calculation of 576 × 5 (four sub-images plus one resized original image). You can check it again, there might be an error somewhere."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose an efficient text-aware training-free vision token optimization mechanism called SparseVLM."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024sparsevlm,\ntitle={Sparse{VLM}: Visual Token Sparsification for Efficient Vision Language Models Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1xG3MN1RRW},\nnote={under review}\n}"
},
"abstract": {
"value": "In vision-language models (VLMs), visual tokens usually consume a significant amount of computational overhead, despite their sparser information density compared to text tokens. To address this, most existing methods learn a network to prune redundant visual tokens and require additional training data. Differently, we propose an efficient training-free token optimization mechanism dubbed SparseVLM without extra parameters or fine-tuning costs. Concretely, given that visual tokens complement text tokens in VLMs for linguistic reasoning, we select visual-relevant text tokens to rate the significance of vision tokens within the self-attention matrix extracted from the VLMs. Then we progressively prune irrelevant tokens. To maximize sparsity while retaining essential information, we introduce a rank-based strategy to adaptively determine the sparsification ratio for each layer, alongside a token recycling method that compresses pruned tokens into more compact representations. Experimental results show that our SparseVLM improves the efficiency of various VLMs across a range of image and video understanding tasks. In particular, LLaVA equipped with SparseVLM reduces 61\\% $\\sim$ 67\\% FLOPs with a compression ratio of 78\\% while maintaining 93\\% of the accuracy."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Sparsification",
"Vision Language Model",
"Efficiency"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ada02af6871129647c0ad48e059de7830e858e0c.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a10e0623dc9ca3a7e81354da3444fab5cb2a0ac2.pdf"
},
"title": {
"value": "SparseVLM: Visual Token Sparsification for Efficient Vision Language Models Inference"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1xzqz73hvL | High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws | main | Active | empirical risk minimization;high-dimensional statistics;scaling laws;weak to strong generalization;knowledge distillation | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;6;6;8 | 3;3;3;3 | 2;3;3;4 | 2;2;3;3 | 2;3;2;3 | 6.25 | 3 | 3 | 2.5 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In addition to the points above that should be clarified, I have the following question: how is the CIFAR-10 experiment performed? Notably, how are the surrogate and target distributions generated? This is worth expanding in the paper."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "This paper is rather well-written and contains quite a few interesting theoretical insights. The results draw clear delineations on how knowledge distillation through a surrogate model can help. As someone who hasn't thought about overparameterized linear regression in a while, Proposition 1 and Corollary 1 were rather surprising results, demonstrating that for a dataset size proportional to (but smaller than) the number of parameters, the optimal surrogate predictor to use for generating pseudolabels is actually not the ground truth predictor, and that there is (in theory) always room for benefit as long as the covariance is non-isotropic, which implies that a learner benefits from using something other than the actual distribution of labels. \n\nIn addition to the theory, the numerical results on CIFAR-10 also counterintuitively support that a learner trained only on surrogate pseudolabels on the target domain actually outperforms the surrogate model itself, which has access to true labels (albeit with a different covariate distribution...?)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a precise characterization of the benefits of knowledge distillation, mostly in the context of Gaussian linear regression. In particular, the main set-up considers the excess risk of linear regression on a given distribution, but with the learner only able to access pseudolabels generated by a surrogate model instead of the true labels. Notably, the authors show that under a covariance-shift model (i.e. the distribution of covariates $x$ may change between the surrogate and target stages, but the underlying predictor $\beta_\star$ remains the same in between), the optimal surrogate predictor minimizing the (asymptotic) excess risk on the target distribution is a weighted version of the ground-truth predictor $\beta_\star$, which amplifies entries corresponding to large eigenvalues of the (diagonal) covariance matrix above a certain threshold, and shrinks entries below a threshold. Furthermore, in a masked setting, where the surrogate model is restricted to selecting a subset of the full set of features, similarly the optimal surrogate predictor selects predictor entries above a certain threshold of covariance eigenvalues. Lastly, the authors show that in a certain asymptotic regime, an optimal surrogate-to-target model (i.e. a model trained on target distribution covariates with surrogate model pseudolabels) has the same excess risk as the least-squares target model trained with the true labels, demonstrating that knowledge distillation in a sense cannot beat out ERM with access to true labels."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Though the theoretical results are interesting, there are a few aspects that are worth clarifying. Notably, even though it is demonstrated that there can exist a surrogate model that induces better risk on the target model than using the true labels, in general a surrogate model is typically not trained with foreknowledge of the task distribution. I believe this is what Section 5 is trying to convey, but it is not clear to me after reading that section how to interpret the result therein. In particular, it should be explained how this relates to, for example, a target model that is trained using strong labels to demonstrate the marginal gain (or loss).\n\nIn general, the paper accrues a lot of jargon and notation; it would be very helpful to either create a table containing the different set-ups/regimes considered and the summary conclusion of the relative gain/suboptimality of knowledge distillation and/or a notation table that summarizes what the various risks denote. This would help clarify how to place the scaling law (Proposition 5) and Section 5 with respect to the results of the prior sections."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can you provide intuition why the risk of the surrogate-to-target model under the optimal selection of the parameters scales the same as that of the target model (even though there is a strict improvement in the risk)? I am wondering why improvement is possible. Is it because, for example, if the tail components of the covariance are zero, then features on these components are essentially useless, and therefore a surrogate that omits those components will be better? \n\n2. Your equation (8) involves $\\beta^{s2t}$. Does it mean that your asymptotic risk estimate (9) also involves $\\beta^{s2t}$ and thus cannot be directly computed? I think $\\beta^{s2t}$ should not appear in the final bound; otherwise one could just claim that the definition of the excess risk of $\\beta^{s2t}$ is already an exact characterization of itself.\n\n3. In Observation 1, you assume joint diagonalizability. Is there a fundamental obstruction to removing this assumption?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors provide comprehensive theoretical results for weak-to-strong generalization, giving a exact characterization of the excess risk of the weak-to-strong estimator. This knowledge distillation problem is important in modern machine learning, indicating the significance of this work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the problem of knowledge distillation under linear regression. In the first stage, data are collected from a surrogate model. In the second stage, a target model is trained using the data generated in the first stage. The authors characterize the non-asymptotic excess risk of the target model under \"model shift\" setting and \"distribution shift\" setting. Numerical results are provided, justifying their theory on ridgeless regression and on neural network architectures."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The presentation of this paper is in general not very satisfactory, in the sense that the paper lacks necessary intuition and explanation. For example, how should the non-asymptotic bounds be interpreted, and what does each term stand for? Why is it possible that the weak-to-strong estimator is even better than purely using the strong model?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can you give me some insights on how to extend the theory to neural networks (even to two-layer neural networks)? I think the authors should also discuss this in the revised version."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Knowledge distillation and weak-to-strong generalization are significant topics today, and their theoretical understanding is still very limited. Therefore, this is a meaningful paper for me.\n2. The theory is complete and well-written.\n3. The derived bounds seem tight because they match the empirical results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper provides a sharp characterization of knowledge distillation in the high-dimensional regression setting, covering both the model shift and distribution shift cases. Concretely, the paper characterizes the precise risk of the target model in both cases through non-asymptotic bounds in terms of sample size and data distribution under mild conditions. As a consequence, the paper identifies the form of the optimal surrogate model, which reveals the benefits and limitations of such processes. Finally, the paper validates the results with numerical experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The theory only focuses on the high-dimensional linear regression setting, which is well-studied in the literature. Besides, the results cannot be extended to neural networks directly.\n2. A typo in line 134."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* I would love to see Figure 1b and Figure 2b in log-log scale. One of the main points of the authors in Section 4 (Proposition 5) concerns the learning rates in the high-dimensional limit. It would be nice to see them in Figure 1b.\n* Is it possible to generalise the result of Section 3.1 to the case where the features are selected with a matrix $A$ which has a non-zero kernel? The masking seems a specific case of this.\n\n\n* There is a broken citation on page 3.\n* On line 334, is it \"omniscent test risk estimate\"?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper is mathematically sound and considers an interesting setting. The main strengths are:\n* The derivation of the different transition values for the covariates is sharp and, to my knowledge, a novel finding.\n* The theory matches simulations even at finite dimension, despite the result being high-dimensional.\n* The model introduced and studied is expressive enough to show the different behaviours that characterise performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper considers the problem of knowledge distillation in the setting of linear ridgeless regression in the teacher-student scenario.\n\nThe setting considered in this paper is the proportional regime where $p,n\to\infty$ and their ratio $\kappa_t = p/n$ is kept fixed. They consider $\kappa_t > 1$, the overparametrised regime.\n\nThe models considered in the paper are three:\n* _The Surrogate-to-Target model_, where the data is generated from a dataset $\mathcal{D}$ with input $x\in\mathbb{R}^d$ and output $y = x^\top \beta_\star + z$, with $\beta_\star$ a teacher vector. This data is used to estimate a min-norm estimator $\beta^s$, which generates a second dataset $y^s = x^\top \beta^s + z$; the final estimator $\beta^{s2t}$ is then fit on $(x, y^s)$.\n* _The Standard Target model_, where the model is evaluated on the generated data $(x, y)$.\n* _The Covariance Shift model_, where the dataset is generated with a certain choice of covariance and the population risk is then evaluated under a different covariance model.\n\nThe first part of the paper is devoted to finding the performance conditioned on a specific teacher, while the second-to-last section considers the full _Surrogate-to-Target_ setup.\nThe authors also consider the procedure of masking for the surrogate model. In this case the surrogate model has been trained on a masked version of the data, and the new labels are generated from the original inputs and the labels of the surrogate model.\n\nThe main technical results presented in the main text are the characterisation of the population risk for the model conditioned on $\beta^s$ and then for the _Surrogate-to-Target_ model.\n\nFor the case conditioned on the target, the authors are able to precisely derive the effect of the surrogate model on the final student, showing specific conditions (depending on the covariates and $\beta_\star$) under which $\beta^{s2t}$ performs better than the _Standard Target model_. The same is true for the masking."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Right now the mathematical result is introduced in generality without explaining the idea behind the proof. The authors could briefly explain that to derive the results one should apply the theory from [Han&Xu2023], which relies on the use of the convex Gordon min-max theorem.\n* The authors provide some numerical simulations of ResNet-50 on a CIFAR-10 classification task showing a result that qualitatively differs from the theory. Either this limitation should be explained in detail, or I don't think it is necessary to show it.\n* Is there any reason why the authors consider in their technical results the ridgeless estimator instead of the ridge one? A long series of works (e.g. [Hastie2020, Loureiro2022]) considers general losses and provides similar bounds.\n* _(Minor)_ Section 4 is presented unclearly. The settings for the propositions are not well explained and need to be introduced more clearly.\n\n[Hastie2020] Surprises in High-Dimensional Ridgeless Least Squares Interpolation. Annals of Statistics, 2020.\n\n[Loureiro2022] Learning curves of generic features maps for realistic datasets with a teacher-student model. NeurIPS 2022."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper provides a sharp characterization of a two-stage learning process, where the first-stage (surrogate) model's output supervises the second stage, thus revealing the form of optimal surrogates and when it is beneficial to discard features."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024highdimensional,\ntitle={High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1xzqz73hvL},\nnote={under review}\n}"
},
"abstract": {
"value": "A growing number of machine learning scenarios rely on knowledge distillation where one uses the output of a surrogate model as labels to supervise the training of a target model. In this work, we provide a sharp characterization of this process for ridgeless, high-dimensional regression, under two settings: *(i)* model shift, where the surrogate model is arbitrary, and *(ii)* distribution shift, where the surrogate model is the solution of empirical risk minimization with out-of-distribution data. In both cases, we characterize the precise risk of the target model through non-asymptotic bounds in terms of sample size and data distribution under mild conditions. As a consequence, we identify the form of the optimal surrogate model, which reveals the benefits and limitations of discarding weak features in a data-dependent fashion. In the context of weak-to-strong (W2S) generalization, this has the interpretation that *(i)* W2S training, with the surrogate as the weak model, can provably outperform training with strong labels under the same data budget, but *(ii)* it is unable to improve the data scaling law. We validate our results on numerical experiments both on ridgeless regression and on neural network architectures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"empirical risk minimization",
"high-dimensional statistics",
"scaling laws",
"weak to strong generalization",
"knowledge distillation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d37d4d5a65459a9acbf1097861cc2ebf51a4f402.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e65a71ee1b1003042a7bd78f371dd44caf75a404.pdf"
},
"title": {
"value": "High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1yJ3IDpb1D | HoTPP Benchmark: Are We Good at the Long Horizon Events Forecasting? | main | Active | Event Sequences;Marked Temporal Point Processes;Long Horizon Forecasting;Evaluation Metric;Benchmark | datasets and benchmarks | 3;3;5;5 | 2;4;2;3 | 2;2;2;3 | 2;2;3;2 | 2;2;2;2 | 4 | 2.75 | 2.25 | 2.25 | 2 | -0.301511 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could the authors provide more detailed reasoning on why T-mAP is the most suitable metric for long-horizon MTPP evaluation? A comparison with other metrics, such as OPW, would help clarify the unique advantages of T-mAP for certain datasets or applications. Are there specific scenarios where T-mAP particularly excels?\n\n2. How does T-mAP adapt to different domains represented in the HoTPP benchmark, such as healthcare vs. social media? It would be helpful if the authors could provide additional domain-specific analysis or clarify if they observed any notable trends in metric performance across different datasets.\n\n3. Given the computational intensity of some methods (e.g., continuous-time LSTM), what optimizations, if any, do the authors recommend for users with limited hardware resources? Would simplifying certain models or using hybrid methods maintain benchmark validity while improving accessibility?\n\n4. Next-K models are briefly discussed, but can the authors elaborate on alternative structures or settings within this family of models? Exploring how these models perform differently across long-horizon tasks could provide insights into their benefits or limitations and would help clarify if more complex Next-K structures could outperform standard autoregressive models."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Innovative Evaluation Metric: The paper introduces Temporal mean Average Precision (T-mAP), a novel metric inspired by object detection, which addresses limitations in existing evaluation methods for long-horizon forecasting by accurately handling false positives and negatives, offering a refined measurement of model performance in Marked Temporal Point Processes (MTPP).\n2. Practical Benchmark: The HoTPP benchmark developed in this work includes large-scale, diverse datasets and optimized inference procedures, establishing a standardized framework that supports both autoregressive and parallel inference, greatly enhancing research accessibility and reproducibility in long-horizon event forecasting.\n3. Comprehensive Empirical Analysis: The paper rigorously evaluates various models, including rule-based and advanced neural methods, across multiple datasets, providing robust empirical evidence that reveals critical insights into the performance trade-offs of next-event vs. long-horizon forecasting.\n4. Clear and Structured Presentation: The paper clearly articulates the challenges in long-horizon event prediction, explaining the proposed methodology and its advantages with illustrative figures and well-organized tables, making complex concepts accessible."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces HoTPP (Horizon Temporal Point Process Benchmark), a benchmark for evaluating long-horizon event sequence prediction models. Its main contributions are as follows. First, it proposes a novel evaluation metric called Temporal mean Average Precision (T-mAP), inspired by object detection metrics in computer vision, which properly handles variable-length sequences within a prediction horizon and accounts for false positives and false negatives. Second, it demonstrates through comprehensive analysis that models with high next-event prediction accuracy don't necessarily perform well at long-horizon forecasting, suggesting the need for specialized approaches for each task. Third, it develops the HoTPP benchmark, which includes large-scale datasets from diverse domains (finance, healthcare, social networks) with up to 43 million events, implementations of various modeling approaches (including rule-based baselines, intensity-free methods, intensity-based methods, and Next-K prediction models), optimized procedures for both autoregressive and parallel inference, and a theoretical proof of the correctness of the T-mAP computation. Finally, extensive experiments across multiple datasets reveal the trade-offs between next-event and long-horizon prediction performance, the benefits of Next-K approaches for long-horizon predictions, the importance of proper sequence length selection, and an analysis of label distribution entropy degradation in autoregressive predictions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited Justification of Metric Selection: While the T-mAP metric is an innovative contribution, the paper could strengthen its justification for why T-mAP is superior to other established metrics in specific application scenarios. Including more detailed comparisons with alternative metrics, such as Order-preserving Wasserstein Distance (OPW), could provide further evidence of T-mAP's effectiveness, especially in complex event sequences.\n2. Lack of Fine-Grained Analysis for Different Domains: The datasets span diverse fields like healthcare and social media. However, the paper does not explore in depth how T-mAP performs within these domains. Analyzing domain-specific challenges or model performance variations across fields could add depth and highlight the metric’s adaptability, further demonstrating HoTPP’s real-world applicability.\n3. Computational Constraints on Benchmark Implementation: Some models, like the continuous-time LSTM, require extensive computational resources, limiting their practical applicability. The paper could improve by suggesting or including optimizations, such as more efficient GPU implementations or leveraging hybrid models, making the benchmark more accessible to researchers with limited resources.\n4. Limited Exploration of Next-K Models: Although the paper discusses Next-K models and their potential in improving long-horizon forecasting, there is little exploration of variations within this model family. Providing examples or implementing alternative Next-K structures could substantiate the claims regarding their advantages, offering actionable insights for researchers interested in non-autoregressive alternatives.\n5. Lack of Qualitative Error Analysis: The paper could benefit from qualitative error analysis to clarify why some models underperform on long-horizon metrics. Visual examples or error case studies might offer valuable insights into prediction failures, guiding future model improvements by highlighting common error patterns in long-horizon event forecasting."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Deep methods do not necessarily outperform rule-based methods; how does comparing them illustrate the superiority of the proposed metric?\n2. What is the main message of Section 4.3? It seems to only show that the metric increases monotonically with delta.\n3. Can the authors discuss which scenarios are most suitable for different types of metrics? For example, how does the proposed metric handle sparse or highly irregular data compared to traditional ones?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well-organized, with clear sections delineating the introduction, related work, methodology, experiments, and conclusions.\n2. Figures and tables are used effectively to illustrate concepts and present experimental results.\n3. The significance of forecasting multiple future events is substantial."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "The paper introduces a new metric, Temporal mean Average Precision (T-mAP), which is designed for evaluating long-horizon predictions in marked temporal point processes (MTPP). This offers a new perspective on evaluating forecasting accuracy beyond traditional metrics like OTD. It also introduces HoTPP, a new benchmark for long-horizon forecasting, which provides large datasets and optimized procedures for both autoregressive and parallel inference."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation in section 3, while intended to address deficiencies in existing methods such as OTD, appears weakly articulated. The example provided in Figure 2 lacks clarity, particularly why OTD cannot correctly match the last triangle with the ground truth. Furthermore, the assertion that OTD is computed between fixed-size prefixes is confusing since n_p and n_gt vary, which contradicts the description in line 178 to 185.\n2. Some claims are not clear. For example, in line 230 T-mAP identifies the matching that maximizes both precision and recall simultaneously. Typically, these metrics are in a trade-off relationship. How T-mAP manages to maximize both?\n3. The experimental section (Section 6.1) lacks specific references to the methods and datasets discussed, which makes it difficult to follow and verify the stated findings. Direct references to specific methods and datasets should be included to enhance clarity."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "- How is T-mAP better than the existing metric? \n\n- More analysis on long-horizon event forecasting is needed. \n\n- Why are the baselines not new?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
        "value": "- This paper proposes a new metric, namely T-mAP, to better evaluate the performance of long horizon events forecasting. \n\n- This paper includes large-scale datasets with up to 43 million events. \n\n- This paper releases the HoTPP open-source benchmark to facilitate future research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper benchmarks the task of long horizon events forecasting. The main contribution of this paper is to propose a new metric, namely T-mAP. Moreover, this paper also includes large-scale datasets with up to 43 million events. Despite these contributions, the paper is not well-written and not easy to understand."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "- This paper is not well-written and hard to follow. The main contribution of this paper is to propose T-mAP to evaluate models on the task of long horizon events forecasting. However, why T-mAP is better than the existing metric, i.e., OTD, is not clearly introduced. \n\n- The experiments are not extensive. The title of this paper is about long-horizon event forecasting, but the experiments cover the topic of next-item forecasting equally. \n\n- The number of compared baselines is not many, and the baselines are not proposed recently. It seems that the topic of event forecasting is not very hot, so the motivation that we need a benchmark is not very strong."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The author's motivation is clearly expressed, the approach sounds interesting, the problem of modeling long sequences of events is well addressed, and the large dataset developed has the potential to advance the field."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce Temporal mean Average Precision (T-mAP), a temporal variant of mAP that overcomes the limitations of existing long-term evaluation metrics. They also release HoTPP, the first benchmark specifically designed to evaluate long-term MTPP predictions, which includes a large-scale dataset of up to 43 million events."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "1. The paper is difficult to understand. What are the ‘structured descriptions of each event’? What are the challenges of autoregressive prediction in line 052? How are subsequences handled for each event type in line 99? This increases the difficulty for readers to understand, and it is recommended that the authors explain these terms in more detail.\n\n2. The motivation is vague. Why do we need long-term event prediction? In fact, we can execute the Next-event task multiple times to get similar results.\n\n3. How do the proposed metrics perform in a long-tail prediction scenario?\n\n4. Does the proposed metric take into account the time error of event occurrence? If the time interval corresponding to the event is far away, can the method cope with this situation?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A novel metric and benchmark for evaluating long-horizon event sequence forecasting."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024hotpp,\ntitle={Ho{TPP} Benchmark: Are We Good at the Long Horizon Events Forecasting?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1yJ3IDpb1D},\nnote={under review}\n}"
},
"abstract": {
"value": "Accurately forecasting multiple future events within a given time horizon is crucial for applications in finance, retail, social networks, and healthcare. Event timing and labels are typically modeled using Marked Temporal Point Processes (MTPP), with evaluations often focused on next-event prediction quality. While some studies have extended evaluations to a fixed number of future events, we demonstrate that this approach leads to inaccuracies in handling false positives and false negatives. To address these issues, we propose a novel evaluation method inspired by object detection techniques from computer vision. Specifically, we introduce Temporal mean Average Precision (T-mAP), a temporal variant of mAP, which overcomes the limitations of existing long-horizon evaluation metrics. Our extensive experiments demonstrate that models with strong next-event prediction accuracy can yield poor long-horizon forecasts, and vice versa, indicating that specialized methods are needed for each task. To support further research, we release HoTPP, the first benchmark specifically designed for evaluating long-horizon MTPP predictions. HoTPP includes large-scale datasets with up to 43 million events and provides optimized procedures for both autoregressive and parallel inference, paving the way for future advancements in the field."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Event Sequences",
"Marked Temporal Point Processes",
"Long Horizon Forecasting",
"Evaluation Metric",
"Benchmark"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1577622d8ed8c7a9b1967199142596f7c388bfe8.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/6edf9a582d3a0b51194781a286211522659c41b3.zip"
},
"title": {
"value": "HoTPP Benchmark: Are We Good at the Long Horizon Events Forecasting?"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1yJP5TVWih | Lambda-Skip Connections: the architectural component that prevents Rank Collapse | main | Active | Rank Collapse;Skip Connections;Sequence Modeling Architectures | learning theory | 5;6;6;8 | 4;3;2;4 | 3;3;3;3 | 2;2;3;3 | 3;2;3;3 | 6.25 | 3.25 | 3 | 2.5 | 2.75 | 0.207514 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
        "value": "I have a reasonable estimate of creativity and technical depth, but it is difficult for me to assess impact. I am not familiar with the area and my assessment has limited confidence. I would not, for example, know if rank collapse is widely appreciated within even the transformer \"community\" (if there is any such thing). I have not seen MAMBA become that visible or widely used compared to standard transformer-based LLMs, but cannot speculate if rank collapse played a role. $\\lambda$-tuning for robustness seems quite useful, but again I do not know the area well enough to know if, in practice, $\\lambda=1$ is frequently dangerous. If the authors point to specific places in the paper where the above issues are discussed, or add some more motivating support, that would be helpful.\n\nA few writing style and notation nits:\n\nL156-L159 set up $X^{(k)}$ as layer input and $Y^{(k)}$ as layer output. However, equation (1) introduces $O^{(k)}$ without explaining that it will provide skipping ability in equation (2).\n\nL174-L178 There seem to be some inconsistent subscripts and superscripts. On one side we see $A^{(k)}_t, B^{(k)}_t$ etc. But just after the displayed equation, for LTI systems we see the superscript $(k)$ disappear, without an explanation of whether this is because the LTI system is assumed to have one layer.\n\nL888-L903 While setting up expressions and bounds with so many variables, it helps to afterward highlight the most important 1-2 variables, and give qualitative connections between their typical values in practice and the implications on the bounds. E.g., how easy or difficult would it be to choose an acceptable $\\lambda$ in a typical LLM? Also, some of the definitions like $S$ are very far from the proofs in the appendix."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Reasons to accept:\n* Identifies rank collapse problem in state-space models like MAMBA, similar to earlier discovery of this problem in transformer-type networks.\n* Identifies skip strength parameter $\\lambda$ as an important knob to limit the damage of rank collapse."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "Dao and Gu [https://arxiv.org/pdf/2405.21060] established a form of equivalence between transformers and continuous-time state-space models. In a different development, Dong et al. [https://arxiv.org/abs/2103.03404] showed that self-attention layers without skip connections or MLPs suffer from \"rank collapse\" ― with increasing layers, the output matrix tends to rank-1, i.e., all token positions tend to the same representation.\n\nThe present submission puts these together to show that rank collapse is a problem also for state-space models. It shows that the skip connection provides vital protection against rank collapse, but that a weighted addition (with weight $\\lambda$ which may be regarded as a hyperparameter, or perhaps trainable) with the skip connection is more flexible."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Reasons to reject:\n* Given the two papers on which this paper builds, it might be argued that the present work is relatively incremental. (That being said, I appreciate the candor while setting up the contributions of this paper, and I learnt something from it.)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The lower boundary of the rank collapse of $\\lambda$ skip connections is analytically derived. The results agree well with empirical analysis.\n2. The paper presents the convergence rate in the absence of skip connections, contributing valuable insights."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper analyzes the rank collapse of SSM due to identical $\\lambda$ skip connections. The authors provide a rigorous convergence rate for the rank collapse and offer sufficient guarantees to prevent it. Experimental results demonstrate the effectiveness of their analysis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "1. The authors analyze the $\\lambda$ skip connections. However, the skip strength $\\lambda_k$ may vary on different layers. The paper should discuss how the findings hold up under these varying conditions. Additionally, many models implement skip connections selectively across layers rather than uniformly. A discussion on the generalizability of the results would enhance the paper.\n2. Theorem 4.1 paves the way to choose a suitable $\\lambda$. However, in Figure 2, it appears that when $\\lambda$ is sufficiently large, the rank collapse index shows little variation. Clarification on how to determine the optimal value of $\\lambda$ would be beneficial.\n3. Based on Theorem 4.1, could the authors explore adding constraints to the parameters to optimize $C_M$, $S$ and $c$ for improved neural network performance?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The rank collapse metric is not normalized in the definition. Would it be enough to lower bound the rank collapse metric, when the norm itself evolves across layers?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper addresses the significant issue of rank collapse in sequence model architectures. It offers both theoretical analysis and empirical evaluation to support the proposed architectural component aimed at resolving this problem. I like the remark that provides the parameters corresponding to the practical architectural settings.\n\nAdditionally, the theoretical development and overall presentation of the paper are commendably clear and well-structured."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper examines the phenomenon of rank collapse in general sequence model architectures, including transformers and state space models. To mitigate this issue, the paper proposes a parameterized version of the skip connection that multiplies the residual stream by a constant factor. Theoretical analysis identifies the conditions on the parameter sufficient to prevent rank collapse, and an analytical example demonstrates that neither the absence of skip connections nor the standard implementation prevents rank collapse. Finally, empirical evaluations support the findings of theoretical analysis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The theory investigates the sufficient conditions for preventing rank collapse in the worst-case scenario. This could imply that the required conditions are overly stringent."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1. Between Eq. (3) and Eq. (6), there is ambiguity regarding the residual term, specifically whether X or V serves as the residual component. This inconsistency could impact the theoretical derivations that follow. Could the authors clarify this definition? Additionally, using the same symbol D for both SSM and LayerNorm contexts creates potential confusion. Distinct notations would enhance clarity.\nQ2. The theoretical conditions for λ appear to be conservative compared to empirical findings. Could the authors explain this discrepancy? Furthermore, the appendix notes cases of rank stability without skip connections, which might challenge the theory. An analysis of these cases would be valuable.\nQ3. Could the authors provide additional experiments showing the model’s downstream performance as a function of layer depth and skip strength? Also, would the inclusion of alternative metrics, such as effective rank, offer a more comprehensive assessment of rank collapse?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "S1. The paper tackles the problem of rank collapse, extending its analysis from transformers to SSMs.\nS2. Through theoretical proofs, the paper demonstrates that lambda-skip connections prevent rank collapse, preserving model expressivity in both transformers and SSMs.\nS3. Experimental results show that lambda-skip connections and other components enhance expressivity and stability across different model architectures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses rank collapse, a phenomenon where embedding vectors in deep learning models converge to a uniform state. Building on previous studies that focused on transformers, this paper extends the analysis to State Space Models (SSMs). The study employs theoretical and empirical analysis to demonstrate how lambda-skip connections, LayerNorm, and gating mechanisms contribute to both the stability and expressivity of transformers and SSMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. The definition of the residual term between Eq.(3) and Eq.(6) is inconsistent, with ambiguity around whether X or V serves as the residual term. This inconsistency impacts the theoretical derivations that follow and should be clarified to ensure precise interpretations. Additionally, certain symbols, such as D, are used in both the SSM and LayerNorm contexts but represent different meanings. Distinct notation would improve readability and reduce potential confusion.\nW2. While the experiments generally align with the theoretical predictions, some disparities remain unaddressed. For example, the theoretical threshold for λ appears more conservative than the empirical results suggest, and additional clarification would help. Further, the appendix notes rank stability even without skip connections, which might challenge the presented theory.\nW3. The paper primarily focuses on rank collapse within the model’s architecture but does not connect this phenomenon to downstream task performance. Adding experimental results that measure downstream task performance in relation to model depth and skip connection strength could provide a more comprehensive assessment."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024lambdaskip,\ntitle={Lambda-Skip Connections: the architectural component that prevents Rank Collapse},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1yJP5TVWih},\nnote={under review}\n}"
},
"abstract": {
        "value": "Rank collapse, a phenomenon where embedding vectors in sequence models rapidly converge to a uniform token or equilibrium state, has recently gained attention in the deep learning literature. This phenomenon leads to reduced expressivity and potential training instabilities due to vanishing gradients. Empirical evidence suggests that architectural components like skip connections, LayerNorm, and MultiLayer Perceptrons (MLPs) play critical roles in mitigating rank collapse. While this issue is well-documented for transformers, alternative sequence models, such as State Space Models (SSMs), which have recently gained prominence, have not been thoroughly examined for similar vulnerabilities. This paper extends the theory of rank collapse from transformers to SSMs using a unifying framework that captures both architectures. We introduce a modification in the skip connection component, termed lambda-skip connections, that provides guarantees for rank collapse prevention. We present, via analytical results, a sufficient condition to achieve the guarantee for all of the aforementioned architectures. We also study the necessity of this condition via ablation studies and analytical examples. To our knowledge, this is the first study that provides a general guarantee to prevent rank collapse, and that investigates rank collapse in the context of SSMs, offering valuable understanding for both theoreticians and practitioners. Finally, we validate our findings with experiments demonstrating the crucial role of architectural components in preventing rank collapse."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Rank Collapse",
"Skip Connections",
"Sequence Modeling Architectures"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a60f06d2e37696e24c034db746d2f44be56a5348.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Lambda-Skip Connections: the architectural component that prevents Rank Collapse"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ymGFnxfVB | LJ-Bench: Ontology-based Benchmark for Crime | main | Active | Ontology;Knowledge Graph;Crime;Language Models | datasets and benchmarks | 3;5;5;5 | 4;3;4;4 | 3;3;2;2 | 1;2;2;2 | 1;3;4;2 | 4.5 | 3.75 | 2.5 | 1.75 | 2.5 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "This article presents an interesting contribution to the evaluation of Large Language Models (LLMs) in the context of harmful information, particularly through the introduction of LJ-Bench, a benchmark designed around a structured ontology of crime-related concepts. The systematic assessment of LLMs against a variety of illegal activities offers valuable insights into their vulnerabilities, particularly regarding societal harm. This focus is particularly relevant in today’s landscape, where the safe deployment of LLMs is a pressing concern.\n\nHowever, the article also has several notable shortcomings that warrant attention. Firstly, the structure of the paper feels fragmented, with sections detailing specific aspects of the research without a coherent flow, which may hinder readers' comprehension of the overall argument. Additionally, some of the choices made throughout the study, such as the selection of prompts, appear arbitrary and lack adequate justification, raising questions about the robustness of the methodology. Furthermore, the decision to focus solely on the Gemini model is not sufficiently motivated; a broader evaluation involving multiple models could provide a more comprehensive understanding of LLM vulnerabilities in relation to illegal queries.\n\nLastly, the article does not adequately address how the proposed ontology will be maintained over time, which is crucial for its practical application and relevance. Overall, while the work has the potential to be a valuable resource for researchers aiming to enhance the safety of LLMs, these unresolved issues suggest that further refinement and discussion are needed to strengthen the overall contribution.\n\nQuestions:\n- Given the fragmented structure of the article, how do you envision improving the coherence of your arguments in future revisions to enhance reader comprehension?\n- What specific criteria did you use to select the prompts for evaluation, and how might you address the potential concerns regarding the perceived arbitrariness of these choices?\n- Could you elaborate on your rationale for focusing exclusively on the Gemini model for evaluation? Would you consider expanding this analysis to include other LLMs to provide a broader perspective on their vulnerabilities?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Benchmark development.\n- Systematic evaluation: Assessment of LLMs across 76 distinct types of crime.\n- Focus on societal harm: The article emphasizes an important aspect of model evaluation that can inform future research and development efforts aimed at enhancing model safety and trustworthiness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors tackle the risk of Large Language Models (LLMs) providing harmful information by introducing LJ-Bench, a benchmark grounded in a legally structured ontology of 76 crime-related concepts. This dataset tests LLMs against a broad range of illegal queries, revealing that LLMs are particularly vulnerable to prompts associated with societal harm. By highlighting these vulnerabilities, LJ-Bench aims to support the development of more robust, trustworthy models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Fragmented structure, especially the Related Work section.\n- Some arbitrary choices, particularly regarding the selected prompts.\n- Limited justification on focusing on the Gemini model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The paper does contain ethical issues but it tries to address them.\nAnother review may be useful."
},
"flag_for_ethics_review": {
"value": [
"Yes, Other reasons (please specify below)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "W1-W7"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "S1. Jailbreaking benchmarks for law are very important.\n\nS2. The detailed ontology is good.\n\nS3. The results are detailed and explained well. The appendix includes lots of real cases and prompts and other details."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a legal crime jailbreaking benchmark based on California law. It also provides an ontology of crimes with 76 categories."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. The scope of this paper is very restricted. LJ-Bench is based on California law. How applicable is it to other countries?\n\nW2. What about harm \"against trees and plants\"? Is there no law in California against this?\n\nW3. Is the ontology vetted by law experts and professionals?\n\nW4. What is the point of augmented dataset of extended questions? Does it not fall in the same issues as in Fig 5, that is, of very similar text, and not really new content?\n\nW5. How effective the jailbreaking answers are should be evaluated by humans. Another LLM, that too of the same kind, may be biased in evaluation. Hence, a human evaluation is needed.\n\nW6. Is Table S3 not the full list? The caption says something different, though. Or does it need to be combined with Table S4 to get the full mapping of 76 categories and number of questions corresponding to each in the benchmark?\n\nW7. How applicable is this method to non-English prompts?\n\nW8. Typo: Contribution points 2 and 3 are repeated\n\nW9. Typo: Sec E.1 title"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "I am still not sure how the introduction of this benchmark helps us make more responsible use of LLMs. For people studying crime and legal issues, it seems that disabling the LLM from relying on this benchmark to answer questions (which I presume would be the obvious use case) would be overly broad. On the other hand, I'm not seeing sufficient evidence that, even if that were the goal, the benchmark could prevent it. For example, if I were to change the prompts and questions in slight ways, would the language model still not answer? I am not sure that there is a general and foolproof solution to the jailbreaking problem. More experiments and robustness studies would have helped express this more convincingly. Nevertheless, the authors should feel free to comment on this concern."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "--The use case and motivation behind the paper is reasonably strong, as evaluating the robustness of LLMs against a broad enough range of illegal activities is clearly important. \n--There is sufficient description of related work; in fact, I believe this may be the strongest part of the paper. \n--There is reasonable clarity in the way the paper is written, although I do believe it could use some more quality improvement and proofreading, as I state below."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The widespread usage and ease of access of LLMs to information make it imperative that we\nstudy their robustness against potential harm they might cause to society. The authors\nintroduce a new benchmark called LJ-Bench, inspired by legal frameworks, and\nprovide the first detailed taxonomy on the types of questions whose responses would elicit harmful\ninformation. It contains crime-related concepts, supporting 76 classes of illegal\nactivities. The authors then conduct an experimental analysis of attacks on LJ-Bench, \nbased on the new types of crime as well as the hierarchical categories."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "--The experimental results are not up to the mark in this paper. First, they are not as extensive as they need to be, but more generally, they lack the type of scientific grounding (e.g., statistical significance results) that would be necessary in a paper purporting to be centered on responsible use of AI. \n--There are some presentation issues. First, the figures are not of sufficiently high quality. Second, the paper clearly lacks adequate proofreading e.g., on page 2, a bullet point is repeated, on page 8 the word 'original' is misspelt and so on."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "As the benchmark is designed to assess model safety when asked to assess, or answer questions, about illicit acts - thus the dataset contains questions about how to perform illicit acts, including questions which can jailbreak current models."
},
"flag_for_ethics_review": {
"value": [
"Yes, Potentially harmful insights, methodologies and applications"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Is there a reason why this benchmark was not run on OpenAi and Anthropic Models?\n- Do you have a sense of how extensible this work is to other legal frameworks?\n - In \"For example, the nature of the answer would differ significantly when seeking classified information from the CIA (Central Intelligence Agency) compared to obtaining similar information from a local police station.\" how would you expect the answer to differ, could you have short examples?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The authors use their formal mappings to legal structures to ensure that the questions contained in their benchmark fairly represent all relevant types of crime described under Californian Law and the Model Penal Code.\n - The authors use their formal mappings to legal structures to ensure that the questions contained in their benchmark fairly represent all relevant types of crime described under Californian Law and the Model Penal Code. We see this as a food technique to ensure fair distribution of question types in a benchmark,\n - The authors present both the benchmark, and an experimental evaluation of how a model (gemini 1.0) performs against that benchmark."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce LJBench, a benchmark of questions about crime-related concepts - designed to assess LLM safety in responding to such questions. The primary outputs of this paper are:\n - An OWL ontology, that re-uses some concepts from schema.org, for describing legal concepts from Californian Law and the Model Penal Code, describing 76 distinct types of crime\n - LJ-Bench: A dataset of 630 questions asking how to perform acts considered illegal under Californian Law or the Model Penal Code - with a fair distribution of questions across the 76 types of crime.\n - Structured OWL descriptions of each question from the LJ-Bench dataset, describing the type of crime each question relates to and whom the crime applies to.\n - Experiments to assess the outputs of Gemini 1.0 on these questions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Comments on the ontology**\n\nWhilst the choice of formally representing legal concepts in an ontology is a sensible approach, we have some concerns around the methodology used to create the ontology. In particular:\n - There is extensive literature on legal ontologies which the authors do not reference, we encourage the authors to review the following papers:\n\t - \"A systematic mapping study on combining conceptual modelling with semantic web\"\n\t - \"Legal ontologies over time: A systematic mapping study\"\n after reviewing these papers we suggest that the authors identify:\n\t - Whether there are existing ontologies capturing concepts from Californian law that should be re-used, and\n\t - Whether there are more suitable ontologies beyond schema.org that they should use as the foundation for the ontology for lj-bench\n - There is no rigorous methodology described for:\n - How the authors identified the 76 distinct types for crime from Californian Law and the Model Penal Code, nor why they have chosen the 4 broader categories to class these into.\n - How the four super categories of \"against a person, against property, against society, and against an animal\" were identified and selected.\n\nWe have also observed the artefacts that the authors have submitted, and have the following comments on the ontology design:\n - In the supplementary materials, only a fraction of the 630 questions from lj_bench are described in lj-ontology.rdf\n - There appear to be modelling errors in the disjoint class declarations. For instance \"rape\" is disjoint from \"sex offence\", when it likely should be classified as a subset.\n - nitpick: owl:ObjectPropertys defined in the schema are missing rdfs labels and comments (e.g. crime:steals)\n - nitpick: Classes defined in the schema are missing labels\n - nitpick: It is poor practice to have URIs with commas (,) question marks (?) 
or the (&) symbol\n - nitpick: Literals in comments inappropriately contain formatting, e.g. \"mis-\\nappropriates\" should be \"misappropriates\"\n - Information should not be implicitly encoded in the names of URIs; with crimes like \"crime:unlawful_interference_with_property\". Instead of having\n\n```\ncrime:unlawful_interference_with_property a crime:Unlawful_Interference_With_Property, owl:NamedIndividual .\n```\n\nhave\n```\ncrime:propertyInterference a crime:PropertyInterference, owl:NamedIndividual ;\n\trdfs:label \"Unlawful Interference With Property\"\n```\nI would also consider adding an rdfs:comment. \n\nPlease also review these suggestions https://chatgpt.com/share/6713d39d-1388-800c-a886-4e9ee3994efa, in particular on:\n - Naming conventions\n - Incomplete property definitions\n - Overlapping disjoint classes\n\n**Other Nitpicks**\n - We suggest the authors do note place \"few\" in brackets in the first figure\n - We request the authors include a turtle (ttl) serialisation of their ontology artefacts for human readability\n- Lots of quotes opened incorrectly, e.g. see list in attack section\n - Please reference schema.org better in the bibliography"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ljbench,\ntitle={{LJ}-Bench: Ontology-based Benchmark for Crime},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1ymGFnxfVB},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite the remarkable capabilities of Large Language Models (LLMs), their potential to provide harmful information remains a significant concern due to the vast breadth of illegal queries they may encounter. In this work, we firstly introduce structured knowledge in the form of an ontology of crime-related concepts, grounded in legal frameworks. This ontology serves as the foundation for the creation of a comprehensive benchmark, called LJ-Bench, the first extensive dataset designed to rigorously evaluate the robustness of LLMs against a wide range of illegal activities. LJ-Bench includes 76 distinct types of crime, organized into a taxonomy. By systematically assessing the performance of diverse attacks on our benchmark, we gain valuable insights into the vulnerabilities of LLMs across various crime categories, indicating that LLMs exhibit heightened susceptibility to attacks targeting societal harm rather than those directly impacting individuals. Our benchmark aims to facilitate the development of more robust and trustworthy LLMs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Ontology",
"Knowledge Graph",
"Crime",
"Language Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/94e5cf629c3d811c8e1e1206abb6d9a8a111eda1.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "LJ-Bench: Ontology-based Benchmark for Crime"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1z3SOCwst9 | Differentially private learners for heterogeneous treatment effects | main | Active | Causality;differential privacy;treatment effect estimation | causal reasoning | 5;6;6;6 | 4;3;2;3 | 3;3;4;3 | 2;2;3;2 | 2;4;4;4 | 5.75 | 3 | 3.25 | 2.25 | 3.5 | -0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How many queries were done for Dataset 1 (Figure 3 and 4)?\n2. Do the authors have a sense for how tight they think the upper bound for smooth sensitivity is using the gross error sensitivity? Is there room to make it tighter with more assumptions?\n3. In Figure 4, it seems like DP-CATE consistently underestimates the CATE at the 0.01 privacy budget. Is there any intuition and/or concrete results on whether this method tends to underestimate or overestimate?\n4. What does y-axis represent in Figure 6 and 7? Is it the actual CATE or is it the error?\n5. How was the Lipschitz constant computed for the empirical results?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- By and large this is a well-written paper and it was easy to understand what both algorithms were doing at a high-level.\n- The methods provide more flexibility than prior work in supporting all types of ML models\n- The separation between knowing the number of queries a priori and not is well taken as it allows them to build a stronger estimator in the fixed query setting\n- The experiments show some promise that the estimator is relatively close to the ground truth in the synthetic experiments"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This study proposes two new methods to address the problem of computing CATE estimates under differential privacy. Leveraging the often used assumption of number of queries being known a priori the authors develop a method that uses influence functions as an upper bound to the smooth sensitivity quantity. This is used in a traditional output perturbation algorithm of the CATE estimate to calibrate the Gaussian noise. The second method is for releasing the CATE function in its entirety under differential privacy. This is a much more difficult problem. The authors develop an algorithm that guarantees DP by using a calibrated Gaussian process to modify the output of the original CATE algorithm. They develop an algorithm for determining how to calibrate this process leveraging theory about RKHSs and Gaussian kernel regression. The efficacy of these algorithms are demonstrated on synthetic datasets where access to the ground truth CATE is available and observational medical datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The algorithm for releasing the CATE function assumes knowledge of the Lipschitz constant which unless I am mistaken seems like an unrealistic assumption (note I am not concerned with assuming Lipschitzness of the loss)\n- More details about the experiments are needed to help understand them. Right now the details are quite sparse so it’s difficult to contextualize them in each of the challenges described. I leave my questions related to this for the Questions section of the review.\n- Unless I am misunderstanding Figure 6 and 7, it seems like th CATE is constant along the ages / covariates for each task? If so, this is not as compelling for the success of this method. I think like the synthetic dataset, an observational task should be chosen where the CATE differs based on the covariate I will be adjusting. It’s important to understand how well the DP estimator can capture the variation in CATE as the condition covariate changes."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "* In the problem statement, do you have any constraints on the domain of X? In practice is it usually a small domain (e.g., one feature) or large domain (many features)? \n* In the finitely many queries setting, can you be more precise about the typical characteristics of the setting? How many queries is typical, and what do those queries look like?\n>* If the number of queries is small (e.g., quantify CATE for ages 0-20, 20-30, 30-40, ...,) then the problem is trivial.\n>* I think it would be good to give a motivating example to make the abstract problem formulation you have a bit more grounded."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The paper is technically sound and statistically rigorous.\n* The problem studied is novel and of practical interest.\n* The paper is clearly written and nicely polished."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the problem of causal inference with sensitive observational data with DP, motivated by medical applications. specifically, the authors study estimtaing the CATE function (conditional average treatment effect), which as the name implies, quantifies the effect of a treatment as a function of some covariate. The authors propose a simple output perturbation mechanism to estimate the CATE with DP."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The paper uses some jargon that may not be familiar to the reader (e.g., doubly-robust). \n* There are no comparisons to baselines. While the problem studied is new there are no published baselines on this approach, there are some simple baselines you could compare against. \n>* Here's one: in 1D case you study in e.g., Fig 1, 6 just compute the CATE function non-parametrically. That is, compute COUNT(Y=1, a <= X <= b) and COUNT(Y=0, a <= X <= b) for a variety of intervals in the domain, from which you can estimate CATE easily.\n>* To handle higher dimensional case, you could compute those counts for each covariate and then make some kind of conditional independence assumption. In the p=2 case you consider in experiments, you could also just directly compute the full histogram and it should be pretty doable. I would assume there are both qualitative and quantitative advantages to your approach, but I think it would be good to demonstrate that explicitly. \n>* From the idea above, it seems this can be framed as a marginal-preservation problem, a problem that many synthetic data algorithms are pretty good at (e.g., PrivBayes). That could be another baseline."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- How does the proposed approach for functional data compare to the techniques in Hall et al. (2013)? Highlighting these distinctions would provide valuable context for readers.\n- While the framework is general, how does its performance compare numerically to similar algorithms (e.g.,Betlei et al. (2017), Guha & Reiter (2024) and Niu et al. (2019)) in settings where those methods are applicable?\n- Could the authors provide insights on how functional estimation compares to pointwise estimation in practical scenarios?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The paper is exceptionally well-written and easy to follow, providing a clear presentation of complex concepts.\n- The authors present a well-founded methodology and provide rigorous mathematical analysis.\n- Experimental results on both synthetic and real data help to validate the proposed framework.\n- The framework is general and flexible, covering a wide range of learning algorithms without making unrealistic assumptions.\n- The discussion in line 314 about the relationship between doubly robust learners and smooth sensitivity offers an interesting insight that could have broader implications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new framework (DP-CATE) for estimating conditional average treatment effects (CATE) under differential privacy while ensuring double robustness (see below my comment on robustness). DP-CATE is broadly applicable to two-stage CATE meta-learners with a Neyman-orthogonal loss. The framework can perform both pointwise estimation of the CATE function and direct functional estimation. The authors provide experimental results on synthetic and real data that demonstrate the effectiveness of DP-CATE."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the authors discuss the novelty of their work on functional data privacy, they underplay previous work in this area. Notably, Hall et al. (2013) provides a mechanism for functional data, which the authors should highlight more explicitly to offer better context for readers.\n- The use of “robustness” could be misleading, as it carries different meanings across fields. Clarifying what robustness specifically entails here would be helpful, especially considering existing work on “privacy and robustness.”\n- Certain definitions, such as those on line 157, could be recalled for clarity. Additionally, the definition of Y
(
⋅
)
used on line 160 appears to be missing, which may cause confusion.\nOverall, while these issues do not significantly detract from the quality of the work, addressing them could improve clarity and reader understanding."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The presentation of this paper is clear and well-organized.\n- It addresses an important practical problem: ensuring privacy in treatment effect estimation from sensitive medical data.\n- The proposed DP-CATE framework is highly flexible and model-agnostic, compatible with any doubly robust meta-learner and machine learning model.\n- The authors provide theoretical guarantees for differential privacy while preserving the double robustness property."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel, privacy-preserving approach for estimating the Conditional Average Treatment Effect (CATE), motivated by the need for privacy in electronic health records. The authors propose DP-CATE, a flexible framework that ensures differential privacy while maintaining double robustness in CATE estimation. The framework is offered in two versions: one for finite queries (e.g., treatment effects for specific patient groups) and another for functional queries (releasing the complete CATE function). A key technical innovation lies in calibrating noise using influence functions for finite queries and Gaussian processes for functional queries. The authors provide theoretical privacy guarantees and demonstrate the framework's effectiveness using both synthetic data and real-world medical datasets (MIMIC-III and TCGA)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper does not sufficiently analyze the consistency of the proposed estimators, i.e., whether the estimators remain consistent.\n- The presentation of the identification condition (3) is unclear. The authors should clarify the assumptions (e.g., unconfoundedness) under which the optimizer of (3) represents the true conditional average treatment effect function.\n- Theorem 1 builds upon the work of Avella-Medina (2021). The authors seek an upper bound on $\\zeta$-smooth sensitivity to ensure privacy. However, is this bound tight, and might there be a more optimal bound for $\\zeta$-smooth sensitivity?\n- Could the authors comment on the inclusivity of the two proposed methods? For example, if one generates a functional query and then uses it to answer finite queries, what would be the potential advantages or disadvantages of this approach?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024differentially,\ntitle={Differentially private learners for heterogeneous treatment effects},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1z3SOCwst9},\nnote={under review}\n}"
},
"abstract": {
"value": "Patient data is widely used to estimate heterogeneous treatment effects and understand the effectiveness and safety of drugs. Yet, patient data includes highly\nsensitive information that must be kept private. In this work, we aim to estimate\nthe conditional average treatment effect (CATE) from observational data under\ndifferential privacy. Specifically, we present DP-CATE, a novel framework for\nCATE estimation that is *doubly robust* and ensures *differential privacy* of the estimates. For this, we build upon non-trivial tools from semi-parametric and robust statistics to exploit the connection between privacy and model robustness.\nOur framework is highly general and applies to any two-stage CATE meta-learner\nwith a Neyman-orthogonal loss function. It can be used with all machine learning models employed for nuisance estimation. We further provide an extension\nof DP-CATE where we employ RKHS regression to release the complete doubly\nrobust CATE function while ensuring differential privacy. We demonstrate the effectiveness of DP-CATE across various experiments using synthetic and real-world\ndatasets. To the best of our knowledge, we are the first to provide a framework for\nCATE estimation that is doubly robust and differentially private."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Causality",
"differential privacy",
"treatment effect estimation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/84860c159ecc0e8526d1b68c67ad2acd1aebc282.pdf"
},
"presentation": null,
"primary_area": {
"value": "causal reasoning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Differentially private learners for heterogeneous treatment effects"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1zDOkoZAtl | Towards Meta-Models for Automated Interpretability | main | Active | interpretability;safety;automated interpretability;ai safety;explainability;extraction;tracr;rasp | interpretability and explainable AI | 3;3;3;6 | 3;3;4;4 | 2;3;4;3 | 1;1;2;3 | 2;4;3;3 | 3.75 | 3.5 | 3 | 1.75 | 3 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The most interesting finding of this paper to me is that the meta-model recovers 77% of program on the non-sparse activations test set. It seems like such a strong train/test generalization split. Is there any intuition for why the transformer can generalize in this case? Does this hold in general cases — evaluating on a linear transformation of the input data yielding the same result? It seems too good to be true."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is well written and presented. The work is easy to understand and follow. The related work and limitations sections are good. In particular, most of the limitations I am concerned about are acknowledged in the limitations section, which is great!"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors train a transformer to decompile Tracr programs back into RASP format. They generate a dataset of RASP programs of length 4-8, use Tracr to compile the program into a transformer, and then train a meta-model transformer that takes the compiled transformer weights as input and autoregressively predicts the decompiled RASP program. The authors achieve 60% accuracy on a held out set, can recover a hand-written sorting program not seen during training, and get 77% decompilation accuracy on a variant of the held out set where the compiled models have a linear transformation applied to make their activations nonsparse."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "One big weakness is the limited scope of the experiments. The authors train a transformer on a relatively small dataset of RASP programs The programs found are small, length 4–8, and the accuracy is only 60%. They only train on this dataset, and then report held out accuracy, as well as accuracy on a nonsparse held out set. I would like to see a more thorough evaluation, for example with more program sizes, or testing broader generalization abilities.\n\nAnother weakness is that I don't see any way this approach will feasibly scale to larger programs, or real world transformers. It only works because the data trained on is so small, and because we are compiling the RASP programs to generate the dataset for decompilation.\n\nTo say more, this is a fundamental limitation of this approach. Taking RASP as the domain and transformer weights as the codomain, Tracr is not anywhere close to surjective (if i understand correctly). So, any decompilation meta-model training procedure seems fundamentally unable to work on real world transformer models. This is okay if we just accept that a meta-model decompiler is only useful for Tracr-derived activations. But I don't really see the usefulness of decompiling in this case: Tracr programs are by nature created from a RASP program, so we should already know what the ground truth is. \n\nI think the idea of using meta-models to convert a neural network into a program representation could have potential. However, training a model to do so by means of RASP + Tracr seems fundamentally limited.\n\nEven if I accept this as a research direction, I think the present work could be more thorough in its experiments and insight. As currently presented, there is really only one dataset (the generated one) and two results (the held out set performance and the non-sparse held out set performance). I think there is a higher bar for ICLR than this amount of inquiry into a research area."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "How many tokens are in the meta-model training set?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper touches on a very timely problem, which is attempting to scale mechanistic interpretability by reducing the amount of manual effort required by much of the existing literature.\n\nThe work is, to the best of my knowledge, original. I am not aware of any other works that attempt to automate interpretability by training a model to decode RASP programs (or any other algorithmic representation) directly from transformer weights.\n\nI found the writing to be generally clear. I also appreciated the limitations for being upfront and fairly comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an approach to automated mechanistic interpretability for Transformers by training another Transformer (the meta-model) to decode a RASP program from a given model's weights. The meta-model is trained on random RASP programs compiled to Transformers using Tracr.\n\nThe paper presents two sets of experiments. The first uses random RASP programs of up to 9 operations. The trained meta-model achieves 60% accuracy in extracting the original RASP program. The second experiment focuses on whether a meta model can extract RASP programs from non-sparse weights (since Tracr-compiled Transformers tend to have sparse weights). This is accomplished by transforming the sparse Transformer weights by 1) a random orthogonal matrix and then 2) compressing the hidden dimension via PCA. A meta-model trained on non-sparse Transformers compiled from random programs of length 5 achieves 77% accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concerns is that the experiments are lacking in terms of demonstrating that a meta-model would be able to yield any actual insights for mechanistic interpretability. At best, the experiments have convinced me that a meta-model can invert Tracr compilation with enough data. Although I commend the authors for running the second set of experiments (with artificially dense weights), I think there is still to much of a gap between the dense weights and a \"real\" Transformer for the approach to have been validated.\n\nOne possibility would be to train Transformers using standard gradient descent on algorithmic outputs, then use the trained meta-model to invert them. For instance, rather than use Tracr to compile the RASP program for sorting (as done in the experiments), it would be better to *train* a Transformer to sort using data. I think validating the approach on a small number of Transformers trained (rather than compiled) to perform algorithmic tasks (on the order of 5-10) would be necessary for me to recommend acceptance.\n\nOther concerns:\n- The programs used in the experiments are all rather short, so it remains to be seen if the approach would apply to more complex / realistic domains (natural language, chess / go, or more complex algorithms)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Questions: \n\n- L306—307: “In addition, our meta-models tend to be larger than the base models\nthey are trained on by about a factor of 10-100, which would be prohibitive for very large base models.” Is there enough data to determine the scaling law here? Is the required size linear in the base model (or the compressed base model)? Or superlinear?\n- L310 “We use a black box to interpret a black box.” Have the authors considered applying the meta-model decompiler to itself, and seeing if the resulting RASP program is at all sensible? This would presumably need to be combined with the program-repair scaffolding suggested below to avoid per-token errors accumulating over a length that is 10×—100× the typical program length you used, but a positive result here would again be quite interesting.\n\nComments:\n\n- L229—230 “On this test dataset the decompiler is able to decompile 60% of programs without errors. On a per-token level it achieves an accuracy of 98.3%; a tokenized RASP program typically consist of between 30 and 60 tokens” Have the authors considered augmenting the model with program-repair scaffolding? For example, given an original RASP program $P$ that is Tracr-compiled in to $C$ and decompiled into $P’$, compile $C’$ with Tracr from $P’$ and train an adversarial model to generate possible counter-examples (as suggested in L402—403 “Automated Verification of Interpretations”), train a “repair” model to take both the weights of $C$, the decompiled program $P’$, and the (input, C(input), C’(input)) data, and suggest a new program $P’’$.\n- L351—352: “in one setting fully understanding the exact algorithm implemented by a network (Nanda et al. 2023)”. Nanda et al. 2023 do not fully understand the exact algorithm implemented by the modular arithmetic models; the MLP is left mostly unexplained. Zhong et al. 2023 [1] get closer on a simpler architecture, but even they do not explain how the MLP works. 
The only works I’m aware of that can at present claim to “fully understand the exact algorithm implemented by a network” are [2] and [3].\n- L400—403 “Automated Verification of Interpretations. Can a meta-model be trained to output not only a programmatic description of the base model, but also evidence or proof that this description is accurate? One approach would be to train a meta-model to adversarially suggest examples which might disprove any proposed equivalence between a model and an interpretation.” A simpler starting point would be to prove that the Tracr compilation of the output of decompilation is a close match to the original network. If we conjecture that the activations of one network are linearly probe-able from the other network, we can train a linear probe at all of the relevant points in the network to translate activations back and forth. Then any of the mech interp validation techniques (e.g., in order of increasing rigor: activation patching [4], causal scrubbing [5], or compact proofs [6]) could be applied to establish correspondence. AlphaProof [7] style automation might also be possible.\n\nMinor Comments:\n\n- L92—93: “pred” on the LHS should be “predicate”, right?\n- L243—244: “can be deterministically mapped to RASP code via\na deterministic algorithm.” using “deterministic[ally]” twice seems redundant, unless there’s something deeper going on\n\n[1] Zhong et al. \"The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks.\" *arXiv*, 2023, https://arxiv.org/abs/2306.17844.\n\n[2] Yip et al. “ReLU MLPs Can Compute Numerical Integration: Mechanistic Interpretation of a Non-linear Activation.” *ICML 2024 Mechanistic Interpretability Workshop*, 2024. https://openreview.net/forum?id=rngMb1wDOZ\n\n[3] Wu et al. “Unifying and Verifying Mechanistic Interpretations: A Case Study with Group Operations.” *arXiv*, 2024, https://arxiv.org/abs/2410.07476.\n\n[4] Stefan Heimersheim and Neel Nanda. 
“How to use and interpret activation patching.” *arXiv*, 2024, https://arxiv.org/abs/2404.15255.\n\n[5] Chan et al. \"Causal Scrubbing: a method for rigorously testing interpretability hypotheses.\" AI Alignment Forum, 2022, https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing\n\n[6] Gross et al. “Compact Proofs of Model Performance via Mechanistic Interpretability.” *arXiv*, 2024, https://arxiv.org/abs/2406.11779.\n\n[7] AlphaProof and AlphaGeometry teams. “AI achieves silver-medal standard solving International Mathematical Olympiad problems.” DeepMind Blog, 2024, https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper's main strength lies in demonstrating a novel, systematic approach to automated interpretability that achieves significant results on a large dataset while laying groundwork for future developments in the field. The careful experimental design and clear presentation make the contributions accessible and (hopefully) reproducible.\n\nOriginality:\n\n- Novel approach of using meta-models to automatically extract human-readable programs from neural network weights\n- Integration of Tracr compiler with neural decompilation, effectively “reversing” the compilation process\n- Method for generating large-scale training data by sampling valid RASP programs\n\nQuality:\n\n- Thorough empirical validation with a large dataset (1.6 million programs)\n- Good quantitative results (60% accuracy on full programs, 98.3% token-level accuracy)\n- Clearly presented experimental methodology\n- Efficient dataset generation process (5 seconds per model on CPU)\n- Additional experiments on non-sparse weights\n\nClarity:\n\n- Clear problem formulation and motivation\n- Well-structured presentation of methodology\n- Transparent discussion of limitations and future work\n\nSignificance:\n\n- Addresses the fundamental challenge of scalability and automated discovery in ML interpretability"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the use of meta-models — neural networks that take other networks' parameters as input — for automated interpretability of machine learning models. The authors present a method to train transformers to map neural network weights to human-readable code, effectively creating an automated “decompiler” for neural networks. They demonstrate this approach using Tracr, a compiler that converts RASP programs (a domain-specific language for modeling transformer computations) into transformer weights.\n\nThe main contributions are:\n\n1. Development of rasp-gen, a sampler that generates valid RASP programs, used to create a dataset of 1.6 million RASP programs and corresponding model weights\n2. Training of a transformer meta-model that can recover RASP programs directly from model weights, achieving 60% accuracy for complete program reconstruction and 98.3% token-level accuracy\n3. Demonstration that the trained meta-model can handle out-of-distribution examples, including successfully recovering a hand-written sorting algorithm\n\nThe authors also show their meta-model architecture outperforms previous approaches on related tasks, even when trained with less data. The work serves as a proof-of-concept for using meta-models to automate aspects of mechanistic interpretability, potentially offering a path toward more scalable neural network interpretation methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "By far the biggest weakness of this paper is the limitation to small RASP programs without much indication that the technique should be expected to generalize. Broadly, if I can be convinced that this technique has a reasonable shot of generalizing past compiled toy examples, I would increase my score.\n\nA substantial improvement, for example, would be to train models from scratch to match the behavior of the 1.6 million Tracr-compiled networks (separately: trained to match logits; and trained to match only the predictions / argmax output), and report numbers on decompiling these trained models to RASP programs that match their behavior. Even though there would be no guarantee that the decompiled RASP program would implement the behavior *in the same way* as the trained network, getting a positive signal here would still be a substantial update towards direct meta-models being able to infer general behavior directly from the weights. Even an evaluation on a couple hundred such models could be quite interesting.\n\nMore minor weaknesses:\n\n- The characterization as of the validation on “a hand-written sorting algorithm” as “out-of-distribution with respect to the 1.6 million generated programs we use for training” (L45—47) is misleading. I would not call the sorting algorithm “out-of-distribution” just because it was removed from the training dataset. Unless there is a (relatively natural) axis of variation (for example, length, number of variables, number of operations, number of times any given operation is used) in which the sorting algorithm can be shown to be at least 1σ away from the mean, I think it would be less misleading to say “which is not in the training distribution”. 
(As an analogy, if I sample 1.6 million reals from $\\mathcal{N}(0, 1)$, remove all numbers within $10^{-5}$ of 0.2, and then train a model to learn $x \\mapsto x^2$, I wouldn’t say that 0.2 is “out-of-distribution” for this training.)\n- Section 5 (Related Work) should include at least a brief comparison with SAEs [1] and linear probes [2], both of which can be seen as training a (very simple) model to directly interpret a neural network (albeit from the activations, rather than the weights). [Lack of contextualization with respect to SAEs and linear probes was why I gave a \"3\" for presentation rather than a \"4\".]\n- The paper would benefit from a bit more analysis of the decompilation failures. For example, L229—230 “On a per-token level it achieves an accuracy of 98.3%” suggests that most of the failure comes from accumulation of small errors. I want to know: What is the per-token fraction of the time that the correct answer is in the top two tokens? Top three tokens? Top four tokens? Top five tokens?\n\n[1] Bricken et al., \"Towards Monosemanticity: Decomposing Language Models With Dictionary Learning\", Transformer Circuits Thread, 2023. https://transformer-circuits.pub/2023/monosemantic-features\n\n[2] Guillaume Alain and Yoshua Bengio. “Understanding intermediate layers using linear classifier probes.” *arXiv*, 2016, https://arxiv.org/abs/1610.01644"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* How does the learned meta-model generalize to the actually learned model weights? \n* Or can you train a meta-model using the actually learned model weights?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The interpretability of models is an important problem. The paper is easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper trains transformers to de-compile RASP programs. It trains the meta-model (a transformer) to map transformer weights to RASP programs. It trains on randomly sampled RASP programs (1-9 operators) and evaluates the trained meta-model using i.i.d. samples. Accuracies range from 60% to 80% in various settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The most important concern is that I am not sure if training meta-models to decompile Tracr-compiled RASP programs can help interpret transformers in practice. It first assumes the functions to be learned in practice can be represented by RASP programs (at least the program shouldn't be too long to be covered in the training dataset). It also assumes the learned weights are in-distribution respective to the compilers of the RASP program so that the meta-model can generalize. It then needs to build a giant training dataset towards covering all possible RASP problems and then trains a potentially larger meta-model to learn to decompile. None of the previous assumptions are practical or intuitive to me. \n\nOther concerns are \n* The performance is not impressive. As stated by the authors, reversing Tracr is a relatively easy task at least for categorical inputs. \n* The novelty is mainly limited to learning a transformer to decompile Tracr-compiled RASP programs. \n* Limitations as stated by the authors: (1) Tracr-compiled weights are dissimilar to actually learned ones; (2) unlikely to cover all RASP programs in the training dataset at least using the current sampler; and so on."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Meta-Models for Automated Interpretability},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1zDOkoZAtl},\nnote={under review}\n}"
},
"abstract": {
"value": "Previous work has demonstrated that in some settings, the mechanisms implemented by small neural networks can be reverse-engineered. \nHowever, these efforts rely on human labor that does not easily scale. \nTo investigate a potential avenue towards scalable interpretability, we show it is possible to use \\emph{meta-models}, neural networks that take another network's parameters as input, to learn a mapping from transformer weights to human-readable code.\nWe build on RASP and Tracr to synthetically generate transformer weights that implement known programs, then train a transformer to extract RASP programs from weights. \nOur trained compiler effectively extracts algorithms from model weights, reconstructing a fully correct algorithm 60% of the time."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"interpretability",
"safety",
"automated interpretability",
"ai safety",
"explainability",
"extraction",
"tracr",
"rasp"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f5585f12f08095b8b2f86546d17091c2ea7b56cc.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Towards Meta-Models for Automated Interpretability"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1zgil8py5o | Benchmarking Intelligent LLM Agents for Conversational Data Analysis | main | Active | Conversational Data Analysis;Large Language Models;Benchmark;Multi-agent Environment;Adaptive Interaction Reflection;Decision-making | datasets and benchmarks | 3;5;8 | 4;4;4 | 2;3;3 | 2;3;4 | 1;3;3 | 5.333333 | 4 | 2.666667 | 3 | 2.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How were the conversational modes and action types specifically chosen, and were any other modes considered?\n\n2. What types of errors were most commonly observed in scenarios involving private libraries, and how might future models address these errors?\n\n3. Could human-in-the-loop interventions or feedback improve the realism of conversations, and if so, how would this influence the dataset’s construction costs?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper provides an evaluation framework with diverse conversational scenarios, including scenarios requiring private library handling and complex conversational reasoning.\n\n2. The paper presents an approach to scaling dataset generation cost-effectively, an essential aspect for building future benchmarks.\n\n3. The paper evaluates multiple state-of-the-art LLMs and provides a granular analysis of their performance, highlighting challenges in conversational data analysis that underscore the need for improved LLM capabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Tapilot-Crossing, a benchmark designed to evaluate large language models for conversational data analysis tasks. The benchmark contains 1,024 conversational interactions across four scenarios. A multi-agent environment was developed to create this benchmark, enabling automated and cost-effective data generation. The paper also proposes Adaptive Conversation Reflection (ACR), a self-reflective strategy to help LLMs learn from past interactions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The main weakness is the potential over-reliance on simulated data: the exclusive reliance on simulated agent conversations might not fully capture the unpredictability and diversity of real-world human interactions in data analysis tasks.\n\n2. While the paper introduces different scenarios, it lacks an in-depth justification for the selection of these specific conversational modes and how each addresses unique real-world challenges.\n\n3. The results focus on improvements with ACR but offer limited exploration of failure cases and challenges within Tapilot-Crossing, such as common errors in multi-turn interactions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- L154-160: in this phase human annotators only select a single scenario that sounds the most interesting. What is the agreement between the annotators in choosing the scenario during this phase?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Very well-written and easy to understand with thoughtful explanations of details. \n- Conducted a highly sophisticated design of dataset construction process with rigorous human validations, which strengthens the findings of the paper and the implications to future topics (e.g., beyond tabular data processing scenarios). \n- They also conducted a qualitative analysis to identify error types across all models used, with a reasonable interpretation of the underlying reasons and patterns (as mentioned in the Appendix)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces TAPILOT-CROSSING, a novel benchmark for LLM evaluations in conversational data analysis, inspired by multi-agent LLM environments. Through rigorous human evaluations, the paper improves the reliability of human-AI approaches to constructing such conversational logs of data analyses in several action-focused dataset exploration scenarios. Also, the paper proposes Adaptive Conversation Reflection (ACR) to leverage previous conversation history to guide LLM agents to successful completion of the data analysis tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It appears that generating the 'logic' of the prior conversation trace and incorporating it into the next step of generation is not a novel approach to enhancing LLM reasoning in generative tasks. This ACR method closely resembles existing techniques, such as prompt chaining, ReAct, and self-reflection, in its methodological approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. How do you ensure that the client persona is consistent with the table data (domain)?\n2. What is the definition of 'reasonable scenarios' in human intervention for the dataset? Detailed criteria would help make this work reproducible.\n3. For Line 168, what if the stereotype does not hold for the dataset, making questions unanswerable?\n4. For Line 193, how are the other choices of the multiple-choice questions made?\n5. Could you add the number of turns to the data characteristics?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Synthetic conversation data generation incorporating human-in-the-loop data quality control.\n2. Clear division of different conversation modes to reflect real-world scenarios.\n3. Made use of a multi-agent system to generate a diverse and realistic dataset.\n4. The proposed method (ACR) gives a huge boost over the base LLM approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces TAPILOT-CROSSING, a benchmark for evaluating LLMs on the conversational data analysis task. The dataset is generated with a multi-agent environment, DECISION COMPANY, with necessary human intervention. The authors use human evaluation to show the quality of the dataset and evaluate various LLMs together with the proposed method, Adaptive Conversation Reflection. The experimental results show that current models perform poorly on the task, while ACR gives a 44.5% boost over the base approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The abstract does not mention that conversational data analysis requires tabular data input. In fact, this is a very important input component of this paper. Please consider adding the related explanation.\n2. The definition of conversational data analysis is unclear, and from the Related Work section, other papers' definitions of this task also vary. For example, Wang et al., 2024 explained this task as one user turn and multiple cross-agent turns; Yan et al., 2023 saw this as multiple user-agent turns. Based on the Task Formulation of this paper, sampled table content T is required in the setting. Since the definition of the task is not unified, this paper should explain clearly why a table is required, why the task is defined differently from others, and what distinct aspects of LLMs the task evaluates.\n3. Many terms in the paper are used without definition or explanation, making the paper hard to understand. For example, requirements of client (Line 162), mainstream result types (Line 215), and intents (Line 246) are not mentioned before. Consider adding clear definitions of these terms before using them.\n4. The poor human evaluation result on the dataset before human calibration makes the overall DECISION COMPANY pipeline questionable. The result suggests that humans, not the multi-agent system, are the main contributors to dataset construction.\n5. The baselines used in this paper are too weak. The paper uses the base LLM, CoT, and ReAct. For the code generation task, there are multiple recent advanced methods such as MetaGPT. Furthermore, the proposed method should be tested with various LLMs to show its generalizability."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce Tapilot-Crossing, a new benchmark for evaluating Large Language Models on conversational data analysis, along with an adaptive reflection strategy (ACR) that improves model performance by up to 44.5%."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024benchmarking,\ntitle={Benchmarking Intelligent {LLM} Agents for Conversational Data Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1zgil8py5o},\nnote={under review}\n}"
},
"abstract": {
"value": "Conversational Data Analysis, a collaboration between humans and machines, enables real-time data exploration for informed decision-making. The challenges and costs of collecting realistic conversational logs for data analysis hinder comprehensive quantitative evaluation of Large Language Models (LLMs) in this task. To mitigate this issue, we introduce **Tapilot-Crossing**, a new benchmark to evaluate LLMs on conversational data analysis. **Tapilot-Crossing** contains 1024 conversations, covering 4 practical scenarios: *Normal*, *Action*, *Private*, and *Private Action*. Notably, **Tapilot-Crossing** is constructed by an economical multi-agent environment, **Decision Company**, with few human efforts. This environment ensures efficiency and scalability of generating new conversational data. Our comprehensive study, conducted by data analysis experts, demonstrates that Decision Company is capable of producing diverse and high-quality data, laying the groundwork for efficient data annotation. We evaluate popular and advanced LLMs in **Tapilot-Crossing**, which highlights the challenges of conversational data analysis. Furthermore, we propose **A**daptive **C**onversation **R**eflection (**ACR**), a self-generated reflection strategy that guides LLMs to **learn from successful histories**.\nExperiments demonstrate that **ACR** can evolve LLMs into effective conversational data analysis agents, achieving a relative performance improvement of up to 44.5%."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Conversational Data Analysis",
"Large Language Models",
"Benchmark",
"Multi-agent Environment",
"Adaptive Interaction Reflection",
"Decision-making"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/10944b107c36e16d0b1947e0ca98408058a2f299.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/d5c9efa3126171916d5a03091851d49cf7044ecc.zip"
},
"title": {
"value": "Benchmarking Intelligent LLM Agents for Conversational Data Analysis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ziPqVsDLc | Revealing the Unseen: Guiding Personalized Diffusion Models to Expose Training Data | main | Active | Diffusion Models;Data Extraction;Few-shot Fine-tuning;Copyright Protection;Trustworthy AI;Security | alignment, fairness, safety, privacy, and societal considerations | 5;5;6;6 | 3;3;3;3 | 2;3;3;2 | 3;3;3;2 | 3;3;2;3 | 5.5 | 3 | 2.5 | 2.75 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you provide more intuition on eq (3)? For example, besides the effect of training iterations on $\\lambda$, how might $\\lambda$ be affected by the size of the fine-tuning data and their similarities?\n2. What is the extraction accuracy with increasing fine-tuning data $N_{0}$?\n3. Can you conduct more analyses on the impact of different combinations of DMs and training data (artistic styles vs. objects)? It seems their performance is quite different between table 1 and table 2."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well-written with clear presentation. For example, the method sections on model guidance and the text-guidance extension are presented in a way that is easy to follow, and further details on caption extraction are elaborated in the appendix. The figures and tables are easy to parse and convey the main results.\n- The experiments are quite comprehensive, with ablation studies on several important hyperparameters concerning models and datasets.\n- It is also valuable that the paper presents some defenses against the proposed method and discusses the implications of these results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a model guidance-based approach to fine-tuning data recovery, where it leverages the denoising prediction from pre-trained models $\\epsilon_{\\theta}$ to extrapolate the predictions from fine-tuned models $\\epsilon_{\\theta^{'}}$. The resulting denoiser can be used to sample data from a distribution similar to the fine-tuned data distribution $q$. The authors further extend this to text-guided diffusion models. After the denoising steps, a clustering algorithm is applied to further improve the extraction accuracy. The experiments demonstrate improved average similarity and average extraction accuracy of extracted images compared to text-to-image and CFG with clustering as baselines. Further ablation studies were conducted to understand the impact of the number of training images $N_0$ and generated images $N$, as well as model hyperparameters such as guidance scale $w'$ and correction term scale $k$. Results on possible defenses against the proposed method were also presented."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main weaknesses are the task setup and the significance of results:\n- For task setup, the paper seems to address a relatively clean form of fine-tuned models, whereas in real-world settings the pre-trained models might not always be available (or sometimes come with noisy labels), and in many cases, the fine-tuned model could be a mixture of multiple fine-tuned data distributions and pre-trained models. I wonder how the proposed method could handle such more realistic scenarios.\n- The main experiments are conducted on a relatively small test dataset that consists of 20 artist models (10 images per model) and 30 object models (4-6 images per model), making the significance of results hard to judge. Moreover, the improvements over the two selected baselines are noticeable (table 1). When increasing $N_{0}$, the performance drops significantly (figure 4a); the method does not work very well on models fine-tuned on a larger number of images. These results suggest room for improvement, which is needed for this work to address real-world applications."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed.",
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see \"Weaknesses\"."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is overall clearly written and easy to follow.\n2. The paper focuses on extracting fine-tuning data from diffusion models' checkpoints. This research topic has not received attention before.\n3. The code is available."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel framework, FineXtract, to extract fine-tuning data from fine-tuned diffusion models' checkpoints. The authors approximate the distribution learned during the fine-tuning process of diffusion models and use it to guide the generation process toward high-probability regions of the fine-tuned data distribution. In addition, a clustering algorithm is proposed to extract images visually close to the fine-tuning datasets. Experimental results on checkpoints fine-tuned on various datasets with various diffusion models verify the effectiveness of the proposed FineXtract."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experiments in the paper are not solid enough; more experiments are needed to verify that the proposed method works. The paper only chooses \"Direct Text2img+Clustering\" and \"Classifier-Free Guidance + Clustering\" as baselines. I think these two methods are only ablation counterparts. It would be better to compare the proposed method with other related works on extracting training/fine-tuning data.\n2. The proposed method seems sensitive to the guidance scale $w'$ and correction term $k$. Deciding these hyper-parameters in practice might be challenging.\n3. I am somewhat skeptical about the necessity of developing a dedicated method specifically for extracting images from the fine-tuning phase. It seems feasible to simply apply existing methods for extracting training images directly on the fine-tuned checkpoint, then filter out the results that overlap with images extracted from the pretrained model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tThe focus on extracting fine-tuning data is indeed interesting. This focus reveals a new perspective on privacy concerns, which could enhance the security of diffusion models and preserve the privacy of data owners.\n2.\tThe experiments are also conducted on checkpoints from a real-world platform, i.e., Hugging Face, demonstrating the practical effectiveness of the proposed method.\n3.\tThe paper is generally well-written, with a clear structure that is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the data extraction problem of diffusion model, particularly focusing on the fine-tuning data. The authors use the parametric approximation of the distribution shift between the original model and fine-tuned model as guidance, to generate the fine-tuning data. Experiments across different diffusion models on various datasets and real-world checkpoints from huggingface demonstrate the effectiveness of proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe performance of the proposed method keeps decreasing with the growth of training data. Will the growth of class number have the same effect? How can this issue be mitigated in practice, especially given the vast volume of training data used for industry diffusion models?\n2.\tThe performance under LoRA fine-tuning is noticeably worse. Does this suggest that the proposed method is less effective for parameter-efficient tuning? \n3.\tThe effectiveness of the proposed method is significantly diminished when being attacked. The authors state that \"transformations render ... images largely unusable.\" Could you provide statistics on the extent of unusability? To what degree does the attacker lose model utility to achieve the attack performance reported in Table 3?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "This paper is both interesting and innovative. However, there are some weaknesses that need to be addressed. Please refer to Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper is well-written.\n2. This framework can be applied to both unconditional and conditional DMs.\n3. The result is significant, highlighting the potential risks of data leakage."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposed a framework FineXtract, which exploits the transition from the pre-trained DM distribution to the fine-tuning data distribution to accurately guide the generation process to the high-probability region of the fine-tuning data distribution, thereby achieving successful data extraction. Experiments on multiple datasets and real-world checkpoints, highlight the potential risks of data leakage and provide strong evidence for copyright infringement."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In Sec. 5.2, the performance of the baselines under various $N$ and $N_0$ deserves further discussion.\n2. The reason why the correction term scale $k$ performs better in the negative case, which is inconsistent with its motivation, needs further analysis.\n3. It is questionable whether using PCA to extract important signals from multi-word prompts is feasible when $W$ is large.\n4. There are some symbol errors; for example, the second $\\epsilon_{0,77}$ in Appendix A.3 should be $\\epsilon_{1,77}$."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Extrapolated guidance from pretrained to fine-tuned DMs enables strong fine-tuning data extraction."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024revealing,\ntitle={Revealing the Unseen: Guiding Personalized Diffusion Models to Expose Training Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1ziPqVsDLc},\nnote={under review}\n}"
},
"abstract": {
"value": "Diffusion Models (DMs) have evolved into advanced image generation tools, especially for few-shot fine-tuning where a pretrained DM is fine-tuned on a small set of images to capture specific styles or objects. Many people upload these personalized checkpoints online, fostering communities such as Civitai and HuggingFace. However, model owners may overlook the potential risks of data leakage by releasing their fine-tuned checkpoints. Moreover, concerns regarding copyright violations arise when unauthorized data is used during fine-tuning. In this paper, we ask: *“Can training data be extracted from these fine-tuned DMs shared online?”* A successful extraction would present not only data leakage threats but also offer tangible evidence of copyright infringement. To answer this, we propose FineXtract, a framework for extracting fine-tuning data. Our method approximates fine-tuning as a gradual shift in the model's learned distribution---from the original pretrained DM toward the fine-tuning data. By extrapolating the models before and after fine-tuning, we guide the generation toward high-probability regions within the fine-tuned data distribution. We then apply a clustering algorithm to extract the most probable images from those generated using this extrapolated guidance. Experiments on DMs fine-tuned with datasets such as WikiArt, DreamBooth, and real-world checkpoints posted online validate the effectiveness of our method, extracting approximately 20\\% of fine-tuning data in most cases, significantly surpassing baseline performance. The code is available at an anonymous link."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Diffusion Models",
"Data Extraction",
"Few-shot Fine-tuning",
"Copyright Protection",
"Trustworthy AI",
"Security"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bd25933522e7743655741d38c60ffea7f40fd3fe.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Revealing the Unseen: Guiding Personalized Diffusion Models to Expose Training Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1zuJZ1jGvT | Offline Reinforcement Learning with Closed-loop Policy Evaluation and Diffusion World-Model Adaptation | main | Active | reinforcement learning;offline reinforcement learning;model-based reinforcement learning;diffusion model | reinforcement learning | 3;3;5;6 | 4;3;4;3 | 2;2;4;3 | 2;3;3;3 | 1;2;2;3 | 4.25 | 3.5 | 2.75 | 2.75 | 2 | -0.19245 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I like the idea of using IS to adapt the world model but the reasoning is not exactly clear to me. Usually, we want to use IS to estimate some variable under an unknown distribution by reweighing it by some other known distribution. However, in this case, we can estimate both distributions very well. If my understanding is correct, can you motivate the use of IS more?\n2. From my understanding, IS is a poor technique to use when the two (policy) distributions are very different and that is a completely plausible scenario in your problem setting. Can you explain how you avoid this? Furthermore, have you considered alternative techniques such as MCMC?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method is novel, well-motivated and clearly explained.\n2. Theoretical results to support claims of bounding the return gap between world model and environment.\n3. Strong results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work attempts to address the important issue of policy distribution shift in offline RL. The authors propose a novel method which uses a world model as a synthetic data generator for closed-loop policy evaluation, where the world model is adapted via importance sampling to the changing distribution of the learned policy. The proposed method is supported by theoretical bounds on the return gap and shows impressive performance on key offline tasks with suboptimal offline datasets.\n\n\\* Note that while this paper does fall squarely within my expertise, I was not able to give it the time it deserves and that is reflected in my confidence score."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While I like the method proposed by the paper, I don't see a clear dependency on diffusion. From my understanding, the method can be generalized to any world model with some form of uncertainty estimation. As such, I think the paper would be stronger if the method is generalized to any world model. However, I would also be satisfied by an ablation comparing diffusion to other world model types, and clearly showing why diffusion is necessary for the level of performance presented.\n2. The paper can be made shorter and more concise to improve readability. I believe Section 3 is mostly unnecessary. You introduce the full notation and background of diffusion but barely sue it in the main text. I recommend shortening it significantly (possibly including it in Section 4) and leaving the full notation and explanation in the appendix as it is necessary for the proofs.\n3. In Section 4.1, you state that \"Introducing $s_{t+1}$ as an extra input significantly improves accuracy of reward prediction [...]\". This is atypical in world model literature and I would recommend backing up this claim with an ablation. Terminating based on high uncertainty is also new to me and a great suggestion! I would also love to see an ablation of this as this changes the distribution of your trajectories significantly. \n4. The main results in Table 1 are poorly presented. I would recommend replacing the table with aggregated statistics as suggested by [(Agarwal et al, 2022)](https://arxiv.org/abs/2108.13264). Table 2 can also be bundled into this figure.\n\n\nMinor remarks:\n1. I found Figure 1 insufficient to grasp the proposed method. I was only able to grasp it after I read the full work at which point the figure has little value. I recommend using a simpler graph with only a few data points and simpler text annotations within the figure itself. (b3) is unnecessary, only showing the (b1) -> (b2) will improve readability.\n2. 
Figure 2 can also be made more self-explanatory and independent. The two replay buffers are confusing in the figure as they aren't sufficiently explained. The diffusion steps do not necessarily need to be visualized. You probably don't need to spell out each variable being sampled from the buffers. I would also suggest changing the blue box to 'Policy Evaluation within World Model'.\n3. Line 228, (I assume) missing 'Section 3'.\n4. Line 279, unclear what the loss $l$ is at this point of reading the text.\n5. Figure 3, keep y axis the same between all subfigures. Move legend under figures. Possibly remove 'random' as it does not add to the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Q1: Can you elaborate on assumption 4.3 and why this is a reasonable assumption to make? \nQ2: Can you elaborate on what parts of section 4.3 are claimed to be novel in this work and which parts are taken from previous work? \nQ3: Can you elaborate on what part of the theory is specific to your algorithm and which parts you believe are generally true for all approximate models? \nQ4: Can you elaborate on why you chose standard deviation as a measure of dispersion?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
        "value": "I would like to preface this review by saying that I am not an expert in offline model-based RL. However, I am very familiar with the general online RL and theoretical RL landscape.\n\n1. Clarity \na) The language in the paper is largely clear and the text has a clear through-line that the reader can follow. \nb) The visualizations in Figures 1 and 2 are helpful for understanding the approach.\n\n2. Related work \na) From what I can tell, the work cites the most prominent approaches in offline (model-based) RL and provides a reasonable amount of related work to differentiate its contribution from prior art.\n\n3. Novelty \na) Based on my (incomplete) knowledge, the idea to constrain a model based on the distribution shift of the policy based on offline data only seems novel enough to warrant publication. However, other reviewers may have more insight into this than I do.\n\n4. Experiments \na) The experiments are conducted with a sizable number of baselines to demonstrate the capabilities. I do think the experiments demonstrate that there might be benefits of the method in lower quality data regimes. However, I have several things that need to be addressed before I can make a certain statement about this. I will come to them later."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper operates in the setting of offline reinforcement learning. The paper proposes a new approach that uses an uncertainty-penalized diffusion model as a world model. This world model is used to update the offline policy by constraining a standard SAC's actions via uncertainty estimates. The world model is updated using importance sampling to address the distribution shift of the trained policy from the behavioral policy over time. The paper provides a theoretical analysis of error bounds as well as an experimental section highlighting the approach's performance in comparison with recent offline methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "1. Mathematical rigor and theory \na) It is a well-known fact in the learning theory literature that approximate models are sufficient to obtain high return. Analysis on this goes as far back as [1] which can easily be adjusted to fixed distributions. Given that the paper states that this analysis is based on prior work, it is unclear to me what is being claimed to be novel here. There might be subtleties I am missing due to unclarity of notation which I will outline next. \nb) In equation 5, and all following similar expectations, it is not clear what $s_t$ is sampled from. This is quite important given that we are talking about distribution shifts and without this notation being precise it is difficult to determine the correctness. It is also not clear to me what an expectation over $\\mathcal{M}$ means which seems to be a set. \nc) In equation 6, the TV distance is ill-defined since $R(s_t, a_t)$ is not a distribution and there seem to be no assumptions on this function anywhere else. \nd) It is unclear to me why assumption 4.3 is reasonable. I will ask for clarification. \ne) Theorem statements should generally be concise, but they should also be self-contained. In order to understand Theorem 4.5, one would have to read large parts of the paper just to understand the notation. I recommend adjusting this as needed for readability. The provided proof is also not a proof, but it looks more like a sketch. I recommend stating it as a sketch and referring to the full proof. \n\n2. Experiments \na) Experiments over 5 seeds in reinforcement learning can often be misleading given the high variance. \nb) Tables 1 and 2 have the maximum average bolded. This can be misleading as the reader might think these methods are superior, as it is not uncommon to bold statistically significant results rather than max averages. I recommend the manuscript be switched to the latter to avoid confusion. 
\nc) To address the previous point, it is necessary to report variance measures for all baselines and not just the presented algorithm. That should in general always be the case. In Table 1, all favorable results on the med-exp dataset are within variance of another approach, at least one of the favorable results of med-rep is within variance of another approach, and it is unclear how many of the other results are significant. In Table 2, at least 5/6 results seem to be within variance. Thus, the claim that the provided algorithm outperforms existing SOTA algorithms is not well supported. \nd) The paper does not provide any additional analysis besides best returns on D4RL and as a result, it is not clear when I should use this method as the results on lower quality datasets are not completely consistent. This makes things tricky because many of the other results may not be significant. One way to remedy the fact that the results are not necessarily much stronger in many cases would be to provide analysis as to *when* this method helps. This could include an experiment that validates the claims about lower distribution shift error or an ablation on the properties of the datasets on which the approach works well. \n\nOverall, I think this paper offers a neat new idea that can provide insights into how to build purely offline RL algorithms. However, I believe the theoretical section is the weakest part of the submission, and the paper might benefit from this section being shortened. Further, precise notation is required should the authors intend to keep this section. The experiment section could be strengthened by additional analysis that helps understand when this method is useful. I do not think the claim that this method outperforms SOTA algorithms is sufficiently supported. I do think that the paper provides an interesting new idea but in the current state I am recommending rejection.\n\n[1] Michael J Kearns and Satinder P Singh. 
Finite-sample convergence rates for q-learning and indirect algorithms. NeurIPS 1999."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "**Important questions**\n\n- How does the performance of ADEPT compare to past works that utilise diffusion models as world models for policy training on D4RL (namely DWM (Ding et al., 2024) or PGD (Jackson et al., 2024))?\n- Are the importance-sampled updates more important to final performance in environments other than halfcheetah?\n\n**Less important questions**\n\n- Where are the uncertainty intervals for the baselines in Tables 1 and 2?\n- You set $H$ to 5; how does performance change with different values of $H$? \n\nIf the authors answer the important questions I'm more than happy to update my score."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The experimental results are strong. Most of the required baselines (see weaknesses) are implemented and the proposed method outperforms them in aggregate.\n- The theoretical analysis is additive, and as far as I can tell, sound.\n- The paper is generally well-written and well-motivated.\n- The appendix provides some interesting additional results that support the main body."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a model-based offline RL algorithm leveraging diffusion models. Unlike past works that use pre-trained, frozen diffusion models for generating synthetic rollouts, this work proposes to iteratively align the model's predictions with the evolving policy during training. A theoretical analysis is provided that provides an upper bound on the return gap between an optimal policy trained inside the diffusion model versus the real environment, and experiments are performed on D4RL that show an improvement over canonical offline RL methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "Though the results are impressive, my primary concern is that the method appears to be very similar to that proposed in [1,2]. It seems the key difference from prior works is the proposed mechanism for realigning the diffusion model's predictions with the changing policy during training. However, when this mechanism is ablated in Figure 3, it appears to only significantly improve performance in one of the four datasets on halfcheetah (medium-replay). The reader would be able to better understand the performance gains expected from this method w.r.t. the methods from [1,2] if they were implemented as baselines, but unfortunately they are not. A more thorough comparison of the authors' proposed method with those from [1,2] would improve the paper, and leave the reader more confident that their proposals are a concrete step forward from these works. \n\n**Minor feedback**\n\n- Lines 120-121: Ha & Schmidhuber's paper introduced world models for online RL, not offline RL\n- Section 3 title should be \"Preliminaries\"\n- Line 151: \"Agent [that] acts...\"\n- Line 153: transits -> transitions\n- Line 161: \"real dynamics $P$\"?\n- Line 272: collects -> collected\n- Section 5 title should be \"Experiments\"\n- Line 490 there should be text following _i.e._\n- Missing closed brackets in Equations 7 and 8\n- Missing brackets in Equation 12\n- Line 398: \"practically\"\n\n**References**\n\n[1] Zihan Ding, Amy Zhang, Yuandong Tian, and Qinqing Zheng. Diffusion world model. arXiv preprint\narXiv:2402.03570, 2024.\n\n[2] Matthew Thomas Jackson, Michael Tryfan Matthews, Cong Lu, Benjamin Ellis, Shimon Whiteson,\nand Jakob Foerster. Policy-guided diffusion. arXiv preprint arXiv:2404.06356, 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I am concerned about the referencing used by the authors in this paper. In the appendix (B.1 Baselines) without any detail, there is a long list of cited works (12 papers) which seem to hold no relevance to the paper and all have some degree of common authorship. These are not mentioned elsewhere in the paper and are cited without explanation. This is done in line 903 of the paper:\n\n> In addition, we cite to the following works. (Zhang et al., 2024b; Zou et al., 2024; Gao et al., 2024; Zhang et al., 2024a; Wáng et al., 2023; Fang et al., 2022; 2023; Zhou et al., 2022; 2023; Mei et al., 2023; Chen et al., 2023; 2021)\n\nObserving some of these papers, it is clear that they have no relevance to this paper. For example, one of the cited papers is (line 679):\n\n> Yì Xiáng J Wáng, Zhi-Hui Lu, Jason CS Leung, Ze-Yu Fang, and Timothy CY Kwok. Osteoporotic-\nlike vertebral fracture with less than 20% height loss is associated with increased further vertebral\nfracture risk in older women: the mros and msos (hong kong) year-18 follow-up radiograph results.\nQuantitative Imaging in Medicine and Surgery, 13(2):1115, 2023.\n\nBesides the above, a large amount of the papers are about reinforcement learning but in completely different areas to this work (such as MARL). It would be one thing if these papers had been motivated with explanation, and as such had some link to the research at hand. However, their means of introduction and the way they are buried in a subsection of the appendix makes me believe that this is a case of academic dishonesty; in particular, this is an example of self-citation to boost the author's citation count rather than being relevant to the paper, and also risks affecting the paper's anonymity."
},
"flag_for_ethics_review": {
"value": [
"Yes, Research integrity issues (e.g., plagiarism, dual submission)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
        "value": "- It seems a bit counterintuitive/non-obvious to set your done flag when uncertainty reaches a limit. Does this effectively just lead to truncated rollouts? What kind of bias does this introduce?\n- I don't quite understand the assumption of line 340, that you can omit one of the inputs in the proof. To my understanding the reward model comes after the denoising model, and as such would not intrinsically be able to ignore $\\hat{s}_{t+1}$. Is this not correct?\n- How were hyperparameters for the underlying learning method tuned? It says they were consistent for different methods - were they tuned for ADEPT or for one of the base algorithms, or are they taking default values?\n- How does this method compare to other approaches which compensate for overestimation bias? For example, how does it compare against policy guided diffusion ([1] above)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
        "value": "In general, I find the work reports good results and, while in places hard to parse, has writing of a reasonable standard. I raise a number of strengths for this paper below.\n\n- I find the related work has generally good coverage over crucial areas, besides what I would class as a significant error in description for [1] (below) and an omission.\n- I like the approach of penalising overestimation by relying on the uncertainty of the world model itself. This is quite elegant.\n- Using importance sampling to account for distribution shift is also intuitive (though based on the ablation has a relatively minimal impact on performance). I guess it just makes things theoretically sounder.\n- I am grateful to see an ablation study, which I think is very informative. However, I think making claims about the significance of importance sampling is hard, given that reported values often fall within each other's confidence intervals. The ablation study does clearly suggest that the uncertainty penalty in ADEPT contributes to performance improvement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper deals with synthetic data generation in offline RL using diffusion models. Their principal contribution revolves around two methodological changes: introducing a penalty in the reward calculation for uncertain states, and employing importance sampling to account for policy distribution shift. They motivate their design decisions theoretically, and demonstrate strong performance in D4RL, a standard offline RL benchmark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "I have a number of strong concerns about this paper.\n\n- I found figure 1 quite confusing, and personally think that if a figure needs such a long caption the figure is probably quite unclear. For instance, I don't follow what (b3) demonstrates. It seems like finding a way to show this without showing all the data points etc., for visual purposes, might make things clearer.\n- In related work, stating that [1] doesn't 'provide explicit mechanisms to solve the distributional shift issue' is **fundamentally false** - that is the entire and explicit basis of the method. Besides this, I found the related work relatively thorough; one other relevant work would be [2].\n- I found the description of the architecture hard to interpret. I would clarify that the MLP predicts the reward when the architecture is first introduced. Similarly, the way the inputs are introduced ('we replace $\\mathbf{x}_0$ with ...') was a bit confusing and could be worded better.\n- Despite spending a long time with it, and attempting to verify the appendix proofs, I found I had a tough time with this maths and didn't find it intuitive to follow. It is also not made clear to me how the derivations in the paper lead to the reward gap guarantee at the top. Note I am not coming from a theoretical background, but imagine that others might also find this difficult.\n- It feels this should be compared to other existing methods for compensating for overestimation bias. The key baselines here should be comparing the same policy optimisation algorithm with different approaches for reducing overestimation bias. This is not what is shown in this paper.\n- There are no error bars for any of the results besides ADEPT's, making it hard to see overlapping of confidence intervals.\n- It is unclear whether the error bars report standard deviation or standard error. In table 2, the caption reads 'we show the standard deviation of the performance... 
Note that the standard *error* is usually large...'\n- I feel it is important to raise the significant issue of referencing in this paper as a weakness as well as in my ethics review. Buried in the appendix (line 903) there are 12 cited papers, many with common authorship and none with relevance to this paper or explanation. Either these papers are relevant to this work, and should be raised as related work with explanation, or they are not and thus should not be included. I assume these papers should not be included in this paper, but if they should be, describing **why** is important.\n\n\nThere are also a small number of minor points and typos to highlight:\n- In line 37, stating that offline datasets typically consist of limited transitions is tautological; the offline dataset can't be infinite by nature.\n- In line 44, the world model does not interact with the policy! The policy interacts with the world model.\n- The acronym of the algorithm (ADEPT) does not fit its name at all really.\n- Defining in line 181 that $\\hat{P}$ is the transition distribution defined by the world model would be worthwhile.\n- Line 190 'is a certain various schedule' doesn't make sense and I am not sure what it is meant to say.\n- Line 190 'the diffusion model define another' - firstly, this should be 'defines another'. 
Secondly, the diffusion model is composed of a forward and backward process, rather than this being defined by the diffusion model itself.\n- Line 199 does not make sense.\n- The first sentence of the methodology does not make sense.\n- Line 346: I don't really know what this means - what is the discrepancy?\n- Line 357: 'of between $\\pi$' should not have 'of'\n- Line 381: 'there existing $\\delta$' should read 'there exists $\\delta$'\n- Line 398: 'piratically' is not the correct word I assume.\n- Line 411: I don't know what $H$ is and I don't think it is defined.\n- In table 2, all bolded values are within a standard error of each other, so bolding, which implies significant improvement, is misleading.\n\n\n[1] Policy Guided Diffusion, Jackson et al. 2024\n[2] World Models via Policy-Guided Trajectory Diffusion, Rigter et al. 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
        "value": "This paper proposes a new model-based offline RL algorithm, ADEPT, adopting an uncertainty-penalized diffusion world model and importance-sampled world model adaptation, with theoretical analysis and experimental results demonstrated."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024offline,\ntitle={Offline Reinforcement Learning with Closed-loop Policy Evaluation and Diffusion World-Model Adaptation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1zuJZ1jGvT},\nnote={under review}\n}"
},
"abstract": {
"value": "Generative models, particularly diffusion models, have been utilized as world models in offline reinforcement learning (RL) to generate synthetic data, enhancing policy learning efficiency. Current approaches either train diffusion models once before policy learning begins or rely on online interactions for alignment. In this paper, we propose a novel offline RL algorithm, Adaptive Diffusion World Model for Policy Evaluation (ADEPT), which integrates closed-loop policy evaluation with world model adaptation. It employs an uncertainty-penalized diffusion model to iteratively interact with the target policy for evaluation. The uncertainty of the world model is estimated by comparing the output generated with different noises, which is then used to constrain out-of-distribution actions. During policy training, the diffusion model performs importance-sampled updates to progressively align with the evolving policy. We analyze the performance of the proposed method and provide an upper bound on the return gap between our method and the real environment under an optimal policy. The results shed light on various key factors affecting learning performance. Evaluations on the D4RL benchmark demonstrate significant improvement over state-of-the-art baselines, especially when only sub-optimal demonstrations are available -- thus requiring improved alignment between the world model and offline policy evaluation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"reinforcement learning",
"offline reinforcement learning",
"model-based reinforcement learning",
"diffusion model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/febf2c0d26abc654b1de3055d063946dd9d7bcfa.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/fc7d733920ca70158e6c6ea182a0c65cbac95ddb.zip"
},
"title": {
"value": "Offline Reinforcement Learning with Closed-loop Policy Evaluation and Diffusion World-Model Adaptation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
204sPiwBbB | Learning from others' mistakes: Finetuning machine translation models with span-level error annotations | main | Active | machine translation;finetuning;fine-grained annotations;language model | applications to computer vision, audio, language, and other modalities | 3;3;5;8 | 3;3;5;4 | 2;2;2;4 | 2;2;2;3 | 3;2;3;4 | 4.75 | 3.75 | 2.5 | 2.25 | 3 | 0.552532 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Questions: \n1. What was the motivation to include a DPO baseline? \n2. [Clarification] in the SFT baseline, does the finetuning of the base model involve training with MT triples from the MQM data (without annotations)?\n3. Were there any discussions about evaluation on span-based MT metrics like XCOMET (https://arxiv.org/abs/2310.10482) or GEMBA MQM (https://arxiv.org/abs/2310.13988)?\n\n\n\nSuggestions:\n1. Please include a few more qualitative examples in the Appendix.\n2. Please release the code/path to corresponding data after the process.\n3. While there is still no consensus about the quality of translations produced by LLMs, it would be useful to add a comment about the extension of this work to LLMs in the discussion section.\n4. To get an idea of the effectiveness of this work with contemporary works, it may be useful to report the performance of a few MT models submitted to the WMT'23 shared tasks (where the outputs are already available)"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
        "value": "The proposed method is indeed novel - the use of MQM annotations to train better MT systems is quite understudied and this work improves on that. \n\nThe method looks fairly extensible to other tasks where span-level annotations are already available.\n\nThe design of the span-based loss function carefully considers the potential pitfalls of its inclusion and incorporates additional loss terms to mitigate them."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This work proposes a new method called Training with Annotations (TWA) that leverages MT evaluation annotation data to improve the quality of machine translation systems. High-quality MT evaluation consists of annotation of errors at the span level per example. TWA essentially uses these annotations to create an additional span-level loss while trying to keep the standard training signal on tokens outside the error spans. \nThe baselines consist of supervised fine-tuning approaches and DPO-based models. The experiments are carried out on two language pairs and sufficient ablation studies are conducted."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "The proposed experiments have been evaluated on high-resource languages. MQM-based data is available for Indic languages (https://aclanthology.org/2023.acl-long.795/), African languages (https://aclanthology.org/2024.naacl-long.334/) as well as previous editions of the Quality Estimation Shared Tasks. Evaluation on a mix of languages with different resource levels can strengthen the contribution of this work.\n\nNot a serious concern with regard to the content of this work, but the proposed method is only extensible to language pairs/tasks where such annotated data is already available. Future work could indicate potential ways of including synthetic data/alternatives when such high quality annotations are not available."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I would appreciate clarification on the questions raised in the weaknesses section. Additionally, please let me know if there are any other aspects I may have overlooked that could address areas of confusion."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper is easy to follow.\n* It motivates exploration of learning from detailed, fine-grained signals.\n* The discussion on the importance of allowing the model to learn which tokens in an error span should be penalized is clear and well-motivated. The experiment supporting this claim is appropriately designed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work investigates improving machine translation performance through fine-grained, span-level human-crafted annotations. It introduces a hybrid training loss that treats error spans as negative partial samples, ignores tokens after the first error, and considers tokens before the first error as positive samples. This fine-tuning approach is termed TWA. TWA is then compared against a set of newly proposed baselines, demonstrating outstanding performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Training Data Overlap: MetricX-23 is fine-tuned on MQM WMT’20-’21, and TWA is also trained on this dataset. This overlap suggests that evaluation might leak into training, disqualifying MetricX-23 as an evaluation metric in this setup.\n* Motivation for Task: The task is not well-motivated. Obtaining fine-grained annotations is costly, and it’s unclear why methods are needed to utilize this type of supervision. Although this is discussed in the third paragraph of the conclusion, it comes too late (it is better to be addressed in the Introduction), and the motivation largely pertains to other tasks that might benefit from TWA techniques. This raises the question: why focus on machine translation instead of these other tasks?\n* Choice of Offline Learning: It’s not well-explained why offline learning is favored over RL-based models. Efficiency might be one reason, which could benefit from further discussion and experimental analysis.\n* Design Choice Clarity: The design choice mentioned in footnote 1 on page 4 lacks adequate explanation.\n* Evaluation Choices: The choices of evaluation metrics and experimental designs are not well-justified.\n* Statistical Analysis in Section 6.1: The statistical test mentioned in Section 6.1 lacks detail. It’s unclear what test is used or how it’s conducted. More clarity here would improve the reader's understanding, especially when there is only one instance of each model.\n* Baseline Selection: The baselines are loosely defined. While there are efforts to select the best variant of DPO, the approaches cited as baselines remain relatively simple and open to criticism. For example, why not consider weighted sampling instead of TWA-seq, or use erroneous samples as negative samples instead of Filter + SFT? Similarly, why not adopt a weighted contrastive learning approach rather than DPO? Additionally, it raises questions as to why RL-based methods are excluded as baselines. 
Moreover, for baselines that do not require fine-grained supervision, other larger and less costly datasets could be leveraged. Restricting models with fewer training data limitations to the same dataset may be unfair.\n* Impact of Ignoring Off-Trajectory Tokens: The observation that ignoring off-trajectory tokens benefits one translation path while impairing another needs further exploration, even though it’s noted as a topic for future work. Given that ignoring these tokens is presented as a critical step—likely essential for En->De to outperform baselines—it would be beneficial to discuss this more thoroughly. Experiments across more translation paths might shed light on this factor’s impact. Additional analysis to identify the underlying reasons is necessary.\n* Further Elaboration on Observation in Sec. 6.3: Observation mentioned in Sec. 6.3 would benefit from additional elaboration.\n* Experiment in Figure 2: The experiment illustrated in Figure 2 highlights the importance of allowing the model to learn which tokens within an error span should be penalized. While the presentation is intuitive, including more statistical evidence and quantitative analysis would strengthen this point.\n* Expansion of Translation Paths and Metrics: It’s suggested to test additional translation paths and incorporate more evaluation metrics, as the two currently provided are not strongly correlated.\n* Marginal Performance Gap with References: In the setup that utilizes References, the performance gap between TWA and other baselines is minimal. A stability test could help substantiate the claims more effectively.\n* Minor Weaknesses:\n * Line 064: “Training with Annotations (TWA)” is repeated (the abbreviation alone suffices) and is incorrectly linked.\n * Lines 124-126: Missing a verb, rendering the sentence incomplete.\n * Unaddressed Observations on TWA: TWA’s performance lagging behind DPO in one experiment is not addressed in the analysis."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How efficient is TWA training compared with SFT?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The authors utilize existing data annotations and show it’s helpful to train machine learning systems.\n2. The authors compare their method with DPO."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose to train machine learning systems with error annotations, where both the reference translation and a poor translation are given. Here, the poor translation is annotated by humans, indicating which spans of the text are wrong (and how wrong it is). The authors propose to use an unlikelihood loss to discourage the model to generate tokens in the error spans."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Novelty. The main novelty of this work is utilizing additional annotations to improve translation systems, which is not surprising. Otherwise, the proposed unlikelihood training is straightforward.\n2. Getting annotations is costly. The authors propose to utilize existing annotations, which is scarce. Although in the limited data setting, the proposed method is better than DPO, it’s likely that DPO is still much better in terms of annotation cost.\n3. Relatively weak experimentation. The authors only evaluated two translation directions in one dataset, which may be below standard practice of translation papers."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Table 3, the main baseline I wanna see is applying SFT on reference data, while it is missing. I am speculating that the gain of this method is mainly from removing noise (which might even exist in the filtered submission data) based on human annotation. If so, the mechanism of success is far away from guiding translation based on negative signals. To resolve this concern, could you show some results of SFT on reference?\n\n2. As mentioned in Weakness-3, could you show some results of Table-3 when using a stronger model? E.g., apply TWA on M2M100 or NLLB. \n\n3. In lines 201-202, does it simply indicate truncating loss to the first error token? If so, some loss truncation methods could be compared, like https://arxiv.org/pdf/2407.02208\n\n4. In Table 1 and Table 3, the scores for the base model are not aligned? Could you explain a little bit about my misunderstanding?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The writing is clear and easy to follow.\n2. MQM shows fine-grained error annotation. Exploring and leveraging MQM data for MT training is interesting. Also, it might inspire some research in optimizing translation towards human preferences.\n3. Positive results in two language directions under their settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a fine-grain loss to penalty error during supervised fine-tuning (SFT) for machine translation. The main contribution is to take the annotation of the MQM dataset (fine-grained, human-label translation errors) as both positive and negative supervision during SFT at the token level. The main results of this paper are compared with those of using DPO and SFT in two language directions, EN-DE and ZH-EN, showing some improvements in their setting. Also, ablation studies clearly show the difference among multiple variants of their methods. \n\nThe writing is clear and easy to follow. The method, to some extent, might inspire some developments in nowadays optimization toward human preference. However, I hold some concerns with their motivation and evaluation, see the weakness part."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. MQM data is hard to largely achieve: Compared to other MT-eval annotation data at the sentence level, like DA, MQM data shows more detailed human evaluation. However, it is also hard to largely achieve (even for the DA dataset, it only covers 10+ languages and hundreds k samples through years of work done by WMT). \n\n2. Feasibility aside, if we only focus on the generality of this technique, this method is hard to generalize to other domains, like QA, as it is hard to say that span annotation also applies to QA data collection.\n\n3. The baseline is not strong: 1) The baseline model leg behind the average performance of WMT submission quite a lot. 2) In Table 3, the SFT setting improves results a lot. This gain from SFT is weird if their base model is strong. It would be much better if they could simply increase the model size and clean data for base model training. \n\nSuggestions:\n1. Since DPO and SFT are concepts from the LLM community, it would be beneficial to show results on LLM-based MT. (I don't believe it's essential.)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning from others' mistakes: Finetuning machine translation models with span-level error annotations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=204sPiwBbB},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite growing interest in incorporating feedback to improve language models, most efforts focus only on sequence-level annotations. In this work, we explore the potential of utilizing fine-grained span-level annotations from offline datasets to improve model quality. We develop a simple finetuning algorithm, called Training with Annotations (TWA), to directly train machine translation models on such annotated data. TWA utilizes targeted span-level error information while also flexibly learning what to penalize within a span. Moreover, TWA considers the overall trajectory of a sequence when deciding which non-error spans to utilize as positive signals. Experiments on English-German and Chinese-English machine translation show that TWA outperforms baselines such as Supervised Finetuning on sequences filtered for quality and Direct Preference Optimization on pairs constructed from the same data."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"machine translation",
"finetuning",
"fine-grained annotations",
"language model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1c6f2e00a4431a50a8643477ac9d55e290f38c1b.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Learning from others' mistakes: Finetuning machine translation models with span-level error annotations"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
20mMK8UlFh | One-step Noisy Label Mitigation | main | Active | noisy labels;image-text matching;cross-modal matching;multimodal learning;image classification;noisy correspondences | applications to computer vision, audio, language, and other modalities | 5;5;5;5 | 3;5;3;3 | 2;2;3;2 | 2;2;2;2 | 3;1;3;2 | 5 | 3.5 | 2.25 | 2 | 2.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See **Weaknesses**."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper introduces a novel, model-agnostic method called One-step AntiNoise (OSA) that addresses the issue of noisy labels in a cost-efficient manner, which is an advancement over existing noise mitigation techniques.\n\n- It provides a theoretical framework that explains the stability and precision of the decision boundary in high-dimensional spaces, offering insights into why and how the proposed method works effectively.\n\n- The paper backs up its claims with empirical evidences, demonstrating OSA's superiority across various benchmarks, models, and tasks, which strengthens the credibility of the proposed method.\n\n- The paper shows that OSA introduces minimal additional training time compared to standard training methods, making it suitable for real-world applications.\n\n- The paper demonstrates that OSA is not only effective in standard noise settings but also exhibits strong task transferability and model adaptability, making it a versatile solution applicable to a wide range of scenarios."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the challenge of mitigating the detrimental effects of noisy labels in large-scale pre-training tasks, where obtaining entirely clean data is often impractical. The authors propose a model-agnostic approach called One-step AntiNoise (OSA), which utilizes an estimator model and a scoring function to assess noise levels through single-step inference, significantly reducing computational costs. OSA leverages high-dimensional orthogonality to establish a robust boundary for separating clean and noisy samples, demonstrating enhanced training robustness, improved task transferability, and ease of deployment across various benchmarks and models. The paper provides a theoretical framework explaining the stability of the decision boundary and conducts comprehensive experiments to validate the method's effectiveness and efficiency. The authors conclude that OSA is a novel solution for noise mitigation in practical large-scale training scenarios, with code available for reproducibility."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Could the authors provide further insights into the design of the scoring function (Equation 5)? Specifically, what is the value of $\\beta$ across different datasets and models, and how does the sensitivity of $\\beta$ impact performance?\n\n- Regarding Table 4, is it possible to generalize OSA for multi-class classification?\n\n- In Figure 2, is it clear whether the estimator remains fixed or is updated during training? In other words, do the estimator and the target model share the same weights?\n\n- Could the authors include time statistics for more methods in Table 7? Specifically, how is the time recorded? Since convergence time can vary among different methods, it is important to also consider this aspect."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- I recommend discussing the relationships with previous works using pretrained models e.g.[1].\n\nAhn, S., Kim, S., Ko, J., & Yun, S. Y. (2023). Fine tuning pre trained models for robustness under noisy labels. *arXiv preprint arXiv:2310.17668*."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This work is well-written, from the challenges and motivation to the theoretical analysis and method design.\n- This paper focuses on an extended scenario from traditional classification tasks to image-text matching task.\n- The proposed method also considers the computation consumptions. The efficiency analysis shows its huge potential in practical applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a model-agnostic noise mitigation paradigm for the limitations of current noisy label approaches. It leverages cosine similarity measures to distinguish between noisy and clean samples efficiently. It shows robustness across various real-world noisy benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The contribution of the one-step property is weakened due to the common sense that the pre-trained model performs well in distinguishing noise samples because the noisy samples do not damage it. Training a robust model from scratch from noisy datasets is more challenging and attracts more attention.\n- I suggest authors conduct more experiments on noise types and noise rates especially extreme noise rates.\n- I recommend experiments performed on different scoring functions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The problem studied in this paper is very important, especially in the current era when large models are so popular. \n2. Though it has been discussed and proposed before, it is reasonable to use additional models to help with sample selection and reweighting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the very practical problem of noisy labels in the dataset. The authors propose a sample weighting mechanism based on pre-trained models, especially visual language models such as CLIP."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper is difficult to read, primarily because it lacks a clear problem definition section. For instance, while I understand the intuitive idea, how is \"cleanness\" mathematically defined? Should we assume that \\( x \\) and \\( y \\) come from a shifted joint distribution? Then, how is the noisy distribution structured, and what type of noise is being used? Additionally, what does the noise ratio in the experiments represent? For instance, in the COCO dataset, did you randomly replace a proportion of captions? This lack of clarity also makes it hard for me to understand the significance of Theorem 1 and the related analysis. \n\n2. The paper lacks a discussion of important related literature. I would list a few representative methods in learning with noisy labels community:\n - *[1]* DivideMix: Learning with Noisy Labels as Semi-supervised Learning.\n \n And sample selection methods based on feature space, which are more relevant to this work:\n - *[2]* Multi-Objective Interpolation Training for Robustness to Label Noise\n - *[3]* FINE Samples for Learning with Noisy Labels\n\n There are also papers that use the CLIP model:\n - *[4]* CLIPCleaner: Cleaning Noisy Labels with CLIP\n - *[5]* Vision-Language Models are Strong Noisy Label Detectors\n - *[6]* Combating Label Noise With A General Surrogate Model For Sample Selection\n\n (*Some of these references may be considered concurrent work; The authors are suggested to discuss these papers in the future version.*)\n\nIn summary, the method presented in this paper essentially leverages a large pre-trained vision-language model to identify potentially correct samples and exclude likely incorrect ones. As I understand it, the method could be effectively explained within lines 253-258 alone, yet the presentation is overly complex. The authors need to restructure the manuscript to clarify the paper's contribution and explicitly compare it with relevant work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The paper does not seem to describe whether the backbone in the experiment was randomly initialized or trained. As I understand it, the estimator is a trained CLIP and the backbone for the baselines is also a trained CLIP. Is this correct?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The writing and presentation of the paper is clear.\n2. The boundary principle analysis of the paper is instructive.\n3. The experiments in this paper are detailed and show validity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on how to mitigate noisy labels, in particular noisy cross-modal matching. Specifically, the authors first use a pre-trained model, such as CLIP, to determine whether a data is noisy, and then give less weight to the noisy label during the training process. And they pointed out that the orthogonal boundary separates the clean and noisy sides. The authors conducted experiments on different tasks and datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I think it would be better if the authors emphasised in the title or elsewhere that the proposed work focuses primarily on noisy cross-modal matching. Otherwise it could be confusing. For example, the authors claim that other methods cause additional computational overhead. However, papers cited by the authors in related work, such as [1], do not incur additional overhead; rather, the proposed work causes additional overhead.\n2. The paper doesn't seem to describe how big the CLIP is as an Estimator. If the author uses a trained maximum CLIP as an Estimator, then of course there will be a performance boost because it is a strong model. That doesn't seem fair to the baselines.\n3. An approach that relies on trained large models does not seem very interesting. And regarding Eq. 5, the authors do not provide a theoretical analysis.\n\n[1] Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels, NeurIPS 2018"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A low-cost model-agnostic noise mitigation paradigm with simple deployment for multiple tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024onestep,\ntitle={One-step Noisy Label Mitigation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=20mMK8UlFh},\nnote={under review}\n}"
},
"abstract": {
"value": "Mitigating the detrimental effects of noisy labels on the training process has become increasingly critical, as obtaining entirely clean or human-annotated samples for large-scale pre-training tasks is often impractical. Nonetheless, existing noise mitigation methods often encounter limitations in practical applications due to their task-specific design, model dependency, and significant computational overhead. In this work, we exploit the properties of high-dimensional orthogonality to identify a robust and effective boundary in cone space for separating clean and noisy samples. Building on this, we propose One-step Anti-Noise (OSA), a model-agnostic noisy label mitigation paradigm that employs an estimator model and a scoring function to assess the noise level of input pairs through just one-step inference, a cost-efficient process. We empirically demonstrate the superiority of OSA, highlighting its enhanced training robustness, improved task transferability, ease of deployment, and reduced computational costs across various benchmarks, models, and tasks. Our code is released at https://anonymous.4open.science/r/CLIP_OSN-E86C."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"noisy labels",
"image-text matching",
"cross-modal matching",
"multimodal learning",
"image classification",
"noisy correspondences"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f55521bc59c41cfa76477fa3757eb5f1b3621d8a.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "One-step Noisy Label Mitigation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
20qZK2T7fa | Neuroplastic Expansion in Deep Reinforcement Learning | main | Active | Loss of Plasticity;Primacy Bias;Deep Reinforcement Learning;Continual RL | reinforcement learning | 3;3;5;5 | 4;5;5;3 | 2;2;3;2 | 2;3;3;2 | 1;2;2;3 | 4 | 4.25 | 2.25 | 2.5 | 2 | -0.301511 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Previous work, such as Plasticity Injection[1], has already proposed dynamically growing an agent's network, please provide a detailed comparison.\n\n2.Dynamically expanding the size of a neural network could potentially lead to policy instability. For instance, the policy before and after expansion might be inconsistent. However, the results reported in Figure 6 appear very stable. Please provide specific analyses and ablation studies demonstrating how NE maintains policy stability during network expansion. \n\n3.It would be helpful if the authors could provide experiments or analyses to explain the impact of dead neurons during training. Do these neurons store explored knowledge that contributes positively to the learning process, or do they have a negative effect on training? Please provide some analyses and visualizations to illustrate their impact on the learning process.\n\n[1]Deep Reinforcement Learning with Plasticity Injection."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.The paper is well-written.\n\n2.The concept of Neuroplastic Expansion (NE) is well-motivated."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel idea, Neuroplastic Expansion (NE), to address the problem of plasticity loss in reinforcement learning (RL). The paper is well-written and presents the concept clearly. However, there are some concerns, particularly regarding its contribution relative to existing work. If these concerns can be resolved, I would consider improving the rating from 5 to 6."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "See questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why is it called “Elastic” Neuron Generation? What is elastic exactly here?\n2. I’m not sure how experience review reduces learning instability. I’m not convinced by the revisiting-is-useful argument. Without experience review, sampling is iid, so temporally old samples are still revisited. Why is revisiting old samples with higher probability useful, particularly when the dormant neuron ratio is high?\n3. To get the gradients to decide which weight to regenerate, you need to use the fully expanded network and backpropagate everything, then find the top k weights that do not exist in the actual network. Is that correct? If so, then this metric is inaccurate because it adds all weights that take part in backpropagation. The accurate way is to add one weight at a time and backpropagate gradients each time. This is, of course, very expensive since you need to have $N$ additional forward and backward passes. In contrast, the process you described makes some approximations that are not clearly presented. \n4. Are both pruning and regeneration neuron/unit-based?\n5. In section 5.2, the environments have different action spaces; how did you handle that?\n6. The authors stated that resetting was deemed the most effective approach. But no references are given (line 467).\n7. In Figure 8, why are there spikes in the activated neurons ratio?\n8. What is meant by removing the difference in fitting ability in lines 217-218?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors present a novel approach that improves plasticity for deep reinforcement learning methods. The approach seems effective and achieves better performance than many existing methods, such as layer normalization, ReDo, and plasticity injection in many environments. The authors provided an extensive experimental study of their method in different environments (MuJoCO Gym and DMC) and with different learning algorithms (DrQ and TD3)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new approach to maintaining plasticity for deep reinforcement learning methods based on intuitions about cortical cortex expansion in cognitive science. The approach includes three components: 1) neuron regeneration, 2) dormant neuron pruning, and experience review. While neuron regeneration and dormant neuron pruning parts help maintain plasticity, the experience review reduces instability due to high plasticity. The authors test the effectiveness of their approach and its components in various RL environments and compare it against other baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper significantly lacks mathematical rigor. Here are some examples that are representative of these inaccuracies, although they don’t constitute an exhaustive list:\n\n - It should be $\\breve{\\theta} \\subset \\theta$ not $\\breve{\\theta} \\in \\theta$. Or more precisely, $\\breve{\\theta}_l \\subset \\theta_l, \\forall l \\in \\\\{1,...,N\\\\}$, where $N$ is the number of layers in the network.\n\n - In line 213, $\\mathbb{I_{grow}}$ is not defined well. It should be a list, but you assign it with two random quantities added together so it looks like a vector or a scaler instead of a set. Additionally, how is the random function defined? The random function should output a set, which you then need to union with the other set, $\\mathbb{I_{grow}}= RandomSet1 \\cup RandomSet2 $ not $\\mathbb{I_{grow}}= RandomSet1 + RandomSet2$. A complete, rigorous mathematical description is expected.\n - It should be $ArgTopK$ not $TopK$ in equation 2 and line 256\n - what does this mean to write $\\mathbb{I_{prune}}= f(\\theta_i) \\leq 0$? The left side should be a list, and the right-hand side should be an inequality. It should be something like $\\mathbb{I_{prune}} = \\\\{\\texttt{index}(\\theta_i) | f(\\theta_i) \\leq 0 \\\\}$.\n - what does it mean to have a dormant ratio of negative in line 301? The ratio possible values are in $[0,1]$.\n- The paper presentation and writing are not clear. \n - The authors claim NE maintains a high level of elastic neurons (see line 60), but no definition of what elastic neuron is given. Is elasticity here something different from plasticity? How do we measure either of them?\n - The term plasticity is loosely used to represent activated neuron ratio (e.g., Figure 6f). A clear definition of what plasticity means should be presented. 
If plasticity is the activated neuron ratio, then the paper's approach does not actually address the loss of plasticity as claimed since in all figures where the activated neuron ratio is presented, we see a decrease in their percentage with the paper's approach, similar to other methods. \n - The algorithm is not complete. For example, the cosine annealing scheduler is missing, and the experience review part is not clearly shown. Additionally, Algorithm 1 works on the weight level, but the description from the text talks about neuron-level regeneration and pruning. The algorithm needs to reflect that.\n - Since the authors depend on the sparse network training framework as part of their approach. They should fully explain what the sparse network training framework is in writing and in the algorithm.\n - The process of experience review is not clear. The fluctuation rate of dormant neurons $\\nabla f$ is a function of each unit, but the authors talk about some aggregate quantity. Is that new quantity a summation of all units in the network, $\\nabla f = \\sum_i f_i$? Why isn’t this part of the algorithm?\n - Since the authors chose to rely on the activated neuron ratio (1-dormant neuron ratio), equation 1 needs to reflect that, currently, the definition is mentioned in-text, whereas it should be highlighted in equation 1 instead of dormant neuron ratio, which the authors do not really use.\n- Some claims in the paper are not supported by evidence.\n - The paper overclaims what their approach can address. The authors mention that their approach mitigates loss of plasticity primacy bias, reduces catastrophic forgetting, and strikes a stability-plasticity balance. Most of these claims are not supported by evidence. 
Using those terms loosely without being precise about what is being studied in an experiment makes the paper hard to navigate.\n - For example: “topology growth can effectively alleviate neuron deactivation and thus maintain the ability of policy learning to mitigate the loss of plasticity and alleviate the primacy bias.”--- It's unclear how the experiment shows loss of plasticity or primacy bias mitigation. The authors should instead only claim that their approach reduces the dormant neuron ratio and not claim anything about loss of plasticity or primacy bias.\n - The current ablation is not sufficient. Ideally, the authors should remove each component of the system: 1) neuron regeneration, 2) experience review, and 3) dormant neuron pruning. The authors did 1 and 2 but not 3. We need to know what happens if we remove dormant neuron pruning.\n- Issues in empirical evaluation:\n - Many figures do not have labels on the axes, so it is hard to know (even after careful investigation) what is being varied. For example, the x-axis in Figure 5 has no label, and I don’t know what 0 to 3 means here. Other examples include but are not limited to Figure 2 (missing x-axis label), Figure 4 (what is the score in the y-axis), and Figure 6 (missing y-axis label).\n - The results are not statistically significant. A very low number of independent runs (7 runs) are used, and they have overlapping error bars in most figures. More independent runs are needed, especially since the error bars are overlapping. I suggest the authors run each algorithm for 30 independent runs in all of their experiments.\n - In section 5.2, a fixed number of episodes is used in each task, whereas a fixed number of steps should be used to have consistent amount of experience in each task. \n\n**Minor issues:**\n\n- The author defines the gradient as $L_t$. Then the sentence after that says it’s $\\nabla L_t$. \n- The name of the approach is not very representative of what the algorithm does. 
It’s called neuroplastic expansion, emphasizing the expansion part. A better name, such as neuroplastic regeneration and pruning, can be more representative and accurate.\n\n\n \n \n \n\nOverall, I believe this paper could serve as a good algorithmic contribution to the community if the authors addressed my concerns based on this feedback. So, I’m willing to increase my score given that 1) the authors tuned down the claims and made them modest such that they accurately reflect what is being studied by their experiments, 2) the authors fixed all mathematical inaccuracies and provided a completed algorithm, 3) the terminologies are used carefully precisely instead of loosely, and 4) the empirical work is improved through more independent runs and improved figures."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. In Figure 5, at which point does the model reach its maximum capacity (i.e., no more room for growing unless pruned)?\n2. In Figure 6, why does Plasticity Injection fail even before injecting in Walker2d and HalfCheetah? Before injection, shouldn’t they be equivalent to vanilla TD3?\n3. In the 'Dormant Neuron Pruning' section, the expression $Clip(a,b,c)$ is confusing to read without any definition.\n4. The dynamic threshold defined in paragraph in 'Neuron Consolidation' as a whole is too confusing to read. I don't think $\\nabla f(\\theta)$ is the right definition, since it's not a derivative w.r.t. the dormant ratio, but rather an average change rate of the dormant neuron count. Another thing I want to make sure is that $\\nabla f(\\theta)$ is used as the dynamic threshold $\\epsilon$, right? It's not clearly stated in the text."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The core idea of NE seems very promising in terms of lifelong learning: add new capacity to learn new information, remove useless/dead neurons, and prevent catastrophic forgetting. Connection to biology is also a big plus.\n2. The necessity for each component was well explained (Figure 2,3,4). I found it especially interesting to see a proof of catastrophic forgetting in a Mujoco task."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents Neuroplastic Expansion (NE), a generally applicable training scheme designed to mitigate plasticity loss in RL. NE comprises of three components:\n\n1. Adding neuron connections based on potential gradients (elastic neuron generation)\n2. Pruning neuron connections based on dormant ratio (dormant neuron pruning)\n3. A training batch sampling scheme that focuses on early samples depending on dormant ratio fluctuation (neuron consolidation)\n\nCompared to prior methods such as Reset, ReDo and Plasticity Injection, NE showed superior performance in state-based Mujoco tasks (with TD3) and several pixel-based DMC tasks (with DrQ). NE was also able to maintain plasticity while sequentially training through multiple environments in a cyclic manner. Its plasticity — measured by dormant ratio — is well preserved in the majority of the experiments, proving its effectiveness in maintaining trainability and preventing loss of plasticity."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing is sometimes not detailed enough and causes confusion (see questions and weaknesses below).\n2. Some crucial design choices are not well justified and/or validated.\n 1. Neuron consolidation is proposed to prevent catastrophic forgetting, which often occurs late stage (as shown in Figure 4). However, the dynamic threshold they use to control the strength of consolidation plateaus to its lowest value (strongest consolidation) even before halfway through the training process (Figure 5). This discrepancy raises question on whether this complexity is really necessary, especially since a simple time-dependent scheduling scheme could also fit the justification of ‘not forgetting early state-action distribution’.\n 2. The amount of pruned dormant neurons are forced to be less than amount of added connections in order to guarantee that the network is increased in size. This also looks like an unnecessary detail since we can achieve the same by using ReDo and then growing a small number of neurons. I think there should be an explanation on why this design choice is essential.\n 3. Although less critical, some components of elastic neuron generation also needs more careful consideration, such as the cosine annealing schedule (especially since RigL [1] was not primarily designed for continual learning).\n3. The experimental setup needs more refinement.\n - For the main experiments, although it’s convincing that NE surpasses prior works in Mujoco and DMC tasks, it would be nice to see whether NE is also effective in more challenging environments.\n - The recently proposed NaP [2] is an extremely competitive method for continual learning, and I think it’s a crucial baseline in the main experiment.\n - The cycling Mujoco experiment (Figure 9) is plotted on ‘episodes’ (and is the only one). 
This is problematic, since different lengths of episodes would result in different number of update steps and thus varying degree of plasticity loss.\n - It would have been nice to see whether NE can synergize with other methods such as CReLU or LayerNorm.\n\nOverall, I think this paper needs more improvements before it can be published. However, I am also ready to change my scores if some of the above concerns are refuted/addressed.\n\n[1] Rigging the Lottery: Making All Tickets Winners., Evci et al., ICML 2020.\n\n[2] Normalization and effective learning rates in reinforcement learning., Lyle et al., arXiv."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "n/a"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* How is the network reinitialized based on the growth criterion? Is it initialized with random weights similar to the initial stage?\n* If reinitialization involves random weights, how does this approach effectively reduce dormant neurons, especially considering that large feature values after extended training might lead to immediate pruning of reinitialized neurons.\n* Is the Experience Review technique effective across all experimental setups, or is its efficacy primarily validated only in specific environments like HalfCheetah? \n* PThe performance curves indicate that the dynamic actor maintains a stable plasticity rate similar to the static one. Why does the dynamic actor perform better despite this similarity in plasticity rates?\n* For a more comprehensive evaluation, should the authors include additional baselines such as the base TD3 and TD3 with only network growth (without pruning) to isolate the effects of different components of NE.\n* What constitutes a valid starting sparsity rate for NE?\n* What are the optimal rates for growth and pruning, and how do these rates influence overall performance? An analysis of hyperparameter sensitivity would provide deeper insights into NE's robustness and adaptability."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* **Novelty:** The paper introduces a novel idea inspired by human brain mechanisms, specifically cortical expansion, to address plasticity loss in deep RL. This biologically motivated approach offers a novel perspective for significant advancements in continual learning for artificial agents.\n* **Good architectural design:** Neuroplastic Expansion (NE) is meticulously designed to balance network growth with resource efficiency. By adding elastic neurons based on gradient potential and recycling dormant neurons, NE maintains network expressivity and adaptability without causing uncontrolled growth in network size."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the critical issue of plasticity loss in deep reinforcement learning (deep RL), where agents' adaptability decreases over time, hindering continuous learning in dynamic environments. Inspired by biological neural networks, the authors propose Neuroplastic Expansion (NE), a novel mechanism that dynamically enlarges the neural network by adding elastic neurons based on gradient potential. NE maintains high plasticity by regenerating and recycling dormant neurons, effectively mitigating the plasticity-stability dilemma inherent in deep RL."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* **Missing Relevant Work:**\n * Plasticity-Related Studies: The paper overlooks several relevant studies on neural network plasticity ([1]-[5]).\n * Cortical Expansion Citations: The discussion on cortical cortex expansion cites works that describe patterns of cortical expansion. I think the authors miss foundational studies that first identified evidence of cortical expansion.\n* **Experimental Setup**:\n * The evaluation of MuJoCo environments is limited to the TD3 algorithm, which is considered outdated. Assessing NE using more recent and robust algorithms such as TD7, TD-MPC2, or BRO would enhance the relevance and robustness of the findings.\n * Other than Mujoco, I think it is beneficial to test in the state-based DMC, maybe by trying to compare with reset-based methods under identical experimental configurations as primacy bias paper. \n\n[1] On warm-starting neural network training., Ash et al, 2020. \n\n[2] A study on the plasticity of neural networks., Berariu et al, 2021. \n\n[3] PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning., Lee et al, 2023. \n\n[4] Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks., Lee et al, 2024. \n\n[5] Normalization and effective learning rates in reinforcement learning., Lyle et al, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce Neuroplastic Expansion, a novel method that mitigates plasticity loss in deep RL, which outperforms strong baselines in maintaining adaptability and enhancing performance across various standard and continual RL tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024neuroplastic,\ntitle={Neuroplastic Expansion in Deep Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=20qZK2T7fa},\nnote={under review}\n}"
},
"abstract": {
"value": "The loss of plasticity in learning agents, analogous to the solidification of neural pathways in biological brains, significantly impedes learning and adaptation in reinforcement learning due to its non-stationary nature. To address this fundamental challenge, we propose a novel approach, *Neuroplastic Expansion* (NE), inspired by cortical expansion in cognitive science. NE maintains learnability and adaptability throughout the entire training process by dynamically growing the network from a smaller initial size to its full dimension. Our method is designed with three key components: (1) elastic neuron generation based on potential gradients, (2) dormant neuron pruning to optimize network expressivity, and (3) neuron consolidation via experience review to strike a balance in the plasticity-stability dilemma. Extensive experiments demonstrate that NE effectively mitigates plasticity loss and outperforms state-of-the-art methods across various tasks in MuJoCo and DeepMind Control Suite environments. NE enables more adaptive learning in complex, dynamic environments, which represents a crucial step towards transitioning deep reinforcement learning from static, one-time training paradigms to more flexible, continually adapting models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Loss of Plasticity",
"Primacy Bias",
"Deep Reinforcement Learning",
"Continual RL"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/978b276de4b5a548b334e0ce5ea040d8493f68fc.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Neuroplastic Expansion in Deep Reinforcement Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
21rSeWJHPF | Balanced Ranking with Relative Centrality: A multi-core periphery perspective | main | Active | Ranking algorithms;community structure;clustering;balanced ranking;centrality measures | learning on graphs and other geometries & topologies | 3;5;5;5 | 3;5;4;3 | 2;3;2;2 | 2;2;2;2 | 2;3;3;3 | 4.5 | 3.75 | 2.25 | 2 | 2.75 | 0.522233 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could you provide further clarifications on W1 and W2 listed above?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "S1. The paper aims to promote balancedness in nodes’ centrality ranking, using community detection as a concrete application scenario. I find this focus interesting.\n\nS2. The paper proposes a multi-core-periphery structure with communities (MCPC) to quantify unbalancedness in centrality measures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper is motivated by the observation that traditional ranking algorithms can produce unbalanced rankings, and it aims to promote balancedness in centrality estimation. It first defines the concept of relative centrality and then proposes an iterative, graph-dependent local normalization of the centrality score. Empirical studies are provided to demonstrate the effectiveness of the proposed concepts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. The illustrative example in Figure 3 is unclear to me. The blue nodes in Figure 3(a) have more in-neighbors in Figure 3(b), but the out-degrees of the in-neighbors are also larger than those in Figure 3(b). Can we trivially conclude that the blue nodes in Figure 3(b) have smaller PageRank scores than those in Figure 3(a)?\n\nW2. Following W1, if the answer is no, the core idea of defining the MCPC structure requires further clarification. Otherwise, the advantages of MCMC over traditional centrality measures (e.g., PageRank) seem marginal.\n\nW3. The paper does not theoretically demonstrate the superiority of clusters detected by the proposed method over previous approaches, which limits the paper’s contributions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does the computational complexity of the proposed relative centrality algorithms compare to traditional centrality measures?\n\n2. Can the MCPC structure and relative centrality concepts be extended to undirected graphs or weighted networks?\n\n3. How does the performance of the proposed methods change as the number of communities in the graph increases?\n\n4. How does the proposed method handle dynamic or temporal networks where the structure may change over time?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper introduces the concept of \"relative centrality\" and proposes a new structural assumption called Multi-Core Periphery with Communities (MCPC), which combines community structure and core-periphery structure.\n\n2. The paper provides theoretical analysis of their proposed methods, including proofs of unbalancedness with MCPC structure and how their relative centrality approach overcomes this issue.\n\n3. The paper demonstrates the usefulness of their balanced ranking algorithm on real-world data, specifically in improving the inference of community structure in single-cell RNA sequencing data.\n\n4. The authors compare their method against several popular centrality measures and provide extensive simulations on real-world datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a new approach for achieving balanced rankings in graphs that have community structures. It addresses the problem of unbalanced rankings produced by traditional centrality measures. The authors introduce a structural concept called Multi-Core Periphery with Communities (MCPC), which combines both community and core-periphery structures. They propose \"relative centrality\" and develop a ranking algorithm that produces more balanced results than common centrality methods. The paper includes a theoretical analysis of ranking imbalances with MCPC structure and shows how their relative centrality approach resolves this issue. The paper demonstrates that their method improves clustering accuracy while achieving greater ranking balance compared to existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper focuses primarily on directed graphs, which may limit the applicability of the methods to certain types of networks.\n\n2. While the authors mention some existing work on multi-core structures, they don't provide a comprehensive comparison with these methods.\n\n3. The paper briefly addresses the computational complexity of M-Rank for k-regular directed graphs in Section 3.2, but lacks analysis for other approaches, such as N2-Rank and RN-Rank. Providing additional clarification or a more comprehensive complexity analysis, especially for larger or irregular graphs, would enhance the paper's practical relevance for large-scale network applications.\n\n4. Although the authors tested their method on 11 diverse single-cell datasets, these datasets are relatively small—only the TM dataset reaches 54K data points, with others below 16K. The superior results on Onion approach on the TM dataset in Table 2 raise questions about the scalability of the MCPC method on larger datasets. Besides, the PR for Onion is higher (.98) than RN-Rank (.87), yet RN-Rank is incorrectly highlighted in bold, which should be corrected. Evaluating the method on more larger datasets (e.g., millions of data points) could strengthen the paper’s contribution.\n\n5. While the PR scores are high for RN-Rank and N2-Rank, the Purity metric is consistently lower than traditional centrality measures across most datasets in Table 2. The paper would benefit from a more in-depth discussion of this trade-off between Preservation Ratio and Purity, including potential ways to improve Purity scores while maintaining a high Preservation Ratio."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Ranking is often used in recommender systems. The authors point this out in the first sentence of the introduction. Why did they not compare relative centrality for recommending nodes instead of using it for community detection?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper has a limitations section. Kudos to the authors for being honest about the technical limitations of their study."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper argues against global centrality measures such as PageRank for ranking nodes and suggests using relative centrality instead. As the name suggests, relative centrality measures centrality of a node relative to its neighborhood. The paper shows that relative centrality on Louvain community detection algorithm produces better clusters (as measured by preservation ratio of top 20% points and purity score)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I have no objection to adding another centrality measure to the long list of node centrality measures. However, the results would have been more convincing if the experiments had been conducted for recommender systems rather than for community detection.\n\n- The authors may find these references related to their work:\n\nSotiris Tsioutsiouliklis, Evaggelia Pitoura, Panayiotis Tsaparas, Ilias Kleftakis, and Nikos Mamoulis. 2021. Fairness-Aware PageRank. In Proceedings of the Web Conference, pp. 3815–3826. https://doi.org/10.1145/3442381.3450065\n\nKijung Shin, Tina Eliassi-Rad, Christos Faloutsos. 2016. CoreScope: Graph Mining Using k-Core Analysis - Patterns, Anomalies and Algorithms. In Proceedings of the IEEE International Conference on Data Mining, pp. 469-478. https://ieeexplore.ieee.org/document/7837871\n\n- The captions for Figures 10 to 20 should be more informative. As is, they only list the name of the dataset."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could you provide more justifications for the relevance of this work to the ICLR community? For example, you can give some relevant papers published in ICLR or similar conferences/journals and add discussions to the paper.\n2. The quantities in Table 1 are magical to me. Could you explain them?\n3. Could you explain \"single-cell RNA seq data\" in more detail? Why do you focus on directed graphs as stated in Line 80? What about for undirected graphs?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The goal of computing balanced ranking for vertices on graphs is a meaningful and interesting problem.\n2. The proposed assumption of multi-core-periphery structure is a natural combination of the community structure and the core-periphery structure, and the intuitions are conveyed nicely through Figures 2 and 3.\n3. The proposed methods and conducted experiments are generally described in detail."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the task of **unsupervised ranking on graphs** and aims to generate **balanced ranking**, where the top-ranked nodes contain a reasonable fraction of nodes from each community on the graph.\n\nThe authors propose a novel notion called **relative centrality**, which better preserves balancedness across different communities. Based on relative centrality, the authors propose several new approaches to iteratively update the centrality scores, which can be subsequently used for node ranking and graph clustering.\n\nOn the other hand, the authors propose a novel structural assumption for the underlying graphs, called **multi-core-periphery with communities (MCPC)**. Based on this, the authors define a stochastic block model and show that typical centralities are unbalanced under this model. Finally, experiments on 11 single-cell datasets are conducted to show that the proposed methods achieve higher balancedness while maintaining similar clustering quality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I am not sure if this work is interesting to the ICLR community. This paper deals with unsupervised ranking, which can be regarded as unsupervised learning, but it may not fall into the scope of \"representation learning,\" which is \"generally referred to as deep learning\" as indicated on the official website of ICLR 2025. This paper may be more suitable for some venues on data mining or network science, for example.\n2. The theoretical analysis seems limited and not supportive for the balancedness of the proposed methods. The analysis relies on several simplifications: for example, the number of underlying communities is $2$, the size of all the cores and peripheries are the same, and $t=1$ in Theorem 3.6. The authors do claim that the analysis can be extended, but do not provide further explanations. On the other hand, although Theorem 3.6 and Lemma A.7 verify that the relative centrality scores of core vertices are close to $1$ and larger than those of periphery vertices, this does not imply that the induced ranking is balanced, since the scores in one community may be all larger than those in other communities.\n3. The paper is messy and needs significant improvement in presentation and layout. First, there lacks a detailed section on related work, making it hard to judge the contribution of this paper and its relevance to the ICLR community. Although Section 2.1 discusses some related work, it is brief and only concerns part of the contributions of the paper. Second, the text for the proposed assumption, methods, and analysis are not structured nicely, which is somewhat confusing. In particular, the order of the main text is not consistent with the contributions outlined in Section 1.1. Finally, there are numerous writing issues that affect the readability of the paper, as listed below.\n4. Some background concepts are not explained clearly. 
For example, the meaning of the single-cell data, the metrics of NMI and purity, and the \"onion\" baseline are not introduced clearly enough.\n5. The experiments only focus on single-cell datasets, which is limited. More experiments on other types of networks (e.g., social networks) are expected, and the number of tested single-cell datasets can be reduced.\n\nMinor issues:\n\n1. Most (if not all) citations in this paper should use `\\citep{}` instead of `\\cite{}`, so that the author names are placed in the parentheses. Also, the authors should cite the published version instead of the arXiv version of some papers.\n2. Line 163: \"community structure\" -> \"core-periphery structure\".\n3. Line 287: here the notation $k$ is ambiguous since it has a different meaning in Line 286.\n4. Line 352: \"$N_{G}(v_{j})$\" -> \"$N_{G}(v_{i})$\". Also, here the term \"neighborhood\" should be specified as \"in-neighborhood\" or \"out-neighborhood\".\n5. Line 762: \"upper bounded\" -> \"lower bounded\".\n6. There are some grammatical issues or typos:\n\t1. Lines 22-23;\n\t2. Line 74: \"for e.g.,\" -> \"e.g.,\";\n\t3. Line 130: \"behind of\" -> \"behind\";\n\t4. Line 180 and other occurrences: \"w.r.t\" -> \"w.r.t.\";\n\t5. Line 264: remove \"is defined\";\n\t6. Lines 419 and 1012: remove repetition of \"look at\";\n\t7. Line 463: remove \"to\";\n\t8. Lines 75 and 270: add space before left parentheses;\n\t9. Lines 192-193: the parentheses are not matched;\n\t10. the hyphens in compound words should be used correctly and consistently. For example, it should be \"centrality-measure-based\" in Line 17 and \"Core-Periphery structure\" in the caption of Figure 2(b).\n7. I recommend to beautify the layout of the paper:\n\t1. the captions of Figure 4 are hard to read;\n\t2. the annotation in Lines 352-353 are separated across lines;\n\t3. Table 1 and Figure 5 are placed weirdly;\n\t4. there should be punctuations around multiline math expressions;\n\t5. 
math expressions are not aligned nicely, and the ones at the top of page 15 are aligned terribly;\n\t6. the font of notations in math expressions is inconsistent in many places (e.g., $k$, $\\mathrm{deg}(\\cdot)$, and $o(\\cdot)$ in Lines 849-856);\n\t7. the text font changes in Lines 1332-1346."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We provide a novel approach to design balanced unsupervised ranking algorithms, improving on a large class of centrality measures, along with applications on data with underlying communities"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024balanced,\ntitle={Balanced Ranking with Relative Centrality: A multi-core periphery perspective},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=21rSeWJHPF},\nnote={under review}\n}"
},
"abstract": {
"value": "Ranking of vertices in a graph for different objectives is one of the most fundamental tasks in computer science. It is known that traditional ranking algorithms can generate unbalanced ranking when the graph has underlying communities, resulting in loss of information, polarised opinions, and reduced diversity (Celis, Straszak \\& Vishnoi [ICALP 2018]).\n\nIn this paper, we focus on *unsupervised ranking* on graphs and observe that popular centrality measure based ranking algorithms such as PageRank may often generate unbalanced ranking here as well. We address this issue by coining a new approach, which we term *relative centrality*. Our approach is based on an iterative graph-dependent local normalization of the centrality score, which promotes balancedness while maintaining the validity of the ranking.\n\nWe further quantify reasons behind this unbalancedness of centrality measures on a novel structure that we propose is called multi-core-periphery with communities (MCPC). We also provide theoretical and extensive simulation support for our approach towards resolving the unbalancedness in MCPC.\n\nFinally, we consider graph embeddings of $11$ single-cell datasets. We observe that top-ranked as per existing centrality measures are better separable into the ground truth communities. However, due to the unbalanced ranking, the top nodes often do not contain points from some communities. Here, our relative-centrality-based approach generates a ranking that provides a similar improvement in clusterability while providing significantly higher balancedness."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Ranking algorithms",
"community structure",
"clustering",
"balanced ranking",
"centrality measures"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/19a98acc87c1e10f6fb1b86f60fa8c184c0daf21.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/efac1a80226286c6e8c3066da5de5578035593c1.zip"
},
"title": {
"value": "Balanced Ranking with Relative Centrality: A multi-core periphery perspective"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
22ywev7zMt | On the Out-of-Distribution Generalization of Self-Supervised Learning | main | Active | Self-Supervised Learning;Representation Learning;Out-of-Distribution | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;6;6 | 4;2;3 | 2;3;3 | 2;3;3 | 1;2;3 | 5 | 3 | 2.666667 | 2.666667 | 2 | -0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Why does $f^\\star$ maximize the loss function in L225, since the proof indicates minimization instead?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed rebalancing technique can be embedded into general SSL procedures, whether discriminative or generative, allowing for wide applicability.\n\n2. Experiments are extensive in scope, covering both discriminative and generative SSL (appendix). Multiple learning tasks under distribution shift is considered, including semi-supervised, transfer learning and few-shot learning. A clear improvement of around 3% in accuracy is reported for most results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper inspects SSL from a causal perspective, which assumes a SCM for generating augmentations in both generative and discriminative approaches. To address spurious correlations between images and their non-semantic features, e.g., backgrounds and styles, the paper proposes rebalancing the training batches by sampling to decorrelate images from their non-semantic features. Experiments show enhanced performances across various existing SSL methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In general, I am not convinced that the proposed SCM for generating augmentation, especially the characterization of spurious correlation, is relevant for OOD generalization of SSL. \n\n1. The proposed SCM and the rebalancing strategy does not address the identifiability of spurious variables in the context of SSL. Spurious variables $s$ in supervised machine learning are the variables rejected by the conditional independence test $Y \\perp e | s$, where $e$ is the group label. However, spurious variables are generally not identifiable without labels. Literature has introduced inductive bias, e.g., simplicity bias, to identify spurious variables for SSL [1]. However, the SCM in the paper does not consider similar assumptions to address the identifiability of $s$. For example, in Figure 1(b), $s$ and $x^{label}$ (raw image) hold symmetric roles in the SCM. Since $s$ is learned as a latent variable by variational inference, there can be infinitely many solutions of $s$. The identifiability results in Theorem 4.3 does not resolve the identifiablity of $s$, because it depends on the condition that $p(x^+|x^{label},s)$ is learned, implying $s$ has been identified.\n2. The conditional independence implied by the SCMs may not reflect practice in SSL. The PID SCM in Fig.3 models the statistical independence between styles or backgrounds (s) and images ($X^{label}$), but the style and background can be directly identified from the image in practice. In general, $s$ is always measurable with respect to $X^{label}$. Similarly, both $X^{label}$ and $s$ are direct causes of $X^{+}$ in Fig.2, which is also inconsistent to the augmentation practice that takes as input the raw images only, since the background is just part of the raw image. Does this paper consider a special augmentation procedure? \n3. I identify a gap between self-supervised representation learning, whose target is $p(X^{label})$, and the models used in theory. 
The binary classification model in Proposition 3.1 learns the density ratio $p(X^{label})/p(X^{+})$, and the \"alignment\" model in Theroem 3.4 learns $p(X^{label}|p^{+})$. The paper has not addressed that a non-spurious classification model or \"alignment\" model implies a non-spurious generative model. A simple counterexample: assume that the augmentation procedure retains the style of the image. The classifier does not depend on the style to distinguish between anchor and augmentations because they share the same style. However, styles can still be learned by the generative model.\n\nMoreover, I think this paper can be substantially improved in writing for its message to be more effectively conveyed.\n\n4. Some concepts and statements are not well defined and formulated. \n - The \"task distribution\" is not defined. In L131-132, a statement is made that \"this framework involves estimating the true task distribution from discrete training tasks, enabling the SSL model to generalize to new, unseen tasks (i.e., test tasks).\" Is task modeled as a random variable? What does task correspond to in the SCM? For example, if task refers to batch index in Fig.2(b), then generalization is essentially impossible because training and test batch indices do not overlap. If task refers to batches of X in Fig.2(b), than generalization is only possible when the image batches are i.i.d., which is irrelevant to OOD generalization. In L157, the author states that s, denoting the style or background, does not contain any causal semantics related to the task. This statement contradicts the SCM in Fig 1 as well, where s is a direct cause of X+. Therefore, the definition of \"task\" is more vague here.\n - What does the statement mean that \"$x^{label}$ is regarded as the label\" (L245), since $x^{label}$ is the raw image? A formulation of this equivalence may help improve clarity.\n5. 
Models, assumptions and theorem statements are not explicitly presented.\n - I understand the benefits for deferring formal theorem statements to the appendix. However, the formal statement of Proposition 3.1 is missing in both the main text and the appendix. The assumption of mixture of gaussians, balanced labels, equal dimensions between spurious variables and images, and the model of binary classification are all woven into the proof.\n - This paper models the SSL procedure by two parts: a classification model and a conditional generative alignment model. The formulation of the classification model is mixed in the proof. The alignment model is not formulated until Theorem 3.4. However, since the learning procedure is repeatedly mentioned throughout the theory, I suggest a clear statement of the models at the beginning.\n6. Multiple notations are unexplained.\n - The notation $L^{PID}$ in L225 is vague because PID is a family of distributions. Which distribution is the loss evaluated with respect to? \n - Similarly, $\\perp_{PI}$ in L217 is also unexplained. Is the independence condition satisfied for all PI distributions?\n - nu in L313 and mu in L315, 333.\n7. The implication of the identifiability result in Theorem 4.3 is insufficiently addressed. Also related to the first point, what does the equivalance in Definition 4.2 imply for the identiability of spurious variables, and more importantly, the generative model?\n\nMinor points:\n\n8. Experiments are in relatively small scale. The results are presented for Imagenet-100 instead of the more popular ImageNet-1k. Models are trained with a limited number of epochs.\n9. There has been theoretical and empirical analysis of the vulnerability of SSL to spurious correlation, e.g., in [1]. Related work on spurious correlation in SSL can be reviewed to establish the paper's position in the broader literature. \n\n[1] Hamidieh, K., Zhang, H., Sankaranarayanan, S., & Ghassemi, M. (2024). 
Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "There are also a few concerns/typos that need to be taken care of for better readability: \n\n1. In Algorithm 1, the steps, especially the count of i, seem to be a bit confusing. It may be better to write: \"Set $i \\leftarrow 0$\", and select the initial pair $(x_0^+, x_0^{\\rm label})$. Then, for $i \\ge 1$, write the two steps and finally add, \"Set $i \\leftarrow i + 1$\". \n\n2. The number of samples mu should be $\\mu$, I guess? \n\n3. It seems that the definition of PID is the same as assuming $x^{\\rm label}$ and $s$ are independent. Maybe it would be easier to present it that way. \n\n4. What is $\\mathcal{L}^{\\rm PID}$? Is it $\\mathcal{L}^{\\rm e}, e \\in \\mathcal{D}$ where $e$ satisfies PID? How is $f$ related to $F$ in Equation (1)? Is $f$ a generic function in the class of hypothesis and $F$ the true generating function? \n\n5. Are we assuming that minimizer $f^*$ is same for all distributions in $\\mathcal{D}$ that satisfies PID? \n\n6. I am a little bit confused about Assumption 3.3. In PID, we have $x^{\\rm label}$ is independent of $s$, whereas in Assumption 3.3, we also have $x^{\\rm label}$ is independent of $s$ given $x^+$. Are we assuming Assumption 3.3 for all distributions in $\\mathcal{D}$? A remark with some intuitive explanation of Theorem 3.4 would be helpful."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The main strength of the paper lies"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a training batch sampling strategy designed to enhance self-supervised learning and improve generalization beyond the training distribution. The approach is inspired by the concept of invariant causal structure across different environments: while causal relationships between features and labels remain consistent, spurious correlations vary across environments. The proposed methodology employs a constraint, PID, during mini-batch sampling, which disregards spurious correlations and supports out-of-distribution generalization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "At times, the presentation is overly technical or abstract, which might be challenging for practitioners who seek to grasp the main insights of the paper. The core message is to introduce a sampling strategy combined with a distributional constraint (PID) that encourages the self-supervised method to disregard correlations that change across domains and focus on stable, causal correlations. The objective is to enhance out-of-distribution generalization by learning these invariant structures. Adding a non-technical explanation, perhaps as a remark, on how the algorithm achieves PID enforcement would be beneficial. Please refer to my questions below for further clarification."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could you shed light on why using an exponential family distribution to model $p(s|x^{label})$?\n2. In line 227, how does Theorem 3.4 \"implies that when $\\mathcal{D}$ is sufficiently large and diverse, an optimal $f^*$ trained on one distribution will perform worse than random guessing in some other environment.\"?\n3. In line 459, why \"We can observe that the performance of BYOL rapidly deteriorates with batch size.\"? It seems that BYOL suffers from smaller performance degradation than BYOL+ours.\n4. In line 267, should it be $T_{ij}=a_{ij}\\times \\cdot$?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The perspective of converting the SSL training to a domain-generalization-like problem using an SCM is natrual and interesting.\n2. The proposed method is built with theoretical guarantees on the identifiability of the distribution parameters and the recover of the PID.\n3. The experiments cover many scenarios, including semi-supervised learning, transfer learning, and few-shot learning tasks. The improvements of the proposed method are significant."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper regards the mini-batches in the SSL training as the environments (domains) in OOD generalization problems and proposes that each mini-batch can be viewed as a multi-class classification task. Based on this formulation, the authors points out that when the similarity is measured using non-causal features, SSL will learn spurious representations. To address this issue, the authors propose to model the Post-Intervention Distribution (PID) using VAEs for each mini-batch and further propose a mini-batch sampling strategy that selects samples with similar balancing scores based on the $p^e(s|x^{label})$ learned by the VAE. The experiments demonstrat the effectiveness of the method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some points are not quite clear and need further clarification. For example, despite Theorem 4.7, it is a bit confusing that why sampling samples with the same propensity score would help to recover $p^{PI}$. It would be better to provide some high-level explanations.\n2. The authors didn't evaluate their method on classic OOD tasks like PACS, OfficeHome, ColoredMNIST, etc. Since this work aims to improve SSL's OOD performance, it would be necessary to evaluate these tasks. Otherwise, the author should explain why not doing so."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024on,\ntitle={On the Out-of-Distribution Generalization of Self-Supervised Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=22ywev7zMt},\nnote={under review}\n}"
},
"abstract": {
"value": "In this paper, we focus on the out-of-distribution (OOD) generalization of self-supervised learning (SSL). By analyzing the mini-batch construction during SSL training phase, we first give one plausible explanation for SSL having OOD generalization. Then, from the perspective of data generation and causal inference, we analyze and conclude that SSL learns spurious correlations during the training process, which leads to a reduction in OOD generalization. To address this issue, we propose a post-intervention distribution (PID) grounded in the Structural Causal Model. PID offers a scenario where the relationships between variables are free from the influence of spurious correlations. Besides, we demonstrate that if each mini-batch during SSL training satisfies PID, the resulting SSL model can achieve optimal worst-case OOD performance. This motivates us to develop a batch sampling strategy that enforces PID constraints through the learning of a latent variable model. Through theoretical analysis, we demonstrate the identifiability of the latent variable model and validate the effectiveness of the proposed sampling strategy. Experiments conducted on various downstream OOD tasks demonstrate the effectiveness of the proposed sampling strategy."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Self-Supervised Learning",
"Representation Learning",
"Out-of-Distribution"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9c505f30b83be8fae3b91141681596c848774862.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/ce25753fbe34376cbb5678ba8262b7f5821b5f4e.zip"
},
"title": {
"value": "On the Out-of-Distribution Generalization of Self-Supervised Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
23uY3FpQxc | A General Framework for Producing Interpretable Semantic Text Embeddings | main | Active | Semantic Text Embedding;Interpretability;Question Generation;Question Answering | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;5;6;6;6 | 4;4;3;4;3 | 3;2;3;4;3 | 3;2;3;3;3 | 3;3;3;3;3 | 5.6 | 3.6 | 3 | 2.8 | 3 | -0.666667 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) In order to achieve the performance of regular dense representation models, which aspects of the framework and the implementation details do the authors think are worth scaling up? Is there any interesting evidence? e.g., better datasets for generating questions; more questions to form the embedding dimensions, etc.\n2) From my understanding of the appendix, the paper uses UAE-large-v1 as the encoding model; why? What happens if the encoding model is a stronger model? Does it help with the performance of the final interpretable embeddings?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The question generation component of the framework concerns generating questions that are both discriminative and general. It groups similar texts by clustering for the generation of questions, such that nuanced questions can be asked for each group, as opposed to the simple questions in the baseline method. The concept is analogous to leveraging hard negatives in the regular training of embedding models.\n2) The authors show a good understanding of related work; the implementation and the evaluation are sound from the perspective of sentence embeddings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes CQG-MBQA, an interpretable embedding framework. The framework generates questions and binary answers about texts, and trains binary classifiers on each question. The prediction for each question forms a dimension of an interpretable embedding. It is shown that these interpretable embeddings achieve decent performance on MTEB with a reduced cognitive load for interpretability compared to baseline interpretable embedding methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Performance ablations about setups in the framework can be very interesting although currently missing (e.g., performance across different dimensionality, question difficulties, different encoding models, etc..).\n2) Implementation details can be moved more to the main paper as they are mostly in appendices."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Does the CQG-MBQA framework need to generate a new set of yes/no questions every time it is applied to a different dataset? If so, is there a way to enhance the generalizability of the CQG-MBQA model? In other words, could a more universal set of yes/no questions be designed to handle multiple tasks/datasets, rather than creating a separate set tailored to each specific task/dataset?\n\n2. Figure 4 shows that with around 3,000 questions, CQG-MBQA can achieve high-quality text embeddings on STS tasks, and using additional questions does not improve embedding quality; instead, it decreases interpretability. Does this imply that the yes/no questions generated by CQG have semantic overlap or inclusion relationships? In other words, is there a large number of semantically similar questions within the set of yes/no questions?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. CQG-MBQA focuses on producing interpretable embeddings, which is important for domains requiring transparency.\n2. Compared with QAEmb, CQG produces more discriminative questions.\n3. By integrating MBQA, the framework achieves cost-effective embeddings compared to LLM-based alternatives.\n4. This paper conducts extensive experiments on semantic textual similarity, retrieval, and clustering tasks, showcasing its utility and competitiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces CQG-MBQA (Contrastive Question Generation - Multi-task Binary Question Answering), a general framework to create interpretable semantic text embeddings for NLP tasks. This framework emphasizes interpretability, which is essential for tasks requiring transparency, such as legal and medical applications. Traditional black-box text embedding methods, while effective, lack interpretability, limiting their utility in such cases. By comparison, CQG-MBQA is able to generate interpretable semantic text embeddings via binary yes/no questions. To be concrete, this framework first generates binary yes/no questions through contrastive question generation (CQG) using GPT-4o-mini for the entire corpus. Then, it fine-tunes a multi-task binary question-answering (MBQA) model by distilling knowledge from GPT-4o-mini. In this way, one can use MBQA to create the interpretable embeddings for text, without relying on LLMs, thus reducing the API costs. The experimental results show that CQG-MBQA performs comparably to advanced black-box models, and outperforms other interpretable text embedding models across various downstream tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please refer to the Questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Did you evaluate your model on the fMRI task presented in the QAEmb paper for a direct performance comparison?\n- How do variations in the number of initial clusters (k) or the use of different clustering methods affect the performance of CQG?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The ideas behind CQG and MBQA are novel and effective, supported by thoughtful experiments and ablation studies. The paper is clearly written and well-structured. Given the increasing demand for model transparency, CQG-MBQA could have significant implications and represent a meaningful approach that would be of interest to the ICLR audience.\n\n- The paper builds upon QAEmb with important innovations: a contrastive approach to question generation that improves discrimination using positive, hard negative, and easy negative samples for fine-grained specificity, and a multi-task model for efficient inference without the use of LLMs.\n- The technical quality is demonstrated through comprehensive empirical evaluation across multiple tasks and strong baselines including both interpretable and black-box models, with clear cost analysis showing significant efficiency gains through MBQA during inference. For reproducibility, detailed implementation specifics and code are provided."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces CQG-MBQA, a framework designed to produce interpretable semantic text embeddings for diverse NLP tasks. The framework uses Contrastive Question Generation (CQG) to automatically generate meaningful yes/no questions without domain experts. The Multi-task Binary Question Answering (MBQA) model answers these questions, producing embeddings with human-interpretable dimensions at a much lower cost than answering with LLMs, while maintaining comparable accuracy. The authors validate CQG-MBQA through experiments, comparing it to black-box and interpretable models across STS, retrieval, and clustering. The experimental results show that CQG-MBQA offers better embedding quality than existing interpretable models and provides comparable embedding quality to several black-box models, maintaining high interpretability and efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In retrieval tasks, there is a significant performance gap compared to black-box models, and the performance is also lower than BM25. Therefore, additional performance comparisons are needed when applying them to various downstream tasks such as sentiment classification and retrieval. \n- Lack of ablation studies to assess the efficacy of the proposed approach\n - lack of comparison between different models in Figure 4 and 5, and lack of comparison between the MBQA method and directly using the LLM’s outputs.\n - comparison between vanilla CQG with positive and hard/easy negative, and CQG with positive and negative samples\n - comparison between having and not having the probing mechanism to refine the generated questions\n- Also, because the cognitive load is defined using the dot product, this measure would be directly influenced by the total number of questions. A normalized version (e.g., dot product divided by number of questions) would provide a fairer comparison across different interpretable models in Table 5.\n- Having cost analysis would be beneficial as MBQA requires significant LLM inferences (or API calls) during training time and even more may be required during Post-processing (probing). \n- Including a case study on bad examples would also be beneficial—for instance, displaying cases where two texts result in similar embeddings even when those two texts do not necessarily have similar semantics. Are they completely off? Or how could one improve your approach?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- What inspired this measurement of cognitive load?\n- How much inference time does it take to run on the retrieval datasets?\n- How were the retrieval datasets chosen?\n- How much does the QA model quality affect embedding quality?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This paper tackles the important problem of creating interpretable text embeddings\n- Some of the steps laid out in the \"framework\" explanation will be useful for other practitioners\n- The source code is available and could be used by other researchers and engineers to build systems for interpretable embeddings\n- Consideration of the tradeoff between interpretability and quality is interesting – although I have qualms with the \"cognitive load\" measurement of interpretability, which are mentioned below."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a \"framework\" for creating interpretable semantic embeddings. They tackle the important and relevant problem of creating embeddings that are useful for search & clustering but also understandable to humans."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I find it important to point out that this paper isn't really proposing a \"framework\" in a very general sense; it's much closer to a method (and in fact the authors interchange the two words liberally throughout the paper). For this reason I object to calling it a framework at all and would prefer the paper to be about CQG-MBQA, which is an interesting and apparently effective method for interpretable text embeddings.\n- As a related point, the organization is confusing. Currently the paper mixes together the \"framework\" part (which should be a general process for producing interpretable embeddings) with the \"method\" part (about CQG-MBQA) and also some \"experimental setup\" and \"results\" (all in Section 3). As one example of the murky sectional boundaries, is post-processing really a necessary step of the framework?\n- I'm also not sure if the Case Study is exactly a Case Study.\n- The cognitive load metric seems unprincipled and lacks grounding in real cognitive science.\n- Cognitive load is simply the number of overlapping \"yes\" answers (or 1s in the embeddings) between the representations of a pair from an STS dataset. It is highly dependent on dimensionality and sparsity (Figure 4 & 5). It also doesn't really make sense because the interpretability of an embedding should depend on how many yes's there are for a pair *compared to other pairs*; embeddings cannot be understood simply by looking at the inner product of a pair of embeddings.\n- Many of the important design decisions in the framework are not ablated. Is filtering important? How much does choosing positive and negative samples matter, or clustering? How much does training a surrogate model affect performance?\n- This is not necessary to me for accepting the paper, but a real human study could be crucial for arguing that these embeddings are in fact more interpretable\n- Due to the complicated system and lack of ablations, it is not easy to understand why these embeddings outperform other interpretable embeddings such as QAEmb\n- Unclear cost analysis of running this method on a downstream dataset\n\nI think this citation could be relevant:\n- Learning Interpretable Style Embeddings via Prompting LLMs (Patel et al., EMNLP Findings 2023)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Can the authors provide a comparison with the baseline including the sparsity penalty? Maybe showing the performance as a function of the number of questions kept?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The authors study an interesting and important problem\n- They obtain strong performance results with reasonable efficiency\n- They evaluate interpretability and cognitive load well, in addition to more standard performance metrics"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce CQG-MBQA, a general framework for producing interpretable semantic text embeddings. It builds these embeddings from a set of yes/no questions that are designed to be highly discriminative (by separating text clustered by a pre-trained embedding model). To improve efficiency, the answers to these yes/no questions are distilled into a smaller model. The CQG-MBQA model reveals improvements relative to baseline interpretable models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The main issue seems to be the authors’ treatment of related work: the CQG method generates questions through prompting and filters them based on their discriminative ability. The baseline QA-Emb also generates questions through prompting but filters them with a sparsity penalty. From their description, the authors don’t seem to implement the sparsity penalty, which likely skews the comparisons.\n- The authors should discuss this [2023 style embeddings paper](https://arxiv.org/abs/2305.12696), which was an early precursor to the work here\n- The authors should clarify whether distilling the yes/no answers into a single model is a novel contribution — the style embeddings paper & QA-Emb paper both seem to do this as well"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce a general framework for producing interpretable semantic text embeddings across diverse tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A General Framework for Producing Interpretable Semantic Text Embeddings},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=23uY3FpQxc},\nnote={under review}\n}"
},
"abstract": {
"value": "Semantic text embedding is essential to many tasks in Natural Language Processing (NLP). While black-box models are capable of generating high-quality embeddings, their lack of interpretability limits their use in tasks that demand transparency. Recent approaches have improved interpretability by leveraging domain-expert-crafted or LLM-generated questions, but these methods rely heavily on expert input or well-designed prompts, which restricts their generalizability and ability to generate discriminative questions across a wide range of tasks. To address these challenges, we introduce \\algo{CQG-MBQA} (Contrastive Question Generation - Multi-task Binary Question Answering), a general framework for producing interpretable semantic text embeddings across diverse tasks. Our framework systematically generates highly discriminative, low cognitive load yes/no questions through the \\algo{CQG} method and answers them efficiently with the \\algo{MBQA} model, resulting in interpretable embeddings in a cost-effective manner. We validate the effectiveness and interpretability of \\algo{CQG-MBQA} through extensive experiments and ablation studies, demonstrating that it delivers embedding quality comparable to many advanced black-box models while maintaining inherent interpretability. Additionally, \\algo{CQG-MBQA} outperforms other interpretable text embedding methods across various downstream tasks. The source code is available at \\url{https://anonymous.4open.science/r/CQG-MBQA-483F/}."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Semantic Text Embedding",
"Interpretability",
"Question Generation",
"Question Answering"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b0a9ffcdaa2846fc57b8945a1e86f4db634bfd24.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "A General Framework for Producing Interpretable Semantic Text Embeddings"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
246rHKUnnf | TrackTheMind: program-guided adversarial data generation for theory of mind reasoning | main | Active | theory of mind reasoning;adversarial data generation;program-guided data generation | applications to computer vision, audio, language, and other modalities | 5;5;6;6 | 4;3;3;4 | 3;2;3;3 | 2;3;3;3 | 3;2;2;4 | 5.5 | 3.5 | 2.75 | 2.75 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* As the accuracy of LLMs on some TracktheMind data is quite low (e.g., $5$%), have you tried finer-grained metrics to assess the model's ability? For example, instead of directly forcing the model to answer yes/no, it would help to diagnose its understanding of the context by extracting its confidence regarding the question and probing the level of uncertainty in the corresponding scenario.\n\n* How are the *important actions* defined to determine a desired user condition? Is this a crucial design choice for controlling the generated data's difficulty, quality, and diversity? Would it generalize across different scenarios?\n\n* What is the background of the annotators? Does this matter for the performance in task completion?\n\n* Could you elaborate on the differences among the chosen ToM benchmarks in Table 3? Why did the last two not benefit from the TracktheMind training?\n\n* Why does the model performance on ToMi drop significantly (compared to the llama3.1-8b-instruct baseline) when training with 0% of interesting questions? It should be at least at the same level as the baseline performance unless I missed something.\n\n* It appears that interestingness and asymmetry are not the crucial factors that impact task difficulty or model performance in evaluation. What might be the cause of such misalignment/inconsistency?\n\n* OpenAI o1 with inference-time scaling may boost performance by exploring more possibilities for better state tracking. Assessing it on the TracktheMind-generated ToM data to check whether it can improve performance as expected would provide some insights. This could help to better understand the bottleneck that prevents existing LLMs from tackling such ToM reasoning tasks."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* **A controllable and generalizable data generation pipeline to collect ToM reasoning data**.\nWith a predefined ToM-specific language and a rule-based state tracker, the proposed pipeline can automatically collect ToM data of various difficulty levels with high-precision annotated labels.\n\n* **Intriguing results regarding the effect of interestingness in training and evaluation data**.\nThe superior model performance on interesting questions compared with the \"uninteresting\" ones is unexpected and insightful. This may indicate that LLMs use a mechanism different from that of humans to tackle ToM tasks.\n\n* **Details of hyperparameter settings and prompt designs**.\nThe authors provide plenty of details about the hyperparameters and the categories of actions, scenarios, etc. they consider in data construction. This ensures the reproducibility and convincingness of the experimental results in the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes TracktheMind, an adversarial data generation pipeline to collect challenging ToM data via A* search.\n\nWith adversarial control on the difficulty of the generated data, the collected evaluation data poses a significant challenge to existing LLMs.\n\nThe authors also demonstrate the effectiveness of the TracktheMind-generated data as a training corpus to enhance ToM reasoning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* **Potential bias in topics, scenarios, and stories generated by LLMs**.\nThe LLMs are included in several crucial stages of the TracktheMind pipeline. For example, the plausible context creation and sampling is important as an initial stage to determine the topics and possible actions that can be covered in the data. However, this process is done by LLMs themselves, which can introduce inherent bias that hinders the generalizability of the generated data. The authors could provide more statistics and strategies they utilize to balance the story topics and scenarios in data generation to better fit real-world situations.\n\n* **Lack of detailed discussion on the exact cost of data generation via A\\* search**.\nA\\* search can be computationally expensive as the size of the search space increases. The authors mentioned that they reduced the cost by restricting the number of neighbors to consider in $f(x)$ evaluation. The authors could elaborate on how this hyperparameter balances the quality, diversity, and cost of data generation and clarify the exact cost (e.g., #tokens) required in different settings. This could help estimate the proposed method's efficiency and how it would work in practice.\n\n* **Lack of deep analysis to disentangle the specific factors that bottleneck the LLM ability of ToM reasoning**.\nThe results of the ablation on #people and #actions in Figure 3 are a bit confusing. On the one hand, the number of actions seems to matter, as fewer actions per person reduce the task difficulty. On the other hand, the increase in the number of actions makes little difference in the model performance in the right plot. Unless the variance in performance causes this, given the limited ranges of #people and #actions or the number of test samples considered, there might be some factors (or even spurious features) that dominate the model performance. For example, the number of people and actions may not be directly related to the reasoning steps required to answer some ToM questions, whether interesting or not. The authors could provide some meso-analysis on factors that can reflect the task difficulty more directly."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- In the title, you mention \"adversarial,\" but there is little explicit explanation of what makes the dataset adversarial. Could you expand on this concept?\n- Could you provide additional statistics on your synthetic dataset to offer a clearer understanding of its characteristics? I think detailed dataset statistics are often essential in synthetic data-related research.\n- Is there a significant difference in the quality of synthetic stories generated by different models, such as Llama3-8B-Instruct and Llama3-70B? It would be useful to investigate how the varying capabilities of these models impact the quality and characteristics of the synthetic data.\n- If time permits, could you try gathering training data from other Mind Reasoning Datasets to train the Llama3-8B-Instruct model and evaluate it on your benchmark? This cross-evaluation could offer valuable insights into model performance across datasets."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- A novel framework for synthesizing mind reasoning data.\n- A sufficiently challenging benchmark to evaluate the mind reasoning capabilities of LLMs.\n- A robust training set offering more data, a complex structure, and strong generalization potential.\n- Facilitates investigation into why mind reasoning tasks remain challenging for LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel methodology for generating program-based Theory of Mind stories using state control, difficulty evaluation, and A* search to ensure a suitably challenging benchmark. The authors conduct two main experiments:\n\n- Benchmark Evaluation: They first evaluate the performance of LLMs on the new benchmark created with TrackTheMind-generated stories. Results indicate that even advanced models struggle with this benchmark, highlighting its potential as a rigorous test for mind reasoning.\n\n- Model Fine-Tuning with Synthesized Data: Using their framework, the authors synthesize training data to fine-tune a model, resulting in significant improvements on both in-domain and out-of-domain benchmarks.\n\nAdditionally, the authors offer insights into potential factors contributing to the observed limitations in model performance on mind reasoning tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The predefined action sets may limit the variety and richness of the story, potentially constraining creativity and depth.\n- Other weaknesses align with the questions section, where I have shared thoughts on things needing further explanation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In addition to the questions above, I have the following question:\n\n1- In the caption of Figure 3, the authors mention that \"A story with greater number of people suggests lower difficulty, possibly because there is a fixed number of actions, thus fewer actions per person.\" However, I'm not sure if I completely followed the reasoning here. When representing the state programmatically, we need to include the status of each person before/after action. So I would argue the number of people has an impact on the state size, and also total number of actions has an impact on number of times we need to update state. Thus, both of them should have an impact on difficulty, but Figure 3 shows otherwise. Could the authors explain this?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1- Do large language models (LLMs) have theory of mind? I think this is a very important research question!\n2- Overall, the paper does a good job of presenting arguments and claims.\n3- The proposed benchmark seems to be very challenging for LLMs, as indicated by the results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces TrackTheMind, a framework for generating challenging theory of mind (ToM) testing and training data for LLMs. To generate stories, this work samples plausible contexts, uses A* search to find challenging story structures, and infills these with an LLM. The results show that LLMs seriously struggle on some scenarios, potentially due to poor state tracking skills and the scarcity of training data that specifically requires ToM reasoning, which can be alleviated to some degree by finetuning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1- The paper argues that \"basic theory of mind is still elusive to LLMs,\" and thus this \"demonstrates the need for training data that purposefully requires theory of mind.\" Do the authors think the lack of theory of mind skills can be \"resolved\" (we know it can be \"alleviated\") with enough training data? The results on the FANToM benchmark in Table 3 suggest that even finetuning on 114,000 data points of TrackTheMind does not necessarily improve the theory of mind abilities of LLMs. Instead, the reported gain can be explained by the fact that the proposed benchmark is similar to some benchmarks, and by training on TrackTheMind data, models can perform better on similar benchmarks like ToMi without really developing an internal skill that can be generalized across other scenarios.\n\n2- While providing a new benchmark is a contribution, in terms of \"new insights,\" it is not very clear to me how much contribution this work makes. Several other works are suggesting the lack of abilities in the context of theory of mind. But it is not clear to me what \"new\" insights this work offers researchers that cannot be obtained from other similar works.\n\nWhile I appreciate the effort for development of this new and challenging benchmark, the work falls short of providing novel insights into theory of mind capabilities in LLMs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses part."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper is well-structured and generally easy to follow, except for Section 2, especially in describing the overall TrackTheMind pipeline and the description of the A* search.\n2. The types of ToM questions considered are comprehensive, especially those containing asymmetric belief updates, which can create complex questions.\n3. The ToM question generation process is automatic, and given the 'tree' structure, its correctness can be easily verified."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the TrackTheMind method, which is used to generate a theory of mind story with specific constraints, such as having exactly 3 people in the story.\n\nGenerally speaking, TrackTheMind is a tree search process. It starts from a \"root node\": TrackTheMind uses an LLM to generate a context, including characters, environment, objects, and possible actions. Then, it generates n leaf nodes from this node, where each leaf node can contain n actions that modify the environment state. Among these n leaf nodes, A search is used to select one while discarding the others. The A value function f(s) = g(s) + h(s), where g(s) is the accuracy rate of all questions that the LLM can generate at leaf node s, and h(s) is the probability that subsequent nodes from this leaf node can fulfill the specific constraints.\n\nThe authors first used TrackTheMind to generate evaluation data, demonstrating that current LLMs still need improvement in their performance on complex theory of mind datasets. Furthermore, the authors used TrackTheMind to generate training data, and experimental results showed that this training data can effectively improve the model's theory of mind capabilities while maintaining the model's basic utility."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. First, how can we quantitatively evaluate the complexity of the generated ToM stories? If complexity is quantified by the number of people and actions involved, why do the experiments in Fig 3 show that model performance increases as the number of people involved increases?\n2. In A* search, g(s) requires to evaluate LLM performance of the entire question generated by state s, which maybe time-consuming.\n3. The authors demonstrated that models trained on the TrackTheMind training set largely maintain their utility. However, only Multi3Woz and MMLU were evaluated. I expect to evaluate it on more common datasets as it is easy to implement.\n4. In Section 2.1, the story context structure is simple and may not be general enough for complex, real-world scenarios."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We develop an A*-powered algorithm for adversarially generating challenging and diverse theory of mind data, that can be effectively used as to stress-test LLMs capabilities or as fine-tuning data"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024trackthemind,\ntitle={TrackTheMind: program-guided adversarial data generation for theory of mind reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=246rHKUnnf},\nnote={under review}\n}"
},
"abstract": {
"value": "Do large language models (LLMs) have theory of mind? A plethora of papers and benchmarks have been introduced to evaluate if current models have been able to develop this key ability of social intelligence. However, all rely on limited datasets with simple patterns that can potentially lead to problematic blind spots in evaluation and an overestimation of model capabilities. We introduce TrackTheMind, the first framework to allow large-scale generation of diverse and challenging theory of mind data for robust training and evaluation. Our approach leverages an A* search over a custom domain-specific language to produce complex story structures and novel, diverse, yet plausible scenarios to stress test the limits of LLMs. Our evaluation reveals that state-of-the-art LLMs, such as Llama-3.1-70B and GPT-4o, show accuracies as low as 5% on TrackTheMind-generated data, highlighting the need for more robust theory of mind evaluation. As our generations are a conceptual superset of prior work, fine-tuning on our data yields a 26-point accuracy improvement on the classic ToMi benchmark (Le et al., 2019). TrackTheMind also enables uncovering underlying skills and factors missing for models to show theory of mind, such as unreliable state tracking or data imbalances, which may contribute to models' poor performance on benchmarks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"theory of mind reasoning",
"adversarial data generation",
"program-guided data generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a875c46264ed4dae5731c8fbd4088fd1330c66e9.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "TrackTheMind: program-guided adversarial data generation for theory of mind reasoning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
254NJe9JEw | A deep inverse-mapping model for a flapping robotic wing | main | Active | robotics;control;flapping drones;deep learning;time series;inverse mapping;sequence to sequence | applications to robotics, autonomy, planning | 5;5;6;6 | 3;4;4;4 | 2;3;3;3 | 2;3;3;2 | 3;3;3;3 | 5.5 | 3.75 | 2.75 | 2.5 | 3 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Around 1000 experiments were performed for the hyperparameter search of the Seq-2-Seq ASL model. Was the same search conducted for other models in Table 2 ?\n2. What is the effect of scaling this model on the MAE ? Considering the goal of this effort is to deploy on highly compute constrained platforms, it would be interesting to see if this model scales better than a transformer."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Extensive hyperparameter search to get optimal results.\n2. Data collection is commendable.\n3. Valid ablation and limitation sections have been provided."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper releases a deep learning architecture to model inverse dynamics of a flapping wing which addresses the challenge of controlling such complicated and intricate systems. They also developed an experimental setup to collect data on the wing motion using high speed cameras. Their model uses a sequence-to-sequence framework enhanced with a frequency domain layer for adaptive learning, outperformed baseline models on author collected test data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The results present in Table 2 does not present more significant information than what is present in figure 5. Instead the ablation results can be moved to the main manuscript from the supplementary material\n2. The Seq-2-Seq ASL model does not outperform the transformer on the open source dataset. But does perform better on the authors dataset. An explanation for this would greatly help the contribution of this paper. \n3. The abstract should clarify that the 11% improvement is over the median since over mean the model performs worse than the baselines."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "One of the contribution is the whole pipeline to collect the data. As for the ASL structure, I am not sure the necessity of the complex network structure. One core aspect of the paper is modeling aerodynamics, and similar work exists in the UAV field, such as NeuralFly. Therefore, this core contribution or novelty needs to be better clarified by the authors."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**Originality:**\n- Novel inverse mapping approach for flapping wing control\n- Creative integration of frequency domain processing (ASL) with sequence learning\n- New experimental setup combining force and motion measurements\n- Innovative application of deep learning to fluid dynamics control\n\n**Quality:**\n- Rigorous experimental validation:\n * Two different datasets (air and viscous fluid)\n * Comprehensive ablation studies of ASL components\n * Clear performance metrics and comparisons\n- Thorough implementation details:\n * Full hyperparameter specifications\n * Clear architectural choices\n * Reproducible results\n\n**Clarity:**\n- Well-structured presentation\n- Clear problem formulation and motivation\n- Detailed technical explanations with supporting figures\n- Comprehensive supplementary materials\n- Open-source data and framework\n\n**Significance:**\n- Practical impact:\n * Real-time capable control system\n * Improved performance (11% over state-of-art)\n * Direct application to existing robotic systems\n- Broader implications:\n * Framework applicable to other complex dynamic systems\n * Potential applications in biomedical devices\n * Open datasets for future research\n- Technical contributions:\n * New insights into frequency domain processing\n * Improved understanding of flapping wing dynamics\n * Efficient model architecture design"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a machine learning approach to control flapping-wing robots by developing a model that determines how wings should move to achieve desired forces. The key innovation is combining a sequence-to-sequence neural network with a new Adaptive Spectrum Layer (ASL) that better handles periodic motions. Tested on experimental data, the approach shows 11% improvement over existing methods and provides practical real-time control capabilities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- typo L418: (forces vs. force and torque) \n- paper mentions different measurement types between datasets without explaining impact on model or justification\n- What is the sim2real gap for the real wing-driven robot? How to narrow the sim2real gap to make the research more useful.\n- Can you scale to multiple degrees of freedom? how to evaluate the scaling?\n- Can you scale to different geometry and material? How to evaluate?\n- What are the flight conditions?\n- Any analysis of frequency selection? Why 100Hz/210Hz?\n- More implementation details could be provided: synchronization for different sensors, delay?\n- How is the sensor data aligned between cameras (10,000 fps) and force sensors (5,000 samples/sec)?\n- What's the real-time performance on actual hardware? Processing delays?\n- Any stability analysis or guarantees for the control system?\n- How does the system handle disturbances or noise?\n- Is there any theoretical justification for the model architecture choices?\n- How generalizable is this approach across different Reynolds numbers?\n- How about the performance of some basic NN structures, like MLP/LSTM/RNN?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "There are a few questions I have for the authors and these are included below. \n\nPlease would the authors perform the statistical test to compare the statistical differences between MAE of their method against the baselines. \n\nSince ASL is a novel contribution of the authors, I would emphasise this to a greater degree in the abstract and the introduction. Is there a reason for not doing this?\n\nPlease could you point me to where you state the size of the dataset and evaluation sets used to compare results against the baselines.\n\nI have a further optional suggestion for authors. Can the authors comment on the frequency spectrum present in the data? \n\nThis question is more one of curiosity, but may be of interest to other readers: I have a question about the RFFT and the IRFFT. How do you implement this in practice? I had assumed that most “fast” implementations are non-differentiable and I am curious which implementation you used."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper has a number of strengths. The experimental setup is unique and interesting to the robotics community. The authors present a novel layer Adaptive Spectrum Layer (ASL), which in the experiments section is shown to improve the overall prediction performance. The choice of baselines are appropriate. The paper is well presented and clear to follow. The paper is significant."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel method for mapping the complex dynamics of a wing flapping to output thrust. The method employs a sequence to sequence model with a GRU encoder and decoder. The authors present a novel layer dubbed the Adaptive Sequence Layer (ASL) to attend to features across the frequency spectrum of the input sequence. The authors use two datasets: one is created by building a mechanical flapping wing and the second is an open source wing flapping dataset. The Seq2Seq+ASL method outperforms baseline methods such as Seq2Seq without ASL and a Transformer. The paper makes a clear and useful contribution to literature."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Despite the strengths of the paper, there are a few weaknesses, but these should be easily addressed. Some of the key figures such as Fig. 1 and 2 are quite small. A recommendation would be to increase the size of these at the expense of some of the text of by resizing Fig. 5. For example, the abstract and introduction are a little on the verbose side. Nevertheless, these paragraphs are clear. \n\nThere are many comparisons between different baseline methods, which is good. However, it would be beneficial to have a statistical test to show that the improvement in performance between Seq2Seq+ASL and the other baseline methods is statistically significant. In Figure 5, there is not that much difference between the Seq2Seq and Transformer models. A statistical test such as Mann Whitney U-test to compare the differences between the MAE losses. I leave the choice of statistical test to the authors discretion."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The equation (2) is called the one-step-ahead prediction model. Does \\tau refer to the one-step size? However, \\tau in equation (1) is defined as a period. It is confused.\n\n2. Through the learned inverse model, the desired attitude can be obtained from the desired force. In real application, how to plan the desired force and how to implement the obtained attitude? A special control framework can be provided to illustrate the implementation of the learned inverse model. It is wondered if the uncertainty that exists in the control loop would influence the learning performance."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Adaptive spectrum layer enhanced seq2sep learning framework.\n2. Real experimental tests.\n3. Clear presentation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presented an adaptive spectrum layer enhanced seq2sep deep learning framework to learn the inverse-mapping model of a flapping robotic wing. The employed adaptive spectrum layer is found to have the advantage of learning the periodic features in the frequency domain. Overall, the presentation is clear, and a real experimental test is conducted. However, some important improvements are further needed, mainly including the theoretical contributions and more practical tests."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It seems the main contribution of this paper is the introduction of the adaptive spectrum layer in seq2sep prediction missions. Empirical effects are provided in the results (although the improvement provided in Table 2 seems not very promising). The innovation is relatively weak. The theoretical principle and intuitive explanation behind the learning architecture should be further abstracted and discussed.\n\n2. The implemented experiments are relatively simple. The real applied scenarios are usually more complicated, such as external disturbance, unknown dynamic model, and actuator uncertainty. The generalization problem of the developed learning architecture is not considered. To improve contributions, it is highly recommended to further include the learned inverse model in the online control framework, not just offline demonstration on the dataset of a flapping wing. Moreover, a comparison with a traditional control method (e.g., MPC) is needed.\n\n3. Some presentation errors. Every abbreviation that appears first in an article should be given its full name, such as FFT. All symbols employed in Fig 3 should be illustrated."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We solved an inverse-mapping problem of robotic, flapping-wing systems, by learning the input wing motion required to generate a desired aerodynamic force outcome. This framework is expected to simplify the control of such complex systems."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A deep inverse-mapping model for a flapping robotic wing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=254NJe9JEw},\nnote={under review}\n}"
},
"abstract": {
"value": "In systems control, the dynamics of a system is governed by modulating its inputs to achieve a desired outcome. For example, to control the thrust of a quadcopter propeller the controller modulates its rotation rate, relying on a straightforward mapping between the input rotation rate and the resulting thrust. This mapping can be inverted to determine the rotation rate needed to generate a desired thrust. However, in complex systems, such as flapping-wing robots where intricate fluid motions are involved, mapping inputs (wing kinematics) to outcomes (aerodynamic forces) is nontrivial and inverting this mapping for real-time control is computationally impractical. Here, we report a machine-learning solution for the inverse-mapping of a flapping-wing system based on data from an experimental system we have developed. Our model learns the input wing motion required to generate a desired aerodynamic force outcome. We used a sequence-to-sequence model tailored for time-series data and augmented it with an adaptive-spectrum layer that implements representation learning in the frequency domain. To train our model, we developed a flapping-wing system that simultaneously measures the wing's aerodynamic force and its 3D motion using high-speed cameras. We demonstrate the performance of our system on an additional open-source dataset of a flapping wing in a different flow regime. Results show superior performance compared with more complex state-of-the-art transformer-based models, with 11% improvement on the test datasets. Our open-source data and framework may improve modeling and real-time control of systems governed by complex dynamics, from biomimetic robots to biomedical devices."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"robotics",
"control",
"flapping drones",
"deep learning",
"time series",
"inverse mapping",
"sequence to sequence"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/defa0182cb4ae0cafbef9a30d0551e179ba7ec3f.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/1531f142a7f65578d46a6fb940f91240cb87f718.zip"
},
"title": {
"value": "A deep inverse-mapping model for a flapping robotic wing"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
25Zlvl7JxW | HQGS: High-Quality Novel View Synthesis with Gaussian Splatting in Degraded Scenes | main | Active | 3D Reconstruction;3D Gaussian Splatting | applications to computer vision, audio, language, and other modalities | 5;5;5;6 | 4;3;4;4 | 3;2;2;3 | 3;3;3;3 | 3;2;3;3 | 5.25 | 3.75 | 2.5 | 3 | 2.75 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I am curious about the rationality of generating edge maps from low-quality images. Since edge maps are generated from low-quality images, can they still effectively capture key edge information in severely degraded scenes? Can the author provide edge maps with different degrees of visual degradation and the impact of failed edge map visualization on the results? Furthermore, in severely degraded scenes, is it possible to use a pre-trained image restoration model to generate high-quality images before extracting edge maps?\nThe paper mentions that low-quality images result in sparse point clouds but does not clarify whether the ESFG module impacts the density distribution of Gaussian elements. Can the ESFG module improve the density of the point cloud while maintaining the total number of Gaussian elements? Is there a densification strategy or explanation of how the ESFG module affects 3DGS densification to better handle the sparse point clouds generated by low-quality images?\n\n2. The authors mention that the method combines high-frequency and low-frequency features. Could you provide a visualization of the number of the Gaussian elements across high- and low-frequency regions within an image to show how the method effectively handles these different areas?\n\n3. Figure 7 shows only the differences between HQGS and 3DGS. Could the authors supplement this with a robustness comparison to SRGS for a more comprehensive evaluation of HQGS’s performance?\n\n4. Could the authors provide a comparison of training times across different methods, especially discussing the impact of the ESFG module on training time?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper combines edge-awareness and semantic awareness through the ESFG module, providing essential high-frequency edge information to improve 3D Gaussian splatting (3DGS) reconstruction on low-quality images. The introduction of LSCS further enhances the global structural consistency of rendered images, which is an innovative design.\n2. The experiments cover a wide range of common degradation conditions (e.g., low resolution, JPEG compression, blur, and noise) and compare the performance of HQGS against other state-of-the-art methods. The results demonstrate that HQGS not only outperforms these methods in image quality but also maintains efficiency in rendering time."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel view synthesis method called HQGS, specifically optimized for low-quality images, such as those with low resolution, blur, and noise. HQGS employs an Edge-Semantic Fusion Guidance (ESFG) module to enhance the detail-capturing ability of 3D Gaussian splatting and introduces a Structural Cosine Similarity Loss (LSCS) to further improve global consistency in image rendering. Experimental results show that HQGS demonstrates stable performance across various degraded scenarios, outperforming other NeRF and 3DGS-based methods in metrics like PSNR, SSIM, and LPIPS."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This approach heavily relies on high-frequency edge maps. For severely degraded images, using the Sobel operator to generate edge maps may result in significant detail loss. Given the instability of edge information in low-quality images, it is questionable whether ESFG can reliably extract edge information under various levels of degradation. There is a lack of robustness experiments on edge maps to verify the applicability of this approach.\n2. The paper mentions that low-quality images produce sparse point clouds, which can negatively impact reconstruction quality. However, the paper does not clarify whether ESFG influences the density or number of Gaussian elements. If the point cloud density is insufficient, simply adjusting the distribution might not achieve optimal results.\n3. Although the paper mentions that the method combines high and low-frequency information, it does not present the actual distribution of Gaussian elements in high- and low-frequency regions of the images. A lack of intuitive visualization makes it difficult to verify the practical effectiveness of ESFG and LSCS in these regions.\n4. While Figure 7 demonstrates that HQGS exhibits greater robustness compared to 3DGS, it lacks a direct comparison of robustness with SRGS (e.g., in noisy or low-resolution scenarios). This omission limits the understanding of HQGS's robustness relative to other 3DGS optimization methods.\n5. The paper mentions only the total training iterations but does not provide specific data on training time. Given that the addition of the ESFG module may increase training costs, the paper should ideally compare training efficiency, particularly in terms of the impact of ESFG on training duration."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "How were the experiments for Table 3 and Table 4 conducted? In which scenes were they performed? What types of degradation were used?\n\nFrom the input in Figure 2a, it is impossible to see the presence of the \"power lines\". I am curious whether it is really possible to reconstruct the clear \"power lines\" in Figure 2b from such low-quality input views. How can this phenomenon be explained? Shouldn't 3D Gaussians be unable to imagine and reconstruct features that are not present (or almost completely blurred) in the input views?\n\nI noticed that the model was trained for 50,000 iterations, which is more than the number used for vanilla 3D-GS. Would this have an impact? If the model is trained for 50,000 iterations, would all other parameters remain unchanged, including those for densification? If so, do the additional 30,000+ iterations seem redundant, or are they used to mainly for the optimization of the MLPs?\n\nAre the weights of the MLP optimized individually for each scene, or are they generalized after pre-training?\n\nRegarding lines 531-532, since you have added an MLP and trained for 50,000 iterations, the training time for HQGS would at least be longer, right?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed HQGS framework effectively addresses the challenges of degraded images in novel view synthesis by introducing an Edge-Semantic Fusion Guidance (ESFG) module and a Structural Cosine Similarity Loss (LSCS).\n\nThe ESFG module enhances the distribution of Gaussian primitives and improves detail generation, while LSCS ensures global low-frequency structure consistency, leading to higher quality rendered images.\n\nExtensive experiments demonstrate superior robustness and performance in various degradation scenarios, outperforming state-of-the-art methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors identify that 3DGS performs poorly with low-quality images due to insufficient attention to detailed regions, leading to a lack of Gaussian primitives and loss of detail.\n\nTo improve this, this paper presents an approach named HQGS, including Edge-Semantic Fusion Guidance Module and Structural Cosine Similarity Loss.\n\nEdge-Semantic Fusion Guidance Module: Combines high-frequency edge-aware maps with semantic information to guide the distribution of Gaussian primitives, enhancing detail in rendered images.\n\nStructural Cosine Similarity Loss: Complements pixel-level constraints by focusing on structural similarities, further improving image quality.\n\nExperimental results demonstrate that HQGS enhances robustness and performance in various degraded scenes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The method relies heavily on high-quality edge and semantic information, which may be challenging to obtain in extremely degraded or noisy images.\n\nThe computational complexity introduced by the ESFG module and LSCS could increase training and inference times, potentially limiting real-time applications.\n\nThe presentation of the paper is not optimal in several aspects:\nFigure 1 suffers from color blending issues, making it difficult to distinguish between different color regions corresponding to various methods.\nFigure 2 is mentioned before Figure 1 in the text, which can be confusing for readers.\nTables 1 and 2 present similar results but use different formatting (one with colored text and one without), leading to inconsistency and potential confusion.\nFor Figure 5, the effectiveness of the method cannot be understood due to the lack of visualized input views."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1: In the abstract of the work, the author's state: \"The fused features serve as prior guidance to capture detailed distribution across different regions, bringing more attention to areas with a higher concentration of Gaussian primitives.\". I find this sentence to be confusing. Later in the work it becomes clear that the ESFG module emphasizes edge related information in the input images in order to adapt the layout and properties of Gaussian's to better capture key information during training. In other words, the ESFG module guides the training of the Gaussian primitives by bringing more attention to key areas of the input images. It does not bring \"more attention to areas with a higher concentration of Gaussian primitives.\" as this implies that the ESFG module is concerned with drawing attention to the density of Gaussians in the radiance field, which is not the case. It draws attention to key features of the input images and this in turn effects the density of the Gaussian primitives. I suspect this is what the authors meant, but the language is vague and admits the other interpretation. I suggest the following rewording of this sentence: \"The fused features serve as prior guidance to capture detailed distribution across different regions, bringing more attention to areas with higher semantic meaning, such as edges, in turn allowing for higher concentration of Gaussian primitives to be assigned to such areas.\".\nQ2: On line 48, the authors state \"Our preliminary experiments (Figure 2(b)) show that, for reconstruction, the distribution of reconstructed Gaussian primitives becomes too sparse to allow the capture of fine scene details. \" Which distortions are the authors referring to here? Noise? Low resolution? Blur? Compression artifacts? All distortions? Please clarify what is being referred in this text? 
Please state clearly whether this observation refers to specific types of distortions (for example if it refers solely to blur) or whether this statement refers to all types of distortions.\nQ3: Figure 4 has a spelling error, \"Position paprameter in Gaussians\" should be \"Position parameter in Gaussians\". In addition, in the caption, the authors state \"It separately learns semantic-aware feature and edge-aware feature, and\nthen jointly guides the training of HQGS.\" Please avoid the usage of vague terms like \"It\". What is \"It\" precisely? For example a potential better sentence is: \"The ESFG module learns semantic-aware features and edge-aware features, and...\".\nQ4: Equation (2) introduces a notation for matrix multiplication that is not explained until after equation 4. Please explain notation at the point at which it is introduced.\nQ5: Line 275, \"then HQGS model it as G(x)\" should be \"then HQGS models it as G(x)\".\nQ6: Line 352, \"methods that provide codes and\" should be \"methods that provide code and\".\nQ7: Figure 7 is a pastel, set of 3D overlapping bars with partial transparency that make the plot overly artistic and hard to read. A simple set of non-overlapping groups bars would have provided the same information and been clearer.\nQ8: Figure 8 contains pastel colored, semi-transparent overlaid plots with some form of fill gradient transitions. The pastel colors are very similar and hard to differentiate in the plot. Please simplify and remove the unnecessary additional graphics. Key information like the numbers on the graphs are overlapping making them difficult to read.\nQ9: In section 3.1, the authors state that the JPEG Compression will only be studied at a quality level of 10. Please explain why this particular value was chosen and why only a singular value was chosen for this parameter. In addition, only one value of Low Resolution was selected (4x downsampling). Why was this number chosen? 
Please provide additional text to describe the justification of the choice of JPEG quality level and downsampling factor. In addition please consider the testing of a wider range of these parameters (for example JPEG quality settings higher and lower than 10 as well as downsampling factors of 2x and 8x). If it is not appropriate to test a wider variety of values for JPEG Compression and downsampling, please state the rationale clearly."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper is very well written. The authors supply sufficient detail on the proposed method to allow it to be correctly understood both on it's own and in the context of the prior art. The contribution is quite novel and although the edge fusion guidance module is motivated by the prior art, it is certainly not a trivial increment on the prior art and represents a new way of looking at the problem of low quality input images to radiance field training. The experimental section is quite strong, with comprehensive comparisons to the prior art and convincing improvements. The ablation studies are quite thorough, showing that the authors have put a lot of thought into the study and gone to considerable efforts to explore the work. The results of the ablation studies support the inclusion of each aspect of the two proposed modifications clearly. Conclusions are well-founded and justified by the experimental results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work is concerned with the improvement of 3D Gaussian Splatting-based radiance fields computed for images that have quality issues. In particular, blur, reduced resolution, compression artifacts, and noise. The authors present a proposed method with two key modifications over the prior art. The first modification is an edge fusion guidance module that merges semantic information with edge information to favor the representation of fine details in the final radiance field overcoming issues with the above distortions. The second key modification is the introduction of a structural cosine similarity loss that acts on the low frequency areas of the rendered images to ensure better representation of low texture areas of the radiance field."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Overall the work is strong, but there are a couple of areas of improvement. I found that certain figures contained unnecessary details or were difficult to read, while certain aspects of the explanations are unclear or seem contradictory. I also found that the analysis of compression artifacts was somewhat limited. In the \"Questions\" section of this review, I list these areas specifically and make suggestions for improvements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How to apply the colmap on low quality images. I think it's not accurate.\n2. What is M in line 265.\n3. Why do you downsample the I and E by 2?\n4. In Eqn. 3, authors used F'M, while in the above contents, authors used F'. What's the difference of them?\n5. No other layers after the fusion features but before the sigmoid?\n6. I guess the M represents the number of points, then how to get fusion features in dimension M?\n7. In the original 3DGS, there is a loss called D-SSIM loss. Does it help to emphasizes directional consistency in the low-frequency feature space? Why do you change it with SCS loss?\n8. The number of points is not fixed. 3DGS will split and clone points. How to you know how many M do you need?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The proposed method performs better than other compared SOTAs on 3D reconstruction form low-quality images. And the idea that learning a modulation for the position is good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors proposed a novel training strategy that can help to reconstruct 3D scenes from low-quality images."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I think the description of the paper is not clear. Please see my following questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024hqgs,\ntitle={{HQGS}: High-Quality Novel View Synthesis with Gaussian Splatting in Degraded Scenes},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=25Zlvl7JxW},\nnote={under review}\n}"
},
"abstract": {
"value": "3D Gaussian Splatting (3DGS) has shown promising results for Novel View Synthesis. However, while it is quite effective when based on high-quality images, its performance declines as image quality degrades, due to lack of resolution, motion blur, noise, compression artifacts, or other factors common in real-world data collection. While some solutions have been proposed for specific types of degradation, general techniques are still missing. To address the problem, we propose a robust HQGS that significantly enhances the 3DGS under various degradation scenarios. We first analyze that 3DGS lacks sufficient attention in some detailed regions in low-quality scenes, leading to the absence of Gaussian primitives in those areas and resulting in loss of detail in the rendered images. To address this issue, we focus on leveraging edge structural information to provide additional guidance for 3DGS, enhancing its robustness. First, we introduce an edge-semantic fusion guidance module that combines rich texture information from high-frequency edge-aware maps with semantic information from images. The fused features serve as prior guidance to capture detailed distribution across different regions, bringing more attention to areas with a higher concentration of Gaussian primitives. Additionally, we present a structural cosine similarity loss to complement pixel-level constraints, further improving the quality of the rendered images. Extensive experiments demonstrate that our method offers better robustness and achieves the best results across various degraded scenes. The source code and trained models will be made available to the public."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"3D Reconstruction",
"3D Gaussian Splatting"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8d3cad752849dae5728779dd0e123a6024355f64.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "HQGS: High-Quality Novel View Synthesis with Gaussian Splatting in Degraded Scenes"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
25j2ZEgwTj | How do students become teachers: A dynamical analysis for two-layer neural networks | main | Active | learning theory;over-parameterization;learning dynamics | learning theory | 5;6;6 | 2;3;3 | 3;3;3 | 3;3;3 | 2;4;4 | 5.666667 | 2.666667 | 3 | 3 | 3.333333 | 1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Besides the questions listed in the above, I am curious about the connection to [1], which also treats the three-stage convergence for regularized two-layer neural networks in the teacher-student settings.\n\n[1] Zhou, Mo, and Rong Ge. \"How Does Gradient Descent Learn Features--A Local Analysis for Regularized Two-Layer Neural Networks.\" arXiv preprint arXiv:2406.01766 (2024)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The teacher-student setting is one of the well-studied topics in deep learning theory literature, and treating teachers with multiple neurons is still lacking investigation. This paper tackles this critical problem and obtains certain results. The writing of this paper provides a detailed explanation of theoretical outcomes and their proofs, which makes the paper more accessible for readers to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors analyze the training dynamics of two-layer neural networks with ReLU activation in teacher-student settings, where both the teacher and student networks have multiple widths. Motivated by the analysis of (Xu and Du, 2023) for the teachers with single neurons, they provide a three-phase convergence framework, consisting of alignment, tangent growth, and local convergence, to the training, and finally obtain the global convergence guarantee with $O(T^{-1/3})$ local convergence, where $T$ is the number of iteration."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While this paper provides novel theoretical findings to the teacher-student settings literature, I have several concerns about them.\n\n- The first one is about the restriction of the teacher model. The authors impose several restrictions to the teacher model, such as orthogonality of each neuron and positivity of each coefficient of each neuron. Could the authors relax these assumptions? While the authors mention the orthogonality in the paper, is there any (possible) quantitative evaluation when the orthogonality does not hold? Moreover, I am curious about the accessibility to the case where both positive and negative teacher neurons exist.\n\n- The other one is the assumption of weak recovery, which the authors refer to in the conclusion. Although how the student neurons align to one of the teacher neurons is of interest, this assumption seems to impose this at initialization while the alignment phase still exists. Moreover, I could not find how $\\zeta$ in Assumption 1 can be small in the statements. Please correct me if there is anything I may have missed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In Fig. 2-3, bottom rows, it seems that in **all cases** the long time behavior of the loss is significantly slower than $T^{-3}$. Is this not in contradiction to the analytical results?\n\n- The authors write in line 58 about sample complexity, though it seems none of the bounds depend on $n$. Is sample complexity at all investigated in this work?\n\n- Is Assumption 3 (line 206) justified in a generic setting? It seems that if student neurons are initialized at random, the amount of student neurons that will be close to a given teacher neuron should be distributed binomially, and one should expect a small fraction of them to violate this assumption, no?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is overall well-written and the contribution is important, if true. I am not versed in the relevant literature and cannot attest for the validity of the proofs or derivations. I review the paper while accepting the claims of the authors in the main text at face value. The ACs should verify that the other reviewers can judge the content of the proofs.\n\nThe paper is very heavy on mathematical notation and all the \"juice\" is buried in the 30-page appendix. However, the authors provide intuitive informal explanations of the various theorems, which helps a lot in making the manuscript readable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a comprehensive analysis of training dynamics in a 2-layer teacher-student model with ReLU activation and iid Gaussian inputs. The authors prove that, under reasonable assumptions, learning has three distinct phases:\n1. alignment - each student neuron aligns with one specific teacher neuron, and not too many students cluster around the same teacher neuron\n2. tangential growth - student neurons grow in norm \n3. final convergence - when all students neurons are sufficiently alinged with their respective teacher neurons, the loss converges at a rate of $T^{-3}$."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- As I wrote above, the notation is elaborate and difficult to follow, and all the actual scientific content of the paper is in the appendix. \n- The paper would benefit a lot from a paragraph or two, preferably accompanied by a diagram, that summarizes the main results, showing the 3 phases and the processes that occur in each phase, the bounds for the duration of each phase and so on. To save space, the current Fig. 1 can be safely omitted IMHO.\n- On the same note, it seems that some of the notation is introduced but never used (e.g. $r_j$ in line 152) and others is used only once"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1.\tIn phase 2, the authors claim that the upper bound of the angle will increase, but this doesn’t necessarily mean that the angle itself will increase. Also, from the empirics, the angles appear to be monotonic. Will the angle actually increase, given that the authors state in line 337 that the angle is slightly larger than in Phase 1?\n\n2.\tIn phase 3, is it possible to derive the convergence rates for the angle and the norm as well? If so, which factor dominates the overall convergence rate? Understanding this would provide deeper insight into the dynamics of this phase.\n\n3.\tIn the numerical experiments, how is the boundary between phase 1 and phase 2 determined?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well written and clearly structured. The extension of previous results to the case of multiple teacher neurons is a notable advancement toward understanding the learning dynamics of neural networks. The finding that GD balances the student neurons corresponding to the same teacher neuron also provides insight into the implicit minimum-norm bias of GD. The dynamical-systems analysis that handles the interactions of multiple teacher and student neurons is also an important theoretical contribution of the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper theoretically investigates a two-layer ReLU network in the teacher-student setup. The authors manage to derive a global convergence rate of $O(T^{-3})$ for a multi-neuron teacher and a multi-neuron student. The proof follows a three-phase structure, and the authors develop techniques to handle the interactions of multiple teacher and student neurons."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major point:\n\n1.\tThe balancing result seems to rely heavily on the special initialization (a direct consequence of Assumption 3 and Lemma 1). The role of GD is mainly to preserve this balance throughout all three phases. While it’s interesting that GD can maintain the balance, the result feels somewhat limited due to the dependency on this initialization, making it seem more like a consequence of the setup than a profound discovery about GD itself.\n\nMinor points:\n\n1.\t$\\sigma$ is used for both the nonlinearity and the variance of the initialization. It would be better to avoid this notation overlap.\n\n2.\tThere appears to be a typo at line 189, where \"student neuron\" should likely read \"teacher neuron\".\n\n3.\tIn Theorem 3 (informal), $\\epsilon$ should be related to $\\zeta$ in Assumption 1, but the relation is not stated."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024how,\ntitle={How do students become teachers: A dynamical analysis for two-layer neural networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=25j2ZEgwTj},\nnote={under review}\n}"
},
"abstract": {
"value": "This paper investigates the fundamental regression task of learning $k$ neurons (a.k.a. teachers) from Gaussian input, using two-layer ReLU neural networks with width $m$ (a.k.a. students) and $m, k= \\mathcal{O}(1)$, trained via gradient descent under proper initialization and a small step-size. Our analysis follows a three-phase structure: alignment after weak recovery, tangential growth, and local convergence, providing deeper insights into the learning dynamics of gradient descent (GD). We prove global convergence of the excess risk to zero at a rate of $\\mathcal{O}(T^{-3})$. Additionally, our results show that GD automatically groups and balances student neurons, revealing an implicit bias toward achieving the minimum balanced $\\ell_2$-norm in the solution. Our work extends beyond previous studies in the exact-parameterization setting ($m = k = 1$, (Yehudai and Ohad, 2020)) and the single-neuron setting ($m \\geq k = 1$, (Xu and Du, 2023)). The key technical challenge lies in handling the interactions between multiple teachers and students during training, which we address by refining the alignment analysis in Phase 1 and introducing a new dynamic system analysis for tangential components in Phase 2. Our results pave the way for further research on optimizing neural network training dynamics and understanding implicit biases in more complex architectures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"learning theory",
"over-parameterization",
"learning dynamics"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e7bd09b5ce4d38d4d96dc73ee26abe05a5b84514.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "How do students become teachers: A dynamical analysis for two-layer neural networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |