Dataset schema (column name, dtype, observed length range / number of distinct values):

column                  dtype           values
id                      stringlengths   10–10 characters
title                   stringlengths   3–179 characters
track                   stringclasses   1 value
status                  stringclasses   3 values
keywords                stringlengths   2–2.39k characters
primary_area            stringclasses   21 values
author                  stringclasses   501 values
authorids               stringclasses   501 values
aff                     stringclasses   1 value
aff_domain              stringclasses   1 value
position                stringclasses   1 value
rating                  stringclasses   355 values
confidence              stringlengths   0–19 characters
soundness               stringclasses   642 values
contribution            stringclasses   596 values
presentation            stringclasses   782 values
rating_avg              float64         0–9
confidence_avg          float64         0–5
soundness_avg           float64         0–4
contribution_avg        float64         0–4
presentation_avg        float64         0–4
corr_rating_confidence  float64         -1–1
project                 stringclasses   1 value
github                  stringclasses   1 value
Review                  listlengths     2–10 items
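Per-review scores in the records below are stored as semicolon-separated strings (e.g. rating `"3;5;6;8"`), with the corresponding `*_avg` columns holding precomputed float64 means. A minimal sketch of how such a row could be parsed, assuming field names as in the schema above (the `parse_scores` helper is illustrative, not part of the dataset):

```python
def parse_scores(field: str) -> list[float]:
    """Split a semicolon-separated score string (e.g. "3;5;6;8") into floats."""
    return [float(s) for s in field.split(";") if s]

# values taken from the first record below
row = {"rating": "3;5;6;8", "confidence": "3;4;2;3"}

# recompute the *_avg columns from the raw score strings
averages = {f"{name}_avg": sum(scores) / len(scores)
            for name, scores in ((k, parse_scores(v)) for k, v in row.items())}
# rating_avg -> 5.5, confidence_avg -> 3.0, matching the stored float64 columns
```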
id: 0VP3LuzZ8K
title: Generalization of noisy SGD under isoperimetry
track: main
status: Active
keywords: generalization;langevin;non-convex;information theory
primary_area: learning theory
rating: 3;5;6;8
confidence: 3;4;2;3
soundness: 4;2;3;3
contribution: 2;2;3;3
presentation: 3;2;2;3
rating_avg: 5.5
confidence_avg: 3
soundness_avg: 3
contribution_avg: 2.5
presentation_avg: 2.5
corr_rating_confidence: -0.196116
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My main questions are about Section 6, as stated in the weaknesses part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Clear statement of setup and theoretical results.\n\n2. Detailed proof with several illustrations of proof steps via pictures.\n\n3. Previous results are clearly mentioned with detailed references.\n\n4. The results in this paper extend previous findings under convexity to weaker conditions, which is an important improvement." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the stability of SGLD, which implies generalization and differential privacy guarantees of SGLD. Instead of assuming strong convexity of the loss function, the authors demonstrate that stability results still hold under the dissipativity assumption. Technically, their result is established via verify the uniform LSI of SGLD outputs. Beyond the dissipativity assumption, they also establish a stability result via utilizing the regularizing properties of Gaussian convolution." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The writing in some parts is confusing, making it difficult to clearly understand the contribution in Section 6:\n\n1. In line 199, the authors state, \"we assume in the following that the Gibbs distribution with density proportional to exp(-Fn) satisfies the LS,\" but in Assumption 19, the authors seem to state this LSI assumption again. Is there any difference?\n\n2. In Section 6.1, the authors seem to claim two important preliminary results in Lemma 16 and 17 but don't explain how they affect establishing the main result.\n\n3. It seems that the results in Section 6 are established without verifying the uniform LSI. If so,I am wondering if the analysis template in Section 4 is only applied in Section 5 and whether it should be merged with Section 5. Moreover what is the main proof framework for establishing results in Section 6?\n\n\nOther minor writing problems\n\n1. In line 90, should it be \"the bound does not decay to zero\"?\n\n2. In lines 439, 452, 874, \"given in Theorem 12.\"\n\n3. In line 504, \"given in equation 8 and equation 9.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I'm not sure if I understood the results in Section 6, which seems to be adopting an entirely different style of analysis as compared to section 5, which helps in lifting the LSI assumption on the entire sequence of intermediate distributions to LSI on the Gibbs distribution corresponding to the loss function. It would help if this approach is explained more thoroughly to see the idea in there a bit more clearly.\n\nI'm open to increasing my score, especially if Section 6 has some good ideas that I might have missed." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is well-written and explains its ideas in a well-paced manner, introducing a bit of background, assumptions, and supporting theorems at convenient locations for the reader to follow.\n- The paper presents an example where non-convex loss can provide generalization guarantees that are non-vacuous in number of iterations.\n- The paper simplifies the expansive-contractive decomposition of SGLD steps used in related works for bounding information divergence." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores KL and Rényi divergence stability of Stochastic Gradient Langevin Dynamics (SGLD) algorithm. The main characteristic of the presented stability bounds is that they do not become vacuous with the number of iteration of SGLD, which is achieved by assuming log-Sobolev type isoperimetric inequality being satisfied, either throughout the stochastic process, or just by the steady-state Gibbs distribution that SGLD asymptotically approximates. 
Such isoperimetric properties have also been recently shown to provide rapid convergence in informational divergence as well as convergent DP properties. In a similar vein, the paper derives non-asymptotic and convergent generalization bounds for SGLD as well as bounds on Rényi DP under isoperimetric assumptions. Moreover, the paper shows that the isoperimetric assumption is satisfied under settings considerably milder than strongly-convex losses, such as under dissipative and smooth losses." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Non-analytical issues:\n\n- There is no comparison of the presented bounds with existing results in literature. The generalization bounds presented should be compared to both information-theoretic and non-information theoretic bounds under similar sets of assumptions.\n\n- Contrasting with lower bounds on stability is needed to assess gaps in the tightness of the analysis presented. If such an analysis proves difficult, a well-designed experimental evaluation to compare the generalization bounds with the actual generalization behaviour under the stated assumptions should have been included.\n\nTechnical comments:\n\n- Firstly, information-theoretic generalization bounds inspired by Xu and Raginsky seem to have an O(1/sqrt(n)) dependence on the dataset size, even in cases where other generalization approaches give a better O(1/n) bounds [1]. Since the bounds presented in this paper show dependence in the dataset size n only through Lemma 2 (by Xu and Raginsky), I believe the paper's generalization guarantees might have suboptimal dependence on n under the assumptions made.\n\n- Lemma 3 for conversion of Rényi DP to $(\\epsilon, \\delta)$-DP isn't the best known bound. 
[Theorem 21, 2] gives a strict improvement which is the best known conversion in my knowledge.\n\n- While Theorem 7 neatly presents the change in Rényi divergence under LSI after a single SGLD step, I believe this inequality might be loose, specially in the dependence on the order $q$ of Rényi divergence. That's because the paper slightly modifies the expansion-contraction template used in other prior works for simplicity. In [3] the expansion-contraction step seems to occur simultaneously, which yield a PDE that is better able to quantify the change in Rényi divergence when integrated over a single step.\n\n- In Section 5.1, the constant of LSI under convexity is dimension independent. But on relaxing strong convexity to dissipativity, the LSI constant has an exponential dependence O(e^d) on the dimension size. The paper further claims in line 418 that this dependence on dimension can't be improved without additional assumptions. To me, this seems like a major hurdle that greatly limits the applicability of the generalization bounds presented (both Corollary 14.1 and 15.1) as plugging in the $C_{LSI}$ constant of Theorem 12 gives an $KL(X_t\\Vert X'_t) = O(e^d)$ dependence on dimension $d$. \n\n\n[1] Haghifam, Mahdi, et al. \"Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization.\" International Conference on Algorithmic Learning Theory. PMLR, 2023.\n\n[2] Balle, Borja, et al. \"Hypothesis testing interpretations and renyi differential privacy.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020.\n\n[3] Chourasia, Rishav, Jiayuan Ye, and Reza Shokri. 
\"Differential privacy dynamics of langevin diffusion and noisy gradient descent.\" Advances in Neural Information Processing Systems 34 (2021): 14771-14781.\n\nCrafting examples of loss functions satisfying the assumptions made and computing a lower bound on how the KL or Rényi divergence changes with iterations seems doable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1) In the second paragraph under contributions, \"all the iterates of verify a uniform log-Sobolev inequality''. Should it be \"all the iterates verify a uniform log-Sobolev inequality''?\n\n(2) In Lemma 2, is the constant $c$ the same one from Assumption 1? If so, you should mention in the statement of Lemma 2 that you are assuming Assumption 1. For a related question, for each lemma and theorem, it would be really nice if the author(s) can make it more transparent which assumptions are used, especially because the paper contains quite many theoretical results in different settings which require different assumptions.\n\n(3) It would be nice if the author(s) can add some intuitive explanations about the half-step technique in the analysis. For example, when you split the Gaussian noise $N_{k+1}$ into $N_{k+1}^{(1)}$ and $N_{k+1}^{(2)}$, why the former becomes expansive, whereas the latter becomes contractive.\n\n(4) Assumption 14 seems to be a bit strange. 
If it is pseudo-Lipschitz, shouldn't it be small\nwhen $z$ and $z'$ are close to each other but I do not see $z$ and $z'$ appearing on the right hand side." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1) The paper is well written, and it is a very solid theoretical paper. \n\n(2) The bounds are uniform-in-time, obtained under Renyi divergence under dissipativity condition and KL stability without dissipativity.\n\n(3) A key ingredient in the proof is to show that under dissipativity, all the iterates verify a uniform LSI, which was previously shown only in the strongly-convex setting. This by-product resolves an open question in the literature." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies generalization of stochastic gradient Langevin dynamics (SGLD) via information theoretic bound. The author(s) obtained Renyi stability by assuming the iterates verify the log-Sobolev inequality (LSI). The author(s) further showed that the LSI indeed is satisfied under some dissipativity condition. Further results are obtained when dissipativity is not available, in which case KL stability can still be achieved. The bounds are uniform-in-time which are strong. A by-product is that the paper shows that under dissipativity, all the iterates verify a uniform LSI, which was previously shown only in the strongly-convex setting, that resolves an open question in the literature." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) Assumption 15 seems to be a really strong assumption. 
It would be nice if the author(s) can comment on whether this assumption is needed because of the proof technique or it might be unavoidable.\n\n(2) As the author(s) mentioned in the conclusion section, the dimension dependence is strong. But since the author(s) are working with non-convex setting, this is understandable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Isoperimetric vs. Log-Sobolev Inequality:\nThe paper mentions the use of the isoperimetric inequality, but the arguments seem entirely based on the log-Sobolev inequality (LSI). In probability theory, the isoperimetric inequality is usually considered a separate concept. Could this be a typo or an imprecise reference?\n\n2. What is \\Tilde{X}_k' in Theorem 5? Is it a typo, and should it be S_k instead?\n\n3. The $S_k$ is not carefully discussed. In SGLD, when drawing a batch of size $b$, could using a smaller batch size lead to a tighter bound on the KL divergence?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The problem studied is interesting and has practical relevance, with clear motivation provided.\nThe paper is well-structured, with different cases and scenarios analyzed in depth.\nThe results are extensively studied across various settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses stochastic optimization, where the goal is to minimize $F(x):=E_{Z\\sim \\nu}f(x;Z)$ for some underlying distribution $\\nu$.\nLet $D$ and $D'$ be two datasets, each consisting of $n$ i.i.d. samples from $\\nu$. Running noisy stochastic gradient descent (SGD) on these two datasets yields sequences $\\{X_k\\}$ and $\\{X_k'\\}$ respectively. It is known that the generalization error scales with the KL divergence between the distributions of $X_k$ and $X_k'$ .\n\nThis paper provides a time-independent upper bound on the KL divergence, even as $k\\to \\infty$. The authors first show that when the log-Sobolev inequality (LSI) holds, an upper bound on the KL divergence can be derived. They further demonstrate that under appropriate conditions, such as dissipativity, the distribution satisfies the LSI, thus leading to the desired bound." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main proofs rely heavily on existing work, like Theorems 6, 8, and 12.\n2. Some assumptions require further discussion. For instance, Assumption 15 seems restrictive in unbounded domains like $R^d$.\n3. In Theorem 12, the LSI constant scales exponentially with the dimension, which could be problematic for high-dimensional settings." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024generalization,\ntitle={Generalization of noisy {SGD} under isoperimetry},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0VP3LuzZ8K},\nnote={under review}\n}" }, "abstract": { "value": "We study generalization of iterative noisy gradient schemes on smooth non-convex losses. Formally, we establish time-independent information theoretic generalization bounds for Stochastic Gradient Langevin Dynamics (SGLD) that do not diverge as the iteration count increases. Our bounds are obtained through a stability argument: we analyze the distance between SGLD iterates on two datasets sampled from the same distribution. Our result only requires an isoperimetric inequality to hold, which is merely a restriction on the tails of the loss. We thus relax the assumptions of prior work to establish that the iterates stay within a bounded KL divergence from each other. Under an additional dissipativity assumption, we show that the stronger Renyi divergence also stays bounded by establishing a uniform log-Sobolev constant of the iterates. Without dissipativity, we side step the need for local log-Sobolev inequalities and instead exploit the regularizing properties of Gaussian convolution. These techniques allow us to show that strong convexity is not necessary for finite stability bounds and thus for finite generalization and differential privacy bounds." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generalization", "langevin", "non-convex", "information theory" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4b3bc60c70ff45d538fca902b4ebfe4d381f521e.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Generalization of noisy SGD under isoperimetry" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
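The `corr_rating_confidence` column appears to be the Pearson correlation between a record's per-review ratings and confidences; a pure-Python sketch (variable names are illustrative) reproduces the stored value for the record above:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ratings = [3, 5, 6, 8]      # "rating" field of the record above
confidences = [3, 4, 2, 3]  # "confidence" field
print(round(pearson(ratings, confidences), 6))  # -0.196116
```

The same computation on the next record's scores (ratings 5;5;6;6, confidences 4;3;5;4) gives 0.707107, matching its stored value as well.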
id: 0Wl6h2CZeJ
title: RealTracker: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos
track: main
status: Active
keywords: Point tracking;Optical flow;Motion estimation;Pseudo labelling
primary_area: applications to computer vision, audio, language, and other modalities
rating: 5;5;6;6
confidence: 4;3;5;4
soundness: 3;3;3;3
contribution: 3;2;2;3
presentation: 3;3;3;3
rating_avg: 5.5
confidence_avg: 4
soundness_avg: 3
contribution_avg: 2.5
presentation_avg: 3
corr_rating_confidence: 0.707107
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Overall, I think this is an interesting paper that focuses on an essential problem in the community, i.g., enabling existing TAP trackers to leverage real videos w/o annotations for training. The idea is somewhat incremental but effectively addresses an essential problem in a simple yet effective way. Thus my current rating is ``accept''. I would like to see more author rebuttal in terms of differences w/ existing pseudo label based approaches as mentioned above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper focuses on an interesting problem in the community, i.e., aiming to explore to train TAP models w/ real videos w/o annotations, since the previous approaches mainly focus on learning w/ synthetic datasets;\n- The proposed RealTracker shows that a simpler architecture and training protocols can outperform SOTA trackers like BootsTAPIR and LocoTrack;\n- The paper is well written and organized;" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a simpler and better point tracking approach by pseudo-learning real videos. 
Specifically, the proposed approach allows real videos without annotations to be used during training by generating pseudo-labels using off-the-shelf teachers. The proposed approach explores to use real video for training point tracking models w/o annotations. Moreover, the authors also study the scaling law to understand the impact of using more real training videos." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Using pseudo-labels for training trackers is well explored, e.g., for some online learning-based trackers like Dino-Tracker, it uses pre-computed optical flow which provides the pseudo ground truth pixel-level correspondences for online training the tracker. For DinoTracker3, pseudo-labelling is explored. Please illustrate more differences with these trackers for better highlighting the contributions;\n- Are there any specific concerns for choosing a teacher model for pseudo label generation? Does the better teacher model with higher tracking performance commonly lead to better tracking performance? Can a single teacher model well support the tracker learning?\n- In Table 2, the time of the per frame and per tracked point is shown. For the online variant, what’s the overall tracking speed (i.e., fps) given an online testing video?\n- Missing Refs for discussion. 
For completeness, please include more pseudo-label based tracker training approaches [1,2,3,4] for discussion in the related work.\n\n[1] Progressive Unsupervised Learning for Visual Object Tracking;\n\n[2] Unsupervised Learning of Accurate Siamese Tracking;\n\n[2] DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video;\n\n[3] CoTracker3: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos;" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The terminology \"self-supervised fine-tuning\" is indeed questionable in this context. Using state-of-the-art models from the same domain to generate pseudo-labels for supervision is more aligned with teacher-student learning or pseudo-labeling approaches rather than traditional self-supervised learning, where the supervision signals are typically derived from the data itself without external models.\n\n2. The incorporation of domain adaptation strategies during the fine-tuning process would have significantly enhanced the paper's contribution. This could have included techniques specifically designed to address domain shift and better align feature distributions between source and target domains." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper's motivation is well-justified, particularly in its approach to eliminate model redundancies, resulting in a more lightweight yet powerful architecture.\n2. The paper demonstrates effective utilization of unlabeled real-world datasets for training, achieving significant performance improvements through this approach.\n3. The experimental analysis is comprehensive, and the visualization results are particularly impressive in demonstrating the model's capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "1. The authors address the redundancy in modules of various existing point tracking models and propose RealTracker, a network with simplified architecture that achieves better performance and faster processing speed.\n\n2. The authors leverage existing models to generate pseudo-labels for real video data, enabling effective utilization of unlabeled videos for network fine-tuning, which further enhances performance.\n\n3. The authors analyze the impact of real data scale on the network model's performance, providing insights into the relationship between dataset size and tracking effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The methodology appears to be more engineering-oriented rather than theoretically innovative, primarily consisting of combinations and modifications of existing methods. The pseudo-label fine-tuning approach is relatively common. Given this is a **deep learning conference**, the technical contributions seem somewhat limited.\n2. As acknowledged in the limitations section, the model's improvement of performance is heavily dependent on the teacher model's capabilities. 
This strong reliance on existing methods' performance creates a ceiling effect where the training results are constrained by the teacher model's performance limits, potentially reducing the method's generalizability.\n3. The authors aim to bridge the domain gap using real-world dataset training. However, the paper lacks substantial technical innovation in terms of cross-domain adaptation techniques. The approach merely relies on real-data fine-tuning and teacher model voting effects for enhanced robustness, neither of which represents a significant contribution to the field of domain adaptation. More sophisticated cross-domain strategies or novel technical approaches would have strengthened the paper's contribution in addressing the domain gap problem." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "From Table 2, it appears that the training set does matter in the results, The methods training with Kub+15M performed on average better than the methods trained with Kub, please explain and elaborate. What is the difference?\nWhy does the offline method perform better than the online method, Intuitively I would assume the opposite?\nWhat are the limitations and failure cases?\nTable 6, why does SIFT turn on the best results?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper incrementally builds upon point trackers by producing a better approach that leverages other point trackers to produce supervised training data. In the past other trackers have used synthesized data however this is all based on real data. The results seem to better than other point trackers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "the approach leverages other point trackers to produce training data for their point tracker. Supposedly less additional training data is required compared to other point trackers. The biggest contribution is that the other trackers use real data and not synthetic data for training. Other approaches in the past have typically used point data for tracking." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It is not clear on what type of motions were tested, if parallax for motion is required, what about zooming like motions with no parallax, does the method work.\nWhat % of occlusion in terms of coverage of the object and in terms of time occluded were not clearly tested.\nThe limitations and failure cases of the algorithm were not explored." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please follow the weakness. If the issues are addressed, I will improve the rating." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. RealTracker combines valuable ideas from the recent state-of-the-art point trackers and eliminates some unimportant modules.\n2. RealTracker proposes a simple semi-supervised training protocol and achieves better results on several public datasets compared to state-of-the-art trackers.\n3. RealTracker explores the training scaling low via its proposed training protocol." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the RealTracker, a point tracker that combines several ideas from other related trackers but eliminates some components and simplifies others. RealTracker also designes a semi-supervised training protocol, where real videos are annotated utilizing several off-the-shelf trackers. With this protocol, RealTracker can achieve encouraging results on the Kinetics, RGB-S, and DAVIS datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The idea of using trackers to annotate unlabeled datasets, such as [1], is not new.\n2. The authors should use the Kub+15M data to train the CoTracker and TAPTR and verify the proposed method's effectiveness.\n3. To prove the effectiveness of the RealTracker, it is suggested that confidence and visibility be visualized.\n4. 
More ablation studies are suggested to verify that eliminating and simplifying some modules from the listed trackers is useful, in terms of both computation cost and tracking performance.\n\n[1] Muller M, Bibi A, Giancola S, et al. Trackingnet: A large-scale dataset and benchmark for object tracking in the wild[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 300-317." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a simple point tracking architecture, RealTracker, along with a pseudo-labelling protocol to improve point tracking models. We outperform BootsTAPIR, the state-of-the-art point tracking model, while using 1000x less real data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024realtracker,\ntitle={RealTracker: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0Wl6h2CZeJ},\nnote={under review}\n}" }, "abstract": { "value": "Most state-of-the-art point trackers are trained on synthetic data due to the difficulty of annotating real videos for this task.\nHowever, this can result in suboptimal performance due to the statistical gap between synthetic and real videos. In order to understand these issues better, we introduce RealTracker, comprising a new tracking model and a new semi-supervised training recipe. 
This allows real videos without annotations to be used during training by generating pseudo-labels using off-the-shelf teachers.\nThe new model eliminates or simplifies components from previous trackers, resulting in a simpler and smaller architecture.\nThis training scheme is much simpler than prior work and achieves better results using 1,000 times less data.\nWe further study the scaling behaviour to understand the impact of using more real unsupervised data in point tracking.\nThe model is available in online and offline variants and reliably tracks visible and occluded points." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Point tracking", "Optical flow", "Motion estimation", "Pseudo labelling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f165a50bd890a7ae69459dbc2afd80de7fa6ded6.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/95ccb3ac9b89574da84f5b498acaf51f3d732567.zip" }, "title": { "value": "RealTracker: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0WqAnYWi7H
Mitigating Distribution Shifts: Uncertainty-Aware Offline-to-Online Reinforcement Learning
main
Active
Reinforcement learning;Out-of-distribution detection;Uncertainty estimation;Offline RL
reinforcement learning
3;3;3;5;6
4;5;3;3;4
2;2;3;3;3
2;2;2;3;3
3;3;3;2;3
4
3.8
2.6
2.4
2.8
-0.211289
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Typo: Algo 1 line 7, should it be \"OOD\" instead of \"ODD\"?\n2. How do you define \"progressively expanding the randomization range\" for different environment parameters? More specifically, increasing friction by 1% and increasing the agent's mass by 1% may have vastly different impacts on the task difficulties. Could you discuss more on the relative impact of changing each parameter to the environment difficulty?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The overall method is well-motivated and clearly stated.\n2. Weighting samples in the normal dataset and repulsive dataset differently is intuitive and is demonstrated to be effective empirically.\n3. Using a set of critiques and its variances as a measure of environmental uncertainty explore new possibilities from the existing DENN method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel pipeline to detect OOD environment variations and gradually fine-tunning the agent until high confidence safe deployment is possible." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The discussion of repulsive locations could be written more formally and thus more concisely; the current version is a bit too dense verbally. I'm also a bit confused by Figure 2: while it's a nice visual, is it derived from the experiments or just a conceptual illustration?\n2. Lack of related work: changing environment parameters to achieve repulsive locations is quite related to the literature on curriculum learning. A good survey to start with is https://arxiv.org/pdf/2003.04960. Also, blindly varying the environmental parameters may lead to unexpected harmful environments dampening the agent's training: https://openreview.net/forum?id=hp4yOjhwTs&noteId=vZMeHQbnJK \nI would suggest the authors add a subsection on curriculum reinforcement learning in the related work for a more thorough introduction to the problem background." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The problem raised in this paper is important and the experiments are solid." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a novel RL pipeline, Uncertainty-aware Adaptive RL (UARL), has been proposed to enhance policy generalization across diverse variations of a given environment. UARL frames distribution shifts as OOD issues and integrates a new OOD detection method to quantify uncertainty. This method enables iterative policy fine-tuning, beginning with offline training on a limited state space and gradually expanding to more diverse variations of the same environment through online interactions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weakness:\n\n1) The core problem in this paper, i.e., distributional shift, and many important concepts are not discussed in detail. In the introduction, the authors discuss about the theoretical shortcomings of robust RL, safe RL, and the distributional shift in offline2online RL, but they ignore the discussion about the relationships between these concepts. For example, what is the relationship between the robust RL and the distributional shift in offline setting? Furthermore, what is the difference between the distributional shift problems in offline RL and offline2online RL settings? Why the proposed method could successfully solve the problem of distributional shift? Please answer this question from a high-level view.\n\n2) In offline (to online) RL, the uncertainty quantifier is defined clearly as the upper bound of the error produced by the empirical Bellman operator (see [1], Eq.(4.1)). Then we may concern that whether the uncertainty defined in this paper's Eq.(5) has the relationship with the uncertainty quantifier as we have known in [1]? Does it a valid uncertainty quantifier theoretically? The author should discuss about this point.\n\n[1] Jin. 
et al., Is Pessimism Provably Efficient for Offline RL.\n\n3) In offline RL, there have been many uncertainty-aware methods to deal with the distributional shift problem, such as [2] and [3]. The two listed works both penalize OOD actions via constructed uncertainty quantifiers. So, in our view, the method in this work is not beyond the scope of these methods, and the paper lacks sufficient discussion of its advantages over the existing uncertainty-aware methods.\n\n[2] Bai et al., Pessimistic bootstrapping for uncertainty-driven offline reinforcement learning.\n[3] Sun et al., Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning.\n\n4) The convergence property of the proposed algorithm should be discussed, especially line 8 in Algorithm 2: what if this condition is never violated?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Comparing the computational overhead of your method with that of baseline algorithms would strengthen your work. Could you include this information to provide a clearer understanding of its efficiency?\n\n2. Does this algorithm fall within the scope of Offline Reinforcement Learning? If so, it would be helpful to clarify its placement within the Offline Reinforcement Learning landscape. 
Enhancing the abstract and introduction to better position the algorithm within this broader context would significantly improve the clarity and impact of your paper.\n\nI am open to raising my score based on these improvements." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- UARL presents a compelling approach to address the challenges of deploying a policy in RL. The progressive expansion of the state space via repulsive locations and a balanced replay buffer to manage data distribution shifts are novel and theoretically sound.\n- The usage of an ensemble of diverse critics to perform OOD detection and policy refinement represents a robust methodology that is supported by the experimental results" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Uncertainty-aware Adaptive RL (UARL), an innovative framework to tackle distributional shifts and out-of-distribution (OOD) issues when deploying reinforcement learning (RL) policies in real-world environments. This is accomplished by implementing OOD detection to quantify policy uncertainty and iteratively refine high-uncertainty regions (of the state space), adapting the policy for safe and effective deployment. UARL demonstrates several notable advancements:\n- A method for quantifying policy uncertainty using OOD detection.\n- An offline-to-online (O2O) adaptation strategy that balances online and offline data, utilizing a diverse ensemble of critics to better handle distributional shifts.\n- Experiments on MuJoCo continuous control tasks that validate UARL’s effectiveness in terms of performance, robustness, and sample efficiency." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper could better highlight its unique contributions compared to existing OOD and ensemble-based offline RL methods. A clearer differentiation of UARL's specific advancements would help underscore its novelty within the landscape of similar approaches.\n\n- The experimental validation, limited to few environments such as the Ant-v4 and HalfCheetah-v4 environments, may not fully capture the method’s effectiveness across a diverse range of tasks. Extending the experiments to include more varied environments would provide a more comprehensive assessment and enhance the generalizability of the results.\n\n- A comparison with recent state-of-the-art methods, such as PBRL[1], RORL[2], would strengthen the empirical evaluation. By benchmarking UARL against PBRL and similar approaches, the paper could provide a more robust validation of its improvements in uncertainty handling and performance stability.\n\n[1] Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning\n[2] RORL: Robust Offline Reinforcement Learning via Conservative Smoothing" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please c.f. the comments above. 
Besides, I have the following questions:\n\n- In Lines 184-185, you wrote, *disagreement among ensemble models, particularly at the boundary of the training distribution*; what exactly do you mean by *at the boundary of the training distribution*?\n- what are the advantages of the diversity term in Equation 5 compared to other diversity terms (e.g., the diversity term used in the EDAC paper)? The authors ought to justify the advantages of this choice over other methods.\n- how can the authors tell that the uncertainty measurement provided in this paper is valid? It would be better to compare against some other uncertainty estimation methods and visualize the uncertainty measurement for a better comparison.\n- do you have any parameter study on the threshold parameter in Algorithm 2? How does it affect the performance of the agent? Do we need to tune this hyperparameter per task? How can we ensure that the policy is safe when $V_Q \\le {\\rm threshold}$?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "## Pros\n\nThis paper enjoys the following advantages:\n\n- This paper is well-written. The presentation of this paper is very good and of high quality. The figures are very nice and helpful. Some of the illustration figures significantly convey the idea and core design of UARL, e.g., Figure 1 and Figure 2\n- This paper is easy to read and easy to follow\n- The authors provide open-source code on the anonymous website, and I believe that the results reported in this paper are reproducible" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper deals with the distribution shift issue in reinforcement learning (RL). 
The authors introduce an approach called Uncertainty-aware Adaptive RL (UARL) that enhances policy generalization across diverse variations of a given environment. UARL views distribution shifts as OOD problems and integrates an OOD detection method to quantify uncertainty, i.e., Q-value variance. UARL realizes diversity in critics via the DENN method. The authors claim that UARL enables iterative policy fine-tuning, starting with offline training on a limited state space and progressively expanding to more diverse variations of the same environment through online interactions. The authors demonstrate the effectiveness of UARL through some experiments on continuous control tasks, showing improved performance and sample efficiency compared to existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## Cons\n\nDespite the aforementioned advantages, this paper has the following flaws.\n\n- **Offline-to-online RL or off-dynamics RL?** This paper claims to focus on the offline-to-online setting, while it seems to actually be dealing with off-dynamics RL [1]. Offline-to-online RL typically refers to training a policy offline from a static dataset and then fine-tuning it with some extra online interactions in the same environment. Instead, the authors conduct experiments by modifying the environmental parameters, which essentially introduces dynamics shifts. The experimental setting resembles that in off-dynamics RL [2,3,4]. It seems unclear to me whether it is suitable to frame the paper as *offline-to-online* or *off-dynamics*.\n- **Insufficient related work and limited novelty.** The authors emphasize that the proposed method can enhance the safety and robustness of RL, but the paper includes too few related works on safe offline/offline-to-online RL and robust offline/offline-to-online RL. Meanwhile, I have doubts about the novelty of this work. 
The authors progressively increase the hyperparameter randomization of the environment (e.g., friction) when the variance within the Q-ensemble is large and terminate once the policy is safe enough to be deployed. Such an idea resembles [5], which progressively adapts its learned policy by modifying the parameters of the environment. Furthermore, I am a bit confused about the benefits of parameter randomization over domain randomization. If the user can adjust the parameters of the environment, then why not directly use domain randomization? I would expect reasonable justifications for the design of the tasks here. Furthermore, the diversity term is not novel; it is borrowed directly from the existing literature. All of these together make the contribution of this paper somewhat limited.\n- **Lacking baseline algorithms.** As commented above, this paper claims that it addresses offline-to-online RL but actually focuses on the off-dynamics RL setting; the authors should include the following baselines:\n - baselines on off-dynamics RL, e.g., [2,3,4]. This is vital to show the effectiveness of UARL in terms of policy generalization to the target domain\n - RLPD [6], which is a method specially designed for learning with offline data and online interactions with the environment. This baseline is important since it exhibits superior performance given the experimental setting described in this paper (offline data and online interactions). Based on my experience, RLPD can achieve quite strong performance even when there exist dynamics shifts between the offline data and the online environment. Involving this baseline can justify the necessity of the components adopted in UARL (otherwise, one can directly use RLPD for deployment)\n - baselines on safe RL and robust RL. 
The authors claim that UARL can enhance the safety and robustness of the executed actions, while they do not include any safe RL or robust RL methods for comparison, making it hard to see the rationality and effectiveness of UARL\n - baselines on offline-to-online RL. Unfortunately, this paper also does not include offline-to-online RL methods as valid baseline methods. It is hard to tell the true effectiveness of UARL without these methods, e.g., [7,8,9]\n- (minor) **Lacking theoretical justifications.** There is no theoretical analysis of the UARL. I do not want to blame the authors too much on this point. I understand that this paper may set the focus mainly on the empirical side, but including some theoretical analysis can strengthen this paper.\n- (minor) **Other issues.**\n - in Equation 5, you wrote $R(s,a)$ in the bellman error, while $r$ in the diversity term $\\mathcal{L}_{div}^{RL}$. I think they should be identical, right?\n - the authors do not discuss the limitations of their method in the main text or the appendix. It is important to acknowledge both the advantages and the limitations of the proposed method. \n - the performance improvement of UARL seems limited and incremental on some tasks (e.g., see Figure 3)\n - UARL can still suffer from performance degradation during the fine-tuning phase (e.g., see Figure 5)\n\nGiven the above concerns, I vote for rejection since I believe that this paper needs a significant revision before being accepted for possible publication.\n\n[1] Off-dynamics reinforcement learning: Training for transfer with domain classifiers. ICLR\n\n[2] When to trust your simulator: Dynamics-aware hybrid offline-and-online reinforcement learning. NeurIPS\n\n[3] Cross-domain policy adaptation via value-guided data filtering. NeurIPS\n\n[4] Cross-domain policy adaptation by capturing representation mismatch. ICML\n\n[5] Revolver: Continuous evolutionary models for robot-to-robot policy transfer. 
ICML\n\n[6] Efficient online reinforcement learning with offline data. ICML\n\n[7] Offline-to-online reinforcement learning via balanced replay and pessimistic q-ensemble. CoRL\n\n[8] Bayesian Design Principles for Offline-to-Online Reinforcement Learning. ICML\n\n[9] Proto: Iterative policy regularized offline-to-online reinforcement learning. Arxiv" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. According to Eq. 5, $R(s, a)$ is not sampled from the dataset $\\mathcal{D}$, but the general Q-function update for offline RL, as in Eq. 1, uses the sampled $r$. Is this a mistake?\n2. Is there a performance or computational advantage of UARL over directly processing the $E_\\omega$'s with a robust RL algorithm or an algorithm with domain randomization techniques? Can this be illustrated experimentally?\n3. Notice that in offline training (1st iteration), EDAC performs much worse than CQL and TD3+BC in many environments, which does not seem to match the experimental results in the EDAC article.\n4. The experiments in this paper were all performed in MuJoCo; how do we obtain the real-world demonstration dataset $\\mathcal{D}_\\omega$ in a simulation environment like MuJoCo?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.This paper propose a novel OOD detection method and an iterative online policy fine-tuning training framework.\n2.Good experimental results are obtained on Mujoco environments with randomized environmental hyperparameter, verifying the validity of the method.\n3.The writing is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel approach aimed at solving the OOD problem faced by deploying reinforcement learning strategies in real-world scenarios when the distribution of training environments is shifted. The proposed approach tackles this issue by adopting a new diversity ensemble Q-network approach to OOD detection. Furthermore, the method incorporate an iterative policy fine-tuning method that starts with offline training in the original environment and gradually scales up to more stochastic environments through online interactions. Experimental results show that this approach outperforms the Baseline algorithm in Mujoco environments with randomized environmental hyperparameter and typically requires fewer samples to converge." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It should be further shown, either theoretically or experimentally, why the diversity term $\\mathcal{L}^{\\text{RL}}_{\\text{div}} $ in Eq. 5 allows the ensemble Q network to learn the value of diversity. Intuitively, minimising $\\text{exp}(-\\Vert Q_i(s, a) - (r + Q_i(s^\\prime, a^\\prime)) \\Vert^2/2\\delta^2)$ will allow the Q network to not converge quickly to a certain value on the repulsive dataset, but it does not guarantee that the ensemble Q network learns diverse values.\n2. 
Further clarification is needed for how to calculate $V_Q$ in Algorithm 2 and how to calculate critical variance in uncertainty estimation experiments.\n3. The ablation experiments in Appendix B.3 are not detailed enough. The training curves for different parameter combinations should be differentiated to illustrate the algorithm's parameter sensitivity to $\\lambda$ and $\\delta$ during training.\n4. For each $E_i$, the randomized environmental hyperparameter range is determined without a common metric but as a hyperparameter, which may require a lot of time for online tuning for complex scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mitigating,\ntitle={Mitigating Distribution Shifts: Uncertainty-Aware Offline-to-Online Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0WqAnYWi7H},\nnote={under review}\n}" }, "abstract": { "value": "Deploying reinforcement learning (RL) policies in real-world scenarios faces challenges due to distribution shifts from training environments. Past approaches have shown limitations such as poor generalization to out-of-distribution (OOD) variations or requiring extensive retraining on new data. We propose Uncertainty-aware Adaptive RL, UARL, a novel RL pipeline that enhances policy generalization across diverse variations of a given environment. UARL frames distribution shifts as OOD problems and incorporates a new OOD detection method to quantify uncertainty. This approach enables iterative policy fine-tuning, starting with offline training on a limited state space and progressively expanding to more diverse variations of the same environment through online interactions. 
We demonstrate the effectiveness and robustness of UARL through extensive experiments on continuous control tasks, showing improved performance and sample efficiency as well as reliability in OOD detection compared to existing methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reinforcement learning", "Out-of-distribution detection", "Uncertainty estimation", "Offline RL" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ca50c0b6885834141e7e8abca8ffa54cab3f4a7c.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Mitigating Distribution Shifts: Uncertainty-Aware Offline-to-Online Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0XT3Lg6S2Q
Efficient Adaptive Filtering for Deformable Image registration
main
Active
Deformable image registration;Adaptive filtering;Bilateral Grid;Piece-wise Smooth
interpretability and explainable AI
3;5;5;6
4;3;4;3
2;2;3;3
3;3;2;3
2;2;2;3
4.75
3.5
2.5
2.75
2.25
-0.688247
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N.A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the above strengths and weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The motivation behind this paper is reasonable. By analyzing daily CT and MRI scans in the cardiac and abdominal regions, the authors observed two consistent patterns across certain subjects, leading to the formulation of the Piece-wise Smooth (P-S) Assumption. This assumption leverages physical priors from observed medical image patterns, which is both innovative and plausible, enhancing neural network-based registration tasks by grounding them in realistic assumptions about medical image structures.\n2. The paper provides thorough comparative experiments. The authors test AdaWarp on two registration datasets spanning different modalities and input constraints, which demonstrates robustness and broad applicability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper leverages prior knowledge observed in medical images to introduce the Piece-wise Smooth (P-S) Assumption as a basis for addressing medical image registration tasks. 
Specifically, the authors propose AdaWarp, a warping method that utilizes learnable adaptive filtering to register medical scans in line with the P-S assumption. By employing a low-resolution latent representation along with a differentiable bilateral grid, the method achieves a better balance between accuracy and efficiency. Experiments conducted on two registration datasets validate the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of this paper does not seem particularly strong. While the method leverages an encoder to extract a latent representation that approximates the deformation field at a low resolution, this approach mainly contributes to the model's efficiency but is not unique. The use of latent feature representations for similar tasks has already become common in the field.\n2. The core of AdaWarp is a differentiable bilateral grid, which naturally incorporates the P-S prior. In implementation, the guidance map aids in processes like splatting, blurring, and slicing. This incremental modification lacks sufficient novelty." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Figure 4 and Figure 5, why not include VoxelMorph into comparison? 
VoxelMorph is the most widely-benchmarked method and has high efficiency with a low number of parameters. \n2. There is a recent registration study in CVPR (CorrMLP, Meng et al. 2024), which is based on a motivation that directly conflicts with this paper’s. CorrMLP attempted to capture long-range dependencies among full-resolution image details in an efficient approach (using MLPs), while this paper suggests that only low-resolution features are sufficient. So, it’s interesting to compare with CorrMLP: did the proposed method achieve similar registration accuracy while substantially reducing computational complexity?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed method bridges the gap in the existing literature focusing on the balance between registration accuracy and computational efficiency, and is capable of enforcing global smoothness while respecting local discontinuities. This paper was well written, with a very clear description of the methodology." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a learning framework that improves the accuracy-efficiency trade-off in medical image registration by leveraging the piece-wise smooth prior. The proposed method was evaluated on two medical image datasets involving cardiac MRI and abdomen CT images. This method transforms the deformable registration problem into a keypoint detection task and shows potential for segmentation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The major concern is the research focus of this study, which might not be of sufficient significance in the field of medical image registration. 
After the introduction of deep learning-based registration methods, e.g., VoxelMorph, existing methods have become very fast, allowing real-time registration using GPUs. Under these circumstances, only a few studies have specifically focused on improving efficiency, which suggests that this topic might not be a pressing problem in the community. \n2. Another concern is the generalizability of the P-S assumption. In the study, this assumption was exemplified and evaluated with cardiac MRI and abdomen CT images, where there are not many complex anatomical structures or local deformations. It’s important to evaluate the proposed method on the well-benchmarked brain MRI registration tasks, in which the P-S assumption may fail." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I encourage the authors to consider addressing as many of the points highlighted in the weaknesses section as possible. Additionally, while the paper presents an intriguing and novel approach, the clarity and quality of the presentation could benefit from further refinement." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper presents AdaWarp, a novel method that integrates the piece-wise smoothness assumption, enforcing global smoothness while respecting local discontinuities, into a learning framework that strikes a balance between complexity and accuracy. \n\nMoreover, it demonstrates connections between the adaptive filtering approach and self-attention.\n\nThe experiments on two challenging registration tasks, cardiac and inter-subject abdominal registration, demonstrate that AdaWarp outperforms existing methods in accuracy-efficiency and accuracy-smoothness tradeoffs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel method to utilise prior knowledge (the piece-wise smooth assumption) to enhance learning-based registration, striking a balance between computational complexity and accuracy. The performance is evaluated on a cardiac and an abdominal dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although I believe that the paper attempts to bridge a gap in the literature by incorporating a differentiable bilateral grid within a learning-based registration framework, I would like to point out several weaknesses and raise some questions regarding the experiments.\n\n[A] I would like to invite the authors to elaborate on this statement regarding iterative optimization-based methods: “As a result, these approaches tend to be time-consuming and lack the ability to incorporate contextual information effectively.”\n\n[B] “While high-dimensional filtering can project signals onto arbitrary spaces, we focus on extending by one additional dimension to account for the object boundary.”\n\nWhat is the intuition behind this approach? 
Is only one additional dimension sufficient? I would like to invite the authors to further elaborate and explain their choice.\n\n[C] The role of the guidance map generator component is unclear. Could the authors please explain why this component is used or needed?\n\n[D] Could the authors clarify whether the same lambda values are used for all methods or if different values are applied? How were these values tuned? Were they also tuned for the baselines?\n\n[E] The proposed method utilizes a diffeomorphic transformation model; however, it is not clear whether the baselines follow the same principle. Could the authors provide a table that explicitly lists the hyperparameters used by each of the baselines along with the transformation model?\n\n[F] The authors chose different baselines for the two datasets, which is puzzling. What is the intuition behind this decision? Is there a reason why this approach was chosen?\n\n[G] The paper presents t-tests for DICE scores but not for other metrics. Is there a reason for this choice? Could the authors extend their t-tests to cover HD95 as well?\n\n[H] “Learning-based methods generally outperform traditional ones in registration accuracy, though with slightly higher SDlogJ.”\n\nDo the authors have any intuition as to why this is the case? Normally, I would expect that iterative optimization methods achieve higher accuracy [1].\n\n[1] Hansen, L. and Heinrich, M.P., 2021. Revisiting iterative highly efficient optimization schemes in medical image registration. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part IV 24 (pp. 203-212). Springer International Publishing.\n\n[I] For the abdominal dataset, the proposed method uses Convex Adam’s framework with the same segmentation model as a feature extractor. Is there any reason for this choice? Could the model be trained from scratch? 
Could the authors elaborate on the design choices, including why the architecture differs depending on the dataset?\n\n[J] The code is not available. Are the authors planning to make their code publicly accessible?\n\n[K] Due to the lack of ground truth, registration is evaluated quantitatively with surrogate measures. However, to ensure the registration’s success, it is common practice to inspect the resulting transformed images qualitatively as well. I would like to invite the authors to provide qualitative results for both datasets, as this would substantially strengthen their claims." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why were different model structures used for different datasets? What would be the result of using Ada-Res on the Abdomen CT dataset and Ada-Cost on the ACDC dataset? A comparison of these model structures across datasets could help demonstrate their generalizability and clarify why different architectures were chosen for each.\n\n2. In the Abdomen CT dataset, Ada-Cost uses “the same segmentation model for feature extraction.” Was this segmentation model pre-trained? If so, this would make Ada-Cost a semi-supervised registration model. Comparing it with other unsupervised deep learning-based methods would be unfair. Additionally, how exactly was the segmentation model integrated into your model’s structure? 
Does it replace the \"guidance map generator,\" or is it incorporated elsewhere in the architecture?\n\n3. More references, more baselines, and visual evaluations of warped images and warped segmentation masks would be highly valuable. Providing such visual results would help demonstrate the effectiveness of your method in producing sharp boundaries, which cannot be fully illustrated through numerical metrics alone.\n\n4. I would greatly appreciate it if the paper could provide information on the inference and training time of the proposed method. This data would offer more valuable insights into the computational efficiency of the model.\n\n5. Another concern is that the authors selected \"interpretability and explainable AI\" as the Primary Area. I’m not sure if this is appropriate since there is no work on the interpretability of the proposed method." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The integration of the differentiable bilateral grid into the deep learning framework for image registration is highly innovative. It effectively addresses the limitations of traditional smoothness constraints, enabling the model to better handle complex and localized deformations.\n\n2. The paper is well-structured, offering a clear explanation of the proposed methods. It provides detailed descriptions of the differentiable bilateral grid, encoder architecture, and adaptive filtering process. Visual aids, such as Figures 4 and 5, are particularly useful in clarifying complex comparisons.\n\n3. This method presents a promising alternative for resolving the conflict between global smoothness and local deformations, potentially offering improved solutions in certain applications." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents AdaWarp, a novel architecture in medical image registration. The model introduces a piece-wise smooth (P-S) assumption, which exploits the smoothness of intensity variations within anatomical regions while preserving sharp boundaries between organs. This assumption is incorporated into the network through a differentiable bilateral grid, which allows for efficient edge-preserving filtering and reduces computational complexity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weaknesses of the paper are primarily in the literature review and experimental sections, which lack sufficient references and baseline comparisons, as well as visual results. These limitations are why I rated the paper as \"fair\" in terms of Presentation and Soundness.\n\n\n1. The paper needs more references in the literature review. The current review only discusses works that do not address the conflict between global smoothness and local deformations. However, this is not the first paper to tackle this problem. Research such as multi-scale registration and patch-wise registration also offers relevant solutions. While these methods may not explicitly incorporate the piece-wise smooth prior, they still manage local deformations while maintaining overall smoothness. The authors should include these references in the background and select baselines from this body of work to show that the proposed method offers a superior solution to the problem.\n\n2. The experiments do not adequately support the claimed advantages of the proposed method. While the paper argues that the model can generate sharp boundaries between organs by incorporating the P-S assumption, it fails to provide visual results to substantiate this key contribution. 
Relying solely on numerical metrics like Dice, HD95, and SDlogJ does not clearly demonstrate that the model’s output preserves sharp boundaries.\n\n3. The writing in the experiments section is somewhat disorganized. The authors employ significantly different model structures and training strategies, including both unsupervised and semi-supervised approaches (which require further clarification), depending on the dataset. This inconsistency raises concerns about the generalizability of the model across different tasks. Additionally, the experiments lack ablation studies, which are necessary to demonstrate the effectiveness of each component in the proposed methods." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel efficient model for medical image registration using differentiable bilateral grid" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024efficient,\ntitle={Efficient Adaptive Filtering for Deformable Image registration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0XT3Lg6S2Q},\nnote={under review}\n}" }, "abstract": { "value": "In medical image registration tasks where targets exhibit piece-wise smooth structures, a well-designed low-resolution data structure can approximate full-resolution deformation fields with minimal accuracy loss. \nThis physical prior, though absent in current literature, can be integrated into neural networks to enhance registration. \nIn this paper, we propose AdaWarp, a novel architecture that leverages the prior for more efficient medical image registration. \nAdaWarp consists of an encoder, guidance map generator, and a differentiable bilateral grid, introducing an edge-preserving low-frequency approximation of the deformation field. 
\nThis approach reduces computational complexity without sacrificing accuracy.\nExperiments on two registration datasets covering different modalities and input constraints demonstrate that AdaWarp outperforms existing methods in accuracy-efficiency and accuracy-smoothness tradeoffs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Deformable image registration", "Adaptive filtering", "Bilateral Grid", "Piece-wise Smooth" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1ee3573d6953e189b620bc635ad895fbe75d3d12.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Efficient Adaptive Filtering for Deformable Image registration" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0Xc6o1HKXD
Multi-Perspective Test-Time Prompt Tuning for Global, Local Visuals, and Language
main
Active
Prompt Learning;Test Time Adaption;Vision-Language Models
applications to computer vision, audio, language, and other modalities
3;3;5
4;5;4
2;2;3
2;2;2
2;1;3
3.666667
4.333333
2.333333
2
2
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Intuitively, local patch features do not align with text features and therefore cannot be directly utilized, as studied in [1]. Could the authors provide more discussion or visualizations to illustrate this aspect?\n\n[1] A Closer Look at the Explainability of Contrastive Language-Image Pre-training." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The motivation to use local features is intuitive, as they may contain important details that can enhance model performance.\n\n2. The paper conducts extensive experiments, including comparisons of MP-TPT on two representative benchmarks and ablation studies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the topic of test-time prompt tuning (TPT), and propose MP-TPT. MP-TPT introduces local patch features as additional visual augmentations, which may be crucial for classification. Additionally, it leverages local visual features to enhance text feature descriptions. Extensive experiments demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
**Limited novelty and contribution**: The concept of using local features to enrich image features and image counterparts to enhance text features has already been proposed in [1]. The primary difference in this paper is the implementation of this idea in a test-time prompt tuning scenario. Surprisingly, [1] is not cited or discussed in this paper.\n\n2. **Clarity and organization**: The paper is difficult to follow due to disorganized writing, confusing formulas, and figures and tables that are not self-contained. This impacts its suitability for ICLR acceptance. I list some below:\n\n 1. Line 225: The term \"local visual representation\" is unclear. Is this referring to CLIP patch features? This needs clarification.\n 2. Line 253: Why are classification probabilities referred to as \"cross-modal information\"? Is it simply because they use features from two modalities? What specific information do they contain?\n 3. In Equation (7), the resulting shape is $\\mathbb{R}^{W H \\times d}$. How are $M$ augmented features derived?\n 4. In Equation (7), there are 5 left brackets and 3 right brackets, making the expression difficult to understand.\n 5. In Table 1, how is the inference time calculated? Are the times in seconds? Different datasets with varying classes should have different inference speeds. The table should be self-contained.\n 6. Multiple definitions of $K$: In Line 161, $K$ is defined as the number of classes, while in Line 244 and Equation (6), $K$ is the number of selected regions.\n 7. Undefined terms: $\\boldsymbol{f}^t$ in Equation (7) is not defined. Is it a set or a concatenation of $\\boldsymbol{f}^t_i$?\n 8. The definition of a set in Equation (8) is incorrect. The part “$p\\left(y_k \\mid \\tilde{\\boldsymbol{f}}_i^t\\right)$” after the colon should be removed.\n\n3. **Experimental issues**: \n\n 1. The claims in Line 28 are misleading. MP-TPT did not achieve a 1% improvement over TPT and a 4.5-times speedup simultaneously. 
These are achieved by different methods, MP-TPT-L and MP-TPT-S.\n\n 2. Some highly relevant works, such as [2] and [3], are missing from Tables 1 and 2. The performance of MP-TPT is significantly lower compared to these methods. More discussion is needed.\n\n | Methods | Cross-dataset | Domain Generalization |\n | --------------- | ------------- | --------------------- |\n | PromptAlign [2] | 66.92 | 63.55 |\n | TDA [3] | 67.53 | 63.89 |\n | MP-TPT-L | 65.66 | 62.35 |\n\n 3. The ablation study is unconvincing. Why are results provided only on 5 datasets? The proposed methods can lead to performance degradation in many cases, such as in the Flowers102 and Caltech101 datasets. The average performance gain seems to stem from the EuroSAT dataset, which only contains 10 classes and is sensitive.\n\n4. **Effectiveness of design**: The use of random masks on local features as a proxy for random cropping is questionable. I explored this idea in test-time prompt tuning tasks a year ago and found it ineffective, raising concerns about its effectiveness in MP-TPT.\n\n5. **Lack of error bar analysis**: The paper does not include an error bar analysis, which is an important aspect of experimental evaluation.\n\n[1] Task-Oriented Multi-Modal Mutual Learning for Vision-Language Models. ICCV 2023.\n\n[2] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization. NeurIPS 2023.\n\n[3] Efficient Test-Time Adaptation of Vision-Language Models. CVPR 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In L107, how can the method enhance inference efficiency when it requires multi-perspective views, which will obviously increase computational and storage costs? Additionally, Table 1 shows that MP-TPT-S has a lower inference time than TPT. What are the different experimental settings between these two methods, and is the comparison fair? Could the authors provide a more detailed analysis of computational complexity and memory usage?\n\n2. The description in Section 3.2.3 is difficult to understand. What is the difference between test time tuning and test time inference? How to generate $\\boldsymbol{f}^{t *}$ and $\\hat{\\boldsymbol{f}}^{t *}$? Additionally, Figure 2c is confusing; how is Eq. 12 applied in Figure 2c, e.g, where is the $\\lambda$ in Figure 2c?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The motivation is clear. Existing test prompt tuning methods focus only on global visual feature augmentation, neglecting the importance of local context in images. By introducing fine-grained local visual features and their corresponding text prompt descriptions, the proposed method should contribute to improved test-time prompt tuning results. The paper is easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes utilizing local visual context and class-specific text description augmentation to improve the classification accuracy of the test-time prompt tuning of CLIP model. 
The local visual representation is obtained by projecting the entire visual feature to the region level and calculating the similarity with text features. The top-K high-similarity region features are selected to produce the class-specific descriptions. The prompts and the global-local visual features are further aligned through a dual interaction during the tuning phase. Experiments show some improvement." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main weakness of this paper is that the experimental results are marginal. From Table 1, we can see that the best result of the proposed MP-TPT (65.66) is only 0.2% better than the baseline DiffTPT (65.47). Similarly, in Table 2, the MP-TPT method also shows a marginal improvement (less than 0.5%). Did the authors conduct statistical significance tests to verify the effectiveness of the proposed method? These minor differences may also stem from the randomness of the training process. Providing error bars or standard deviations would make the results more convincing. Furthermore, does the method work beyond the CoOp framework, such as on Maple[1] and PromptSRC[2]?\n\n[1] MaPLe: Multi-modal Prompt Learning\n[2] Self-regulating Prompts: Foundational Model Adaptation without Forgetting" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
It would strengthen your claims to include more comprehensive comparisons with a broader range of state-of-the-art methods in your experiments. Highlighting specific scenarios where MP-TPT excels or falls short could provide valuable insights.\n2. Can you clarify the specific roles that global, local, and language perspectives play in test-time prompt tuning? In particular, how do local and language perspectives interact, considering their apparent strong coupling?\n3. Could you provide more experiments on MP-TPT+CoOp/MaPLe or other prompt tuning methods in Base-to-Novel Generalization? It would help to prove MP-TPT’s effectiveness as a plug-and-play prompt learning method. \n4. Providing detailed ablation studies that analyze the trade-off between speed, accuracy, and the number of parameters would enhance the understanding of the practical implications of MP-TPT." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. MP-TPT addresses a critical limitation in existing methods that rely solely on global visual features. By incorporating class-specific, region-based prompts, the paper proposes an innovative way to adapt VLMs to unseen data without retraining, which is both effective and practical. \n 2. The methodology is rigorous, with extensive experiments on 15 benchmark datasets that demonstrate the model's adaptability and efficiency, especially in zero-shot and cross-dataset settings. Ablation studies add further credibility by detailing each component's contribution." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel method called Multi-Perspective Test-Time Prompt Tuning (MP-TPT) designed to enhance vision-language models (VLMs) during test time. 
Unlike prior approaches that focus solely on global visual features, MP-TPT combines global and local visual information with language prompts, offering a comprehensive view during test-time adaptation. The method enhances textual prompts with class-specific descriptions by using local visual information, which allows the model to capture diverse contextual variations. Extensive experiments across multiple benchmarks demonstrate that MP-TPT achieves notable improvements in accuracy and inference speed compared to state-of-the-art methods, particularly in zero-shot and cross-dataset generalization scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited improvement over global feature methods: results indicate that the performance gains of MP-TPT over other methods focusing on global visual features, such as DiffTPT, are not substantial, which raises questions about the effectiveness of incorporating local visuals. \n2. The paper does not sufficiently clarify the interaction between local visual features and text descriptions. A more detailed explanation of how these components integrate during optimization and inference would enhance understanding.\n3. While MP-TPT introduces local visual information to improve class-specific descriptions, the paper could benefit from a deeper analysis of how these local augmentations influence specific categories, particularly when handling complex classes." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multiperspective,\ntitle={Multi-Perspective Test-Time Prompt Tuning for Global, Local Visuals, and Language},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0Xc6o1HKXD},\nnote={under review}\n}" }, "abstract": { "value": "Recent advances in vision-language models (VLMs) have demonstrated significant generalization across a broad range of tasks through prompt learning. However, bridging the distribution shift between training and test data remains a significant challenge. Existing researches utilize multiple augmented views of test samples for zero-shot adaptation. While effective, these approaches focus solely on global visual information, neglecting the local contextual details of test images. Moreover, simplistic, single-form textual descriptions limit the understanding of visual concepts, hindering the transfer performance of classes with similar or complex visual features. In this paper, we propose a Multi-Perspective Test-Time Prompt Tuning method, MP-TPT, building on two key insights: local visual perception and class-specific description augmentation. Specifically, we introduce local visual representations from VLMs during the optimization process to enhance the prompts' ability to perceive local context. On the other hand, we design a data augmentation method at the text feature level that imparts regional visual priors to specific class texts, thereby enriching the class-specific descriptions. Furthermore, we synchronize the multi-view concept during the inference, integrating both local and global visual representations with text features for a deeper understanding of visual concepts. 
Through extensive experiments across 15 benchmark datasets, we demonstrate the advantages of MP-TPT, particularly achieving a 1% improvement in state-of-the-art TPT accuracy in cross-dataset settings, along with 4.5 times acceleration in inference speed." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Prompt Learning", "Test Time Adaption", "Vision-Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/124bc5767b1aa0b2605b04cdfec0732301de225e.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b852656a84e8c244354cdcba614ee69c64efbcfa.zip" }, "title": { "value": "Multi-Perspective Test-Time Prompt Tuning for Global, Local Visuals, and Language" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0Xt7uT04cQ
Uni-Sign: Toward Unified Sign Language Understanding at Scale
main
Active
Sign language understanding;Pre-training;Large-scale sign language dataset
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;5;6;6;8
2;4;4;5;3
2;2;3;2;3
3;2;3;4;3
3;3;2;4;3
6
3.6
2.4
3
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Although the proposed approach differs from traditional sign language recognition methods that employ means such as MLPs and CTC loss, it still uses different supervision for different tasks, for example, words, glosses, and sentences; why is it still referred to as a unified paradigm?\n\n2. In Fig. 5, the paper does not explain why the facial feature information has to be forwarded to the left Pose Encoder after it has been encoded by the Pose Encoder.\n\n3. In line 479 of the paper, the authors show a boost of 1.36 on BLEU-4, but the corresponding value is not found in Table 9." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The Uni-Sign framework proposed by the authors utilizes a large-scale generative pre-training strategy and a novel fine-tuning paradigm to bridge the gap between pre-training and downstream sign language understanding tasks in traditional approaches.\n\n2. The Uni-Sign framework achieves significant performance gains on both sign language recognition and translation tasks, and experiments are conducted on multiple datasets.\n\n3. 
The related work section of the paper is adequate, investigating research on sign language tasks, including pre-training strategies, dataset development, and so on, from a variety of perspectives." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new pre-training framework that bridges the gap between pre-training and downstream sign language understanding tasks through a large-scale generative pre-training strategy and a novel fine-tuning paradigm that achieves impressive performance in multiple benchmark tests." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is not clear and detailed enough in explaining the score-aware sampling strategy, and does not give a detailed analysis of the process or a corresponding explanation in Figure 5, which could lead to potential misunderstandings or errors.\n\n2. The authors omitted experimental results on several widely used datasets, such as Phoenix14, Phoenix14T, USTC-SLR 500, USTC-CSL100, etc.\n\n3. As shown in Tables 4 and 6, the proposed Uni-Sign method does not achieve the best performance on multiple datasets of continuous sign language recognition and sign language translation. It even performs worse when more modalities are introduced, which makes me worried about the performance of this work.\n\n4. The number of parameters of the model is not mentioned in the paper. Including this key metric is important, as it is critical for evaluating the practicality of the model.\n\n5. It is recommended that the authors make font color changes for the tables throughout the article, due to the large amount of experimental data, as bolding may mislead the reader, especially for Tables 3 through 6." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the Weakness section above. If the authors can address these concerns, I would consider raising the rating." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Uni-Sign effectively unifies multiple SLU tasks, such as isolated sign language recognition (ISLR), continuous sign language recognition (CSLR), and sign language translation (SLT), under a single framework. \n2. The introduction of CSL-News, a substantial CSL dataset, provides a significant resource for the SLU field and addresses the limitations of prior smaller datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Uni-Sign, a unified pre-training framework for sign language understanding (SLU) tasks, addressing the challenges in existing methods that struggle with transferring knowledge across different tasks. The framework uses a new large-scale dataset, CSL-News, which contains 1,985 hours of Chinese Sign Language (CSL) videos paired with textual annotations. Extensive experiments demonstrate that Uni-Sign achieves state-of-the-art performance across multiple SLU benchmarks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Compared to other datasets, what unique advantages or characteristics does the proposed CSL-News dataset offer besides its longer duration? Additionally, why is the vocabulary size relatively limited, and could the restricted language variety impact pre-training effectiveness?\n2. In the comparisons of downstream tasks in Section 4.3, did other methods also use the CSL-News dataset for pre-training? If not, does this raise any concerns about fairness in the comparisons?\n3. In the comparative experiments, while high-performing results are analyzed, the reasons behind lower performance should also be provided, such as in Tables 4 and 6.\n4. In Tables 3 to 6, what would the results of Uni-Sign be if it used only RGB video?\n5. How do the computational costs, inference time, and memory usage of the proposed model compare to other methods? Does Uni-Sign maintain a competitive advantage in these aspects?\n6. The manuscript includes numerous comparative results, but it lacks visualizations to intuitively demonstrate the model’s effectiveness. More visual presentations for each downstream task are recommended." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The paper in general addressed the ideas and motivations it introduces. 
The following questions will help add a more comprehensive understanding. \nGeneralization and Applicability\n1. Multilingual Evaluation: The sources primarily focus on CSL and ASL. Could the authors comment on the applicability of Uni-Sign to other sign languages? How might the model's architecture and pre-training strategies need to be adapted for multilingual SLU? This is important to assess the generalizability of Uni-Sign and its potential impact on a broader range of sign language communities\n2. Multi-signer Scenarios: How well does Uni-Sign perform in situations involving multiple signers? What challenges might arise in such scenarios, and how could the model be modified to handle them effectively? Addressing this question would provide a more realistic assessment of Uni-Sign's capabilities in real-world applications where multiple signers may be present\n\nComparison and Analysis\n1. Comparison with LLM-based SLT Methods: Recent studies like Sign2GPT and Sign-LLM have explored the use of LLMs for gloss-free SLT. Could the authors provide a comparative analysis of Uni-Sign against these LLM-based approaches? This would help clarify Uni-Sign's contributions and position it within the broader landscape of SLT research\n2. In-depth Analysis of the Unified Fine-tuning Paradigm: How does the shared objective function influence the performance of individual tasks like ISLR and CSLR? Are there any potential task-specific adaptations that could be incorporated within the unified framework to further optimize performance? This analysis would provide a more nuanced understanding of the paradigm's strengths and weaknesses" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: \n1. 
The paper presents Uni-Sign, a novel unified pre-training framework for Sign Language Understanding (SLU) that bridges the gap between pre-training and downstream tasks by treating them as a single Sign Language Translation (SLT) task during fine-tuning. This approach deviates from previous methods that relied on indirect pretext tasks or were limited by data scale and transfer capability\n2. The authors introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of video with text annotations, considerably larger than existing CSL datasets. This dataset enables effective large-scale pre-training, addressing a gap in CSL resources compared to American Sign Language (ASL) and British Sign Language (BSL)\n3. The paper proposes a Prior-Guided Fusion (PGF) module that utilizes keypoint coordinates as priors to model fine-grained spatial consistency between pose and RGB modalities, going beyond simple spatial-temporal fusion techniques. This approach addresses the representational gap between modalities and leverages keypoints to enhance accuracy. \n4. A score-aware sampling strategy is introduced to address the computational challenges of RGB-pose fusion by selectively choosing RGB frames corresponding to low-confidence keypoints, balancing performance with speed\n\nQuality:\n1. The paper is well-written and presents a clear and comprehensive methodology. The authors provide detailed descriptions of their approach, including data curation, pre-training and fine-tuning strategies, and multi-modal fusion techniques\n2. The ablation studies thoroughly investigate the contribution of each key component, offering insights into the model's performance and the impact of design choices\n3. Quantitative results show that Uni-Sign surpasses previous state-of-the-art methods on multiple benchmarks, including significant improvements in BLEU4 scores for SLT tasks\n\nClarity:\n1. The paper is well-organized and easy to follow.\n2. 
Figures and tables effectively illustrate the framework, data distribution, and experimental results\n3. Mathematical notations and equations are clearly defined and explained\n4. Qualitative translation examples provide further insights into the model's capabilities\n\nSignificance:\n1. The introduction of the CSL-News dataset addresses a significant need for large-scale CSL resources, potentially fostering advancements in CSL research\n2. The unified pre-training and fine-tuning framework with a generative approach demonstrates a promising direction for improving SLU performance, particularly for SLT tasks\n3. The proposed PGF module and score-aware sampling strategy offer effective solutions for multi-modal fusion and computational efficiency, potentially benefiting future SLU research\n4. The paper's findings have implications for advancing sign language technologies, promoting accessibility and communication for the Deaf/Hard of Hearing community\n5. The authors' commitment to open-sourcing the code and dataset further contributes to the significance of the work, facilitating reproducibility and future research in SLU" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Uni-Sign, a novel framework for Sign Language Understanding (SLU) that leverages large-scale generative pre-training and a unified fine-tuning paradigm. The paper presents a well-motivated and well-executed approach to SLU. The introduction of the CSL-News dataset and the innovative Uni-Sign framework are significant contributions to the field, demonstrating state-of-the-art performance across various SLU tasks. The paper is well-written and clearly explains the proposed methodology and experimental results. The authors make several notable contributions:\n•Introduction of CSL-News: The authors introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset comprising 1,985 hours of video-text pairs. 
This dataset significantly surpasses existing CSL datasets in size and diversity\n•Unified Pre-Training and Fine-Tuning: During fine-tuning, it treats downstream SLU tasks, such as isolated sign language recognition (ISLR), continuous sign language recognition (CSLR), and sign language translation (SLT), as a single SLT task. This unified approach facilitates seamless knowledge transfer and eliminates the need for task-specific fine-tuning methods.\n•Prior-Guided Fusion (PGF) Module: To address the limitations of inaccurate keypoints, the authors propose a PGF module that fuses pose and RGB information using keypoint coordinates as priors. \n•Score-Aware Sampling Strategy: The authors introduce a score-aware sampling strategy to improve computational efficiency. \n•Comprehensive Evaluation: The paper includes a comprehensive evaluation of Uni-Sign across various SLU benchmarks, demonstrating its superior performance in ISLR, CSLR, and SLT tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Discussion on Computational Complexity: While the authors introduce a score-aware sampling strategy to improve efficiency, a more in-depth discussion on the computational complexity of Uni-Sign would be beneficial. This could include analyzing the trade-offs between accuracy and computational cost for different sampling probabilities and exploring potential optimizations.\n2. Further Analysis of CSL-News: While the paper describes the creation of CSL-News, further analysis of the dataset's characteristics, such as vocabulary distribution and linguistic complexity, would be valuable. This would provide a more comprehensive understanding of the dataset's potential and limitations.\n3. Cross-Dataset Generalization: Evaluating Uni-Sign's performance on unseen sign language datasets would demonstrate its generalization capabilities. 
This could involve fine-tuning the pre-trained model on a different CSL dataset or even a dataset from another sign language, like American Sign Language (ASL). Successful cross-dataset generalization would highlight the robustness of the learned representations and the effectiveness of the unified approach.\n4. Analysis of Error Patterns: A qualitative analysis of the translation errors made by Uni-Sign would provide valuable insights into its limitations and potential areas for improvement. This could involve categorizing errors based on linguistic features, such as sentence complexity, sign ambiguity, or finger-spelling. Identifying common error patterns could guide future research directions.\n5. Exploration of Multi-Signer Scenarios: The authors mention their interest in exploring SLU tasks in complex scenarios, such as multi-signer situations. Including preliminary experiments or discussions on adapting Uni-Sign to handle such scenarios would further enhance the paper's impact and contribution to the field." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See above" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Pros:\n1. 
This work proposes a unified framework to conduct pre-training and fine-tuning, which demonstrates novelty.\n2. This work shows promising performance across a wide range of benchmarks.\n3. The paper is easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Uni-Sign, a unified pre-training framework that eliminates the gap between pre-training and downstream SLU tasks through a large-scale generative pre-training strategy and a novel fine-tuning paradigm. It also introduces CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of videos paired with textual annotations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Questions and cons:\n1. During the data curation process, the authors use an ASR toolkit (FunASR) to convert the speech into texts as labels. There are some problems. First, since the speech signal is delayed relative to the sign language expressed by the signer, how can it be assured that the temporally cropped clips are exactly aligned with the transcribed texts? Second, the authors state that the average text length is 40 words and the average clip length is 9.5s. It is very hard for a signer to express 40 words within 9.5s. Thus, it is most likely that the signer has neglected some meanings in the sentence and only expressed part of the meaning in the signs. In this case, the signs are probably not aligned with the transcribed texts. Third, I observed that the authors do not organize a double-check process for the clips cropped from the TV shows to check the alignment between texts and clips, the correctness of the transcribed texts, the correctness of the transcribed signs, and other aspects. Thus, how are the completeness and correctness of the curated dataset assured?\n2. During the experiments for CSLR, PHOENIX14 and PHOENIX14-T are also broadly used datasets. 
Why not report the results on these datasets? Is it due to the language gap between the pre-training data and the downstream data? How about the performance on these two datasets?\n3. In Tables 3 and 5, some numbers other than the results reported by the proposed method are bolded. The authors may clarify this or use another way to emphasize the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In Table 6, the performance of the proposed method is somewhat lower than the existing baseline SSVP-SLT. Although it is not a very big issue to me, I would like to know more about it. Why is this the only (rather large) SLT dataset where the proposed method achieves sub-optimal results?\n\nThe ablation results shown in Table 7 are rather strange compared to Tables 8-10, because the settings are different. Table 7 runs experiments on ISLR and CSLR, while Tables 8-10 run experiments on CSL-Daily for CSLR and SLT. Why are these different? Moreover, Tables 7 and 8 are run in the pose-only setting while Tables 9 and 10 are in the RGB-Pose setting; why should this be the case?\n Furthermore, some of the more important experiments (Tables 7 and 8 in my opinion) should be evaluated on all three different sign language understanding tasks.\n\n\nWhat is the impact of the pre-training? This crucial aspect has not been evaluated properly. 
For instance, what if the model is trained only using the fine-tuning stage (Stage 3), but for a longer time (i.e., matching the overall training time of the pre-train then fine-tune approach)? How does this affect the performance? This is important as it shows us the benefits of pre-training. Although some results have been provided in table 7, the results and implications are not clear to me. Furthermore, the task-specific training settings and details have not been mentioned." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Developing a unified approach to handle the various sign language understanding tasks is meaningful. In some sense, the work extends some recent LLM-based sign language understanding works by including the aspect of unifying across the sign language understanding tasks. \n\nThe authors introduce a new large-scale sign language dataset for Chinese Sign Language. This dataset could be quite useful for further progress in the field.\n\n\nThe experiment results are quite impressive, especially on the gloss-free SLT task. In my opinion, gloss-free SLT is the setting that is the closest to real applications, so this is quite good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper has two main contributions. 1) A Uni-Sign method for tackling the three sign language understanding tasks in a unified manner. The model first pre-trains on a large sign language dataset via language modeling, then is fine-tuned on each of the individual tasks separately. 2) A CSL-News dataset, which is a large-scale Chinese Sign Language dataset. Some other minor architectural designs are also proposed. 
Overall, the proposed method performs quite well across the three sign language understanding tasks, and particularly performs well in Sign Language Translation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed method is not very novel. The proposed pre-training approach is to train the model in a language modelling manner, while also using visual features from the sign videos. Then, for the fine-tuning, the language modelling loss is again used for the various tasks. There are some minor contributions, such as a prior-guided fusion module and a score-aware sampling strategy, but these do not seem quite so substantial.\n\nI think that in the related works discussion, there should be a part discussing some other works in other fields employing language modelling (or sequence modeling) for tackling various tasks in a unified manner. For instance, this has been done for image-based tasks, and may have also been done for pose-based tasks. This will give the reader a better understanding of the developments of the “unifying via language modeling” paradigm.\n\n\nMore specific concerns and questions are in the “Questions” section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024unisign,\ntitle={Uni-Sign: Toward Unified Sign Language Understanding at Scale},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0Xt7uT04cQ},\nnote={under review}\n}" }, "abstract": { "value": "Sign language pre-training has gained increasing attention for its ability to enhance performance across various sign language understanding (SLU) tasks. However, existing methods often suffer from a gap between pre-training and fine-tuning, leading to suboptimal results. 
To address this, we propose Uni-Sign, a unified pre-training framework that eliminates the gap between pre-training and downstream SLU tasks through a large-scale generative pre-training strategy and a novel fine-tuning paradigm. First, we introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of video paired with textual annotations, which enables effective large-scale pre-training. Second, Uni-Sign unifies SLU tasks by treating downstream tasks as a single sign language translation (SLT) task during fine-tuning, ensuring seamless knowledge transfer between pre-training and fine-tuning. Furthermore, we incorporate a prior-guided fusion (PGF) module and a score-aware sampling strategy to efficiently fuse pose and RGB information, addressing keypoint inaccuracies and improving computational efficiency. Extensive experiments across multiple SLU benchmarks demonstrate that Uni-Sign achieves state-of-the-art performance across multiple downstream SLU tasks. We will release the source code and the dataset to the public." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Sign language understanding", "Pre-training", "Large-scale sign language dataset" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c74b74b55de2008288081544ead353474a559f53.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Uni-Sign: Toward Unified Sign Language Understanding at Scale" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0YXckVo7Kw
MMCOMPOSITION: Revisiting the Compositionality of Pre-trained Vision-Language Models
main
Active
Vision-Language Models;Compositionality;Benchmark
datasets and benchmarks
5;5;5;6
3;4;4;4
3;2;2;3
3;2;2;3
3;2;2;3
5.25
3.75
2.5
2.5
2.5
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The reasoning behind the name 'MMComposition' is unclear." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper presents a comprehensive benchmark focused on compositionality, encompassing a wide range of skills from perception and reasoning to probing.\n\n- This paper provides an extensive evaluation of recent models, including both open-source and API-based models, highlighting areas where they continue to fall short of human capabilities.\n\n- The paper is well-written with clearly organized sections." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MMComposition, a QA benchmark that evaluates the compositional capabilities of modern vision-language models. MMComposition encompasses a range of tasks, including perception, reasoning, and probing, with multiple subtasks presented in various QA formats: yes/no, multiple-choice, and indefinite-choice. The dataset is curated from numerous existing sources, with QA pairs annotated by humans. Covering 13 distinct vision-language compositionality tasks, this benchmark offers a comprehensive evaluation of both proprietary and open-source vision-language models. 
The paper also analyzes factors that may influence the compositional abilities of VLMs, such as the resolution of visual encoders, the scale of language decoders, and the volume of training data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Although the benchmark includes diverse skill sets and QA formats, the specific aspects that pose challenges are not clearly defined. It is also unclear what distinguishes this benchmark from other general QA datasets designed to test modern VLMs for AGI, such as MMMU, MMStar, and tons of similar benchmarks. The paper does not provide comparisons in terms of general capabilities across QA datasets; instead, it focuses on embedding-based benchmarks for comparison, as shown in Table 1. Comparing the scale of evaluation samples, such as the number of images or questions across different benchmarks, would also be valuable.\n\n\n- Related to the first weakness, one might question whether this benchmark is truly challenging. Some compositionality benchmarks or visual QA tasks could potentially be solved using only language models in an image-blind setting, due to language priors, such as coherence, grammar, and clues embedded across answer choices. As specific example, in second example in figure 3, can is often made of metal, such knowledge aids in answering correctly without relying on visual cues. It would be beneficial to examine the proportion of questions that can be solved solely using large language models. \n\n\n- Several essential details are missing regarding the benchmark construction. In the human annotation process, additional information is needed: Who annotated the dataset? How was confidence measured, and how were errors handled in finalizing the annotations? Additionally, it’s unclear how misaligned captions were manually added in the probing task (line 255). Furthermore, for reporting human performance, what was the process? 
It would be important to present individual human performance scores for each skill, rather than a single overall score.\n\n\n- The empirical trends concerning the scale of the visual encoder, language decoder, and training data are perhaps not surprising. The paper does not analyze whether these trends are specific to the proposed benchmark or if they also appear in other general visual QA benchmarks. Meanwhile, an additional suggested analysis could explore how the design of the visual connector (e.g., fully connected layer or Q-Former style) and the method of visual token insertion (e.g., tokens input directly into the language model or through cross-attention connections) impact performance of the proposed benchmark. \n\n\n- There are some notable clarity issues, including typographical errors such as 'MuriBench' in line 237 and 'ARC' in line 241. Additionally, there are inconsistencies in publication years for certain cited papers, particularly recent NeurIPS papers like SugarCrepe, which collectively raise concerns about professionalism. \n\n\n- Could fine-tuning VLMs on specific datasets improve performance on MMComposition?\n\n---\n\nAssessment: While the extensive evaluations across VLMs are commendable, the benchmark falls short of expected standards in terms of detailed documentation, verification, and comparisons with other QA benchmarks. Additionally, analyses of the proposed benchmark could be enhanced by comparing observed trends with those from other benchmarks." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "humans involved in data curation, without reporting details on that, however I am not sure if this is a real ethical concern here" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "please see weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* new benchmarks are always good, human curation is appreciated\n* large number of models evaluated" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new compositional reasoning benchmark that is constructed from existing benchmarks (data collection, lines 211-238) augmented with negative options retrieval by similarity, consensus filtering by several recent LMMs and further human filtering of the resulting visual QA. An extensive evaluation of recent models on the proposed benchmark is performed. Some additional ablations are attempted by grouping models trained on more data, larger LLM decoders, vis. encoder combinations etc. However, those only confirm known facts: larger data, larger decoders, or more encoders are beneficial. Some analysis of failures is provided, albeit only qualitative. Main interesting aspect seems to be a large gap reported between human performance and the models. 
However, no statistics of the human subjects are provided (eg how many humans were employed, how they were motivated, what was the disagreement between humans, age groups, etc.)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* no new data is introduced, built from existing benchmarks\n* no surprising conclusions\n* no statistics for the human subjects\n* error analysis is only qualitative\n* dataset construction methodology involving humans could be more interesting - eg. humans could generate questions, red-team models to generate hard negatives etc.\n* I disagree with Table 1, many benchmarks, including those listed have fine-grained questions, there are benchmarks (eg NVLRv2) involving multiple images, other benchmarks have human filtering, at least a partial subset, the only thing I indeed did not encounter before is \"multiple right answers\" (indefinite choice) - which could indeed be a contribution of the paper\n* while benchmark contributions are appreciated, it seems this paper is somewhat below what I would expect from the level of contribution of an ICLR paper" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Strengths of MMCOMPOSITION:\n\n1. Targeted Evaluation of Compositionality for VLMs: MMCOMPOSITION provides a focused benchmark to assess compositional reasoning in Vision-Language Models, an area where existing models often fall short. By going beyond basic attribute recognition, MMCOMPOSITION evaluates tasks like multi-image reasoning, object interactions, and counting, all of which are crucial for real-world, nuanced understanding.\n\n2. Improvement upon Existing Compositional Datasets: This benchmark builds on and enhances data from existing compositional datasets, such as ARO, to create a more diverse and challenging evaluation framework. By curating tasks that move beyond traditional benchmarks, MMCOMPOSITION offers a comprehensive dataset for testing complex visual-language interactions.\n\n3. In-Depth Model Comparison and Component Analysis: MMCOMPOSITION evaluates over 50 VLMs across different architectural components, allowing a detailed comparison. This thorough assessment reveals how factors like encoder resolution, decoder size, and training data diversity impact compositional reasoning. It offers practical insights that can guide future improvements in model design." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper \"MMCOMPOSITION: Revisiting the Compositionality of Pre-Trained Vision-Language Models\" presents MMCOMPOSITION, a new benchmark focused on testing VLMs' ability to handle complex compositional tasks like object interactions, counting, and scene reasoning. With 4,342 annotated questions across 13 categories, the benchmark highlights a clear performance gap between models and humans (67.95% vs. 90.31% accuracy). Results suggest that improving high-resolution encoders, scaling language decoders, and expanding training data are key to better compositional reasoning in VLMs. MMCOMPOSITION offers a practical tool for refining future VLMs to better understand complex compositions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "typos:\ntable 4 - Relolution \n\n1. In-context multimodal compositionality: Adding tests for in-context multimodal compositionality could strengthen the benchmark, as this capability is crucial for real-world applications. Evaluating models' ability to maintain compositional understanding across multi-modal inputs, rather than isolated tasks, could enhance the dataset's relevance.\n2. Multi-hop compositional problems: The paper would benefit from including multi-hop reasoning tasks, where models must integrate multiple compositional steps to arrive at an answer. This kind of problem is essential for advanced compositionality and would make the benchmark more challenging and comprehensive.\n3. Questionable novelty: The novelty of the paper could be improved if it incorporated points 1 and 2. Adding in-context multimodal compositionality and multi-hop compositional problems would make MMCOMPOSITION a more distinctive and valuable benchmark." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) In line 254, “we select several captions from the dense captions in Visual Genome as the correct options and write the misaligned captions manually for the image”\nWhat are the criteria for writing the misaligned captions? In terms of which characteristics do the misaligned captions differ from the original captions?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1) The dataset is human-annotated and covers a wide range of tasks in terms of compositional understanding\n2) The paper evaluates 54 representative large VLMs including open-source and proprietary ones. The benchmark is challenging and demonstrates large performance gap between human and VLMs. \n3) The analysis on model component provides valuable insight on model design." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes MMComposition - a human-annotated benchmark dataset for evaluation of the compositionality of large Vision-language models. \nThe benchmark contains 4.3K questions in three main dimensions: perception, reasoning and probing which are divided into 13 categories. 
There are both questions that contain a single image and multiple images. Most questions have a single correct answer. There are 459 questions with indefinite-choice.\nThe benchmark demonstrates human performance (90.31%) and state-of-the-art VLMs (best performance of 67.95% among 54 evaluated VLMs). \nThere is also analysis of the impact of VLM architecture factors on the benchmark performance, e.g. visual encoder design, language decoder size, training data volume." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The paper categorizes the questions into 4 difficulty levels based on the performance of 6 open-source models. In Figure 8, it shows that 62.09% of questions in the category “superhard” lead to the average performance on all VLMs below the average level. It would be interesting to analyze what characteristics lead to the different difficulty levels of these questions? This can shed light on how to design difficult questions for the competent VLMs. \n2) In the evaluation benchmark, for questions that contain multiple images, the images are concatenated into a big collage and fed into the model. Some of the VLMs have multiple-image samples in the training data and can perform VQA with multiple input images. Does it impede the performance of these models to feed the collage into them?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mmcomposition,\ntitle={{MMCOMPOSITION}: Revisiting the Compositionality of Pre-trained Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0YXckVo7Kw},\nnote={under review}\n}" }, "abstract": { "value": "The advent of large Vision-Language Models (VLMs) has significantly advanced multimodal understanding, enabling more sophisticated and accurate integration of visual and textual information across various tasks, including image and video captioning, visual question answering, and cross-modal retrieval. Despite VLMs' superior capabilities, researchers lack a comprehensive understanding of their compositionality -- the ability to understand and produce novel combinations of known visual and textual components. Prior benchmarks provide only a relatively rough compositionality evaluation from the perspectives of objects, relations, and attributes while neglecting deeper reasoning about object interactions, counting, and complex compositions. However, compositionality is a critical ability that facilitates coherent reasoning and understanding across modalities for VLMs. To address this limitation, we propose MMCOMPOSITION, a novel human-annotated benchmark for comprehensively and accurately evaluating VLMs' compositionality. Our proposed benchmark serves as a complement to these earlier works. With MMCOMPOSITION, we can quantify and explore the compositionality of the mainstream VLMs. Surprisingly, we find GPT-4o's compositionality inferior to the best open-source model, and we analyze the underlying reasons. Our experimental analysis reveals the limitations of VLMs in fine-grained compositional perception and reasoning, and points to areas for improvement in VLM design and training." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Vision-Language Models", "Compositionality", "Benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1c1bc8887e836fe08a57ac7d15ee1efa86b90656.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "MMCOMPOSITION: Revisiting the Compositionality of Pre-trained Vision-Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0Yfjerm9Zp
Enhancing LLM Faithfulness in Rationale Generation via Dual-Reward Probabilistic Inference
main
Active
interpretability;faithfulness;Large language model;constrained generation
interpretability and explainable AI
1;3;3;5
3;3;4;3
2;2;1;2
2;1;1;2
1;1;2;2
3
3.25
1.75
1.5
1.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. what is \"t \\wedge T\" ?\n 2. sec 3.3, what is P(s_t) a posterior over?\n 3. In what sense is \\pi_t a \"potential\" function?\n 4. I cannot make any sense of eq 2. Is w \\in V the same as w_t? why cant you simply remove the indicator function and write it as \\sum_w \\in C ? why is the indicator function in the deminator as well? is the intent to have a logit distribution that only puts mass on the tokens in C?\n 5. are the rollouts done on the backbone model or the expert model? have we considered /measured the inference time cost? this is an important consideration in a paper about mtcs type methods.\n\n6. Does q_\\phi simply reward completions of the output that have tokens in C?\n\n7. intro: \"in contrast, an expert model.....\" : this is an interesting claim (does seem plausible). is there a citation for evidence?\n\n8. line 140: tend to generate similar token....\": what does this mean?\n\n9. i am not up to date on the faithfulness literature, but the kind of interventations that the paper describe as standard ways of evaluation i.e. word inclusion and perturbation just seem to be likely to be noise-prone, leading to unreliable evals?\n\n10. GenExpert =? lookahead?\n\n11. comment: the discussion between 329-342 helped understanding a bit and should be earlier in the paper.\n\n12. sec 5.2.2: the NLI example is a bad one i think. 
Submergible only means it is something that can be submerged. which doesnt automatically mean it is submerged." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. MTCS type inference is a hot topic right now, and it is indeed an important frontier for LLMs to improve on.\n2. At a surface level, experimental results seem to show large gains.\n3. There" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an approach to do faithful rationale generation in LLMs. It uses a steering-based approach to make the outputs more faithful to the reasoning of the llm in classification. The idea is to weight token logits using 2 kinds of reward models: A \"local\" one that tries to match tokens to those suggested by a domain-specific expert model and a \"lookahead\" one that does an MTCS type search and re-weights logits based on rewards from unrolled sequences. \n\nExperiments are performed on a couple of QA type datasets, demonstrating that each method makes improvements in classification accuracy and faithfulness of rationales. Some qualitative analyses are also presented." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Section 3 is pretty badly written, it is pretty hard to get the details of the approach. Instead of invoking irrelevant sophisticated-sounding terminology like \"Feynman-Kac\" formulas it would be better to describe the method in more detail. The math especially is confusing, see below. \n\nThe paper seems to show some positive experimental results, but I am concerned about whether we are looking at a meaningful comparison. The proposed methods rely on domain experts. 
Looking at table 8 in the appendix, these are generally models that have been fine-tuned for the task in some way (and not just on the validation sets as the main section claims; some have access to external datasets). So it shouldn't be that surprising that a method that is given access to an expert which has more signal will do better than the backbone pre-trained model. A fair comparison would have to be with an approach that does vanilla fine-tuning of LLama or mixtral model. \n\nIn terms of novelty: The authors have not really cited relevant work in the controlled decoding space:\n\nhttps://arxiv.org/abs/2310.17022\n\nhttps://sea-snell.github.io/ILQL_site/\n\nThese works already do something more sophisticated than just token reweighting by a reward score. So what is the novel contribution here? 2 possibilities:\n1. Focusing on the faithfulness problem.\n2. The \"lookahead\" idea of the reward model. I don't recall having seen this before, but it feels like a simplification of a full-blown MCTS. I would also call this a poor man's version of ILQL.\n\nSo we are just left with #1 then, unless I missed something. And this is something I consider of limited novelty (more like an application for a particular problem, though one with interesting implications from the steering perspective)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Could this method generalize to the setting where the rationale is generated before the answer?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Faithful rationales are important for explainability and model control, which makes this work well-motivated.\n2. The proposed method is training-free (although with reliance on trained expert models), making their method portable.\n3. A comprehensive set of experiments is conducted to showcase the effectiveness of their proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work aims to improve the faithfulness of the LLM-generated rationales for reasoning tasks. They propose an inference-based method where an LLM is guided to generate more faithful rationales by both local and global rewards. Both rewards are provided by additional expert models which are trained on the downstream tasks. Experiments demonstrate the effectiveness of the method in achieving higher accuracy and faithfulness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method requires the model to generate the answer prior to the rationale, which provides no guarantee that the decision is made based on the rationale. The model could still suffer from inherent biases.\n2. The method is limited to reasoning tasks with constrained answer space, limiting its generalization to more open-ended tasks.\n3. The method is poorly introduced. It would be very helpful if the authors could explain what exactly Eq.1-3 are doing in plain words." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. If the expert model is as big as the base model, how can the computational cost be similar to beam search?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The direction this paper explored has been receiving increasing interest recently: improving the quality of LLM answers at inference time without modifying the model weights directly. The proposed method improves the zero-shot accuracy and faithfulness of two strong general instruction-tuned models (Llama-3-8B and Mistral-7b-Instruct-v0.3) on three reasoning tasks. The experiment showing the benefits of going beyond local/token-level rewards and taking into account the global/lookahead reward is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes an inference-time method to improve the performance and faithfulness of general (instruction-tuned) large language models (LLMs). Specifically, the method uses expert models to provide fine-grained and lookahead rewards to search and reweight possible tokens or continuations proposed by the LLM. 
With the help of expert models trained on the target task or domain, the proposed method can improve both the accuracy and faithfulness of the zero-shot answers of two instruction-tuned models on three reasoning tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There needs to be more details explaining the proposed method, the motivation of each part, the equations and variables, the relation to related work, and the implementation details. Specifically:\n - Section 3.3: how does the Feynman-Kac Formulae model inspire the faithfulness-seeking search framework? The connection is not straightforward. The notation of eq 1 is ambiguous. What does posterior P_t(st) mean exactly? How is it used in the proposed method? Also, the equation itself needs more explanations on what it is computing and why in this way.\n - Section 3.4 (Local constraint): line 179 I find it hard to follow the motivation. How \"certain attributes can be implicitly conveyed over longer spans rather than the individual token\" is connected to \"Instead, domain-specific experts tend to demonstrate better accuracy in knowledge-rich tasks.\"? If the domain expert has better accuracy why not just use the expert to predict the scores? Why bother to use them to improve the backbone LLM? In lines 180-181, it says \"we introduce a set of classification label words C from these expert models ...\", how is C constructed? What is the motivation behind token masking?\n - Section 3.4 (Lookahead Reweight): Equation 3 is hard to understand without proper explanations. $m$ and $x_i$ are not explained in the texts. $s_{t+l}=s_{t-1}||w_t$ is more confusing: $s_{t+l}$ has $t+l$ tokens while $s_{t-1}||w_t$ has $t$ tokens. What does equality mean here?\n- Many experimental details are missing, and important experiments are missing.\n - Missing baselines: the performance and faithfulness of the expert models alone.
If the faithfulness or accuracy of the expert models are better than the backbone LLM, why do we even need to use the expert models to improve the backbone LLM?\n - Evaluation details: how is the original model evaluated? If it is a zero-shot evaluation. What is the exact prompt and task format used? How to extract answers from the outputs to calculate the accuracy? The backbone LLMs are state-of-the-art instruction-tuned models. However, the task performance as well as the faithfulness are quite low, so the authors need to provide more details on the evaluation.\n - What is the choice of hyperparameter n (number of rollouts) and how is it chosen?\n- The writing of the paper could be improved for better readability. First, the paper is not properly scoped. For example, in lines 16-18, it says \"... to ensure that LLM-generated rationales are logically coherent and comprehensive.\" However, there is no result discussing the logical coherence or comprehensiveness of answers in the paper. Another example is line 108: it says \"We firstly introduce the faithfulness definition in our context,\", but there is no clear definition in section 3.2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The overall experiments are conducted on LLaMA3. I think more backbone LLMs and other sizes of LLMs are needed to justify the proposed inference paradigm.\n2. 
More experiments on more related datasets are needed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper introduces a novel probabilistic inference method with a dual-reward mechanism, combining local and global reward. This is a very novel solution. \n2. The paper is well-written. I am not an expert in this domain but I can get their core contributions. \n3. The experiment design is clear: they design the ablation study in Section 5.1 to justify the local and global rewards for the final performance. Although I suggest authors could do better by choosing more LLMs in different model sizes to better support their experimental design." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "They tackle the rationale generation tasks in LLMs' reasoning process. Specifically, they propose a probabilistic inference paradigm that provides fine-grained and lookahead rewards to instruct LLMs to generate good rationale. The key problem addressed is that LLMs often produce unfaithful explanations, especially when they fail to incorporate essential contextual information. \n\n+ **Local Reward**: this component ensures coherence with the immediate context, often by using a domain-specific expert model.\n+ **Global reward**: This assesses the plausibility of the current token in relation to desirable future attributes\n\nThe search algorithm, especially for lookahead reweight, seems interesting.\n\nPlease forgive me if I misunderstand something. I spent much time reading the paper but to be honest, I am not an expert in this area. I will be available during the rebuttal period for the authors' response and will read their response. I am also open to other reviewers' opinions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
There are several related works that are missing or less discussed: \n + Evaluating Human Alignment and Model Faithfulness of LLM Rationale\n + On Measuring Faithfulness or Self-consistency of Natural Language Explanations\n2. Figure 2 about the distribution of domain-specific words is unclear to me. \"showing that our method can respond more actively to those domain-specific words\" Why does this part matter to the experimental results?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a probabilistic inference paradigm that provides fine-grained and lookahead rewards to ensure that LLM-generated rationales are accurate and faithful." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing {LLM} Faithfulness in Rationale Generation via Dual-Reward Probabilistic Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0Yfjerm9Zp},\nnote={under review}\n}" }, "abstract": { "value": "As large language models (LLMs) are increasingly applied to complex reasoning tasks, achieving both accurate task performance and faithful explanations becomes crucial. However, LLMs often generate unfaithful explanations, partly because they do not consistently adhere closely to the provided context. Existing approaches to this problem either rely on superficial calibration, such as decomposed Chain-of-Thought prompting, or require costly retraining to improve model faithfulness. In this work, we propose a probabilistic inference paradigm that provides fine-grained and lookahead rewards to ensure that LLM-generated rationales are logically coherent and comprehensive. These rewards are derived from a domain-specific proposal distribution, allowing for optimised sequential Monte Carlo approximations. 
Our evaluations across three different reasoning tasks show that this method, which allows for controllable generation during inference, improves both accuracy and faithfulness of LLMs while keeping computational costs similar to those of existing decoding techniques. This method offers a promising path towards making LLMs more reliable for reasoning tasks without sacrificing performance or efficiency." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "interpretability", "faithfulness", "Large language model", "constrained generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/12e8d9072c1ffa040599a111ca19715109539a78.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Enhancing LLM Faithfulness in Rationale Generation via Dual-Reward Probabilistic Inference" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0YkZe9nwiC
Self-Informed Generative Active Learning
main
Active
Active Learning;Large Language Model;Synthetic Data;Reinforcement Learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;3;3;5
4;4;4;3;4
2;1;2;3;2
2;2;2;2;3
2;2;3;2;3
3.4
3.8
2
2.2
2.4
0.25
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In conclusion, the paper presents an interesting idea, but the experimental section needs significant refinement. Adding more comprehensive experiments and ablation studies would strengthen the conclusions and clarify the potential of this approach." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The approach is promising; using a generator to produce new samples is a valuable innovation for improving active learning systems. This strategy assumes a pre-trained generative model, which is reasonable for text but may not be universal across domains. The selection criterion is sensible, and directly training the generator to maximize it through RL is more robust than simple thresholding." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses active learning by leveraging a generative model to produce unlabeled examples, which are then labeled by an oracle and added to a classification model's training set. Unlike traditional methods that rely on a fixed pool of unlabeled data, this approach actively generates new, potentially more informative examples. 
The model prioritizes examples based on their distance from nearest neighbors and the discrepancy in predictions between the generated sample and its neighbors. To guide the generative model in producing high-quality samples, it is trained via a Reinforcement Learning algorithm (PPO), optimizing it to generate samples that best serve the classification task. The method is tested on text classification problems, showing mixed results compared to current state-of-the-art techniques." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "However, the experimental section lacks detail to fully evaluate the approach. Key hyperparameters—such as the number of samples generated per iteration and PPO settings—are not systematically analyzed, and no ablation study is provided. It would also be valuable to see a comparison of results with and without the RL approach. The current experimental section leaves significant space unexplored, making it hard to discern the model’s strengths and weaknesses.\n\nIt’s also unclear how the RL component is applied: Is the policy trained concurrently with sample generation, or is it established before the active learning phase? If the reward function evolves as new samples are generated, this could introduce non-stationarity, which would impact performance. Further clarification on this point is essential.\n\nRegarding performance, the results do not clearly outperform existing methods. Notably, the learning curves for the proposed method (Signal) appear to extend longer than others. This might be because other methods are restricted to samples in the original dataset, while Signal can generate an infinite number of examples. However, this is not entirely clear, as baseline methods don’t achieve fully supervised performance, which raises questions about their comparison criteria." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As discussed in the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The method addresses the limitations of traditional pool-based methods by generating informative synthetic data instances, this could be beneficial when even unlabeled data is scarce.\n2. The paper is mostly well-organized.\n3. The acquisition function that combines both informativeness and relevance makes sense." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Self-Informed Generative Active Learning (SIGnAL), a RL-based approach for query-synthesizing active learning. SIGnAL generates synthetic data instances to enrich the data pool, especially when access to diverse, unlabeled real data is limited. Experimental results show SIGnAL’s performance advantage over traditional pool-based methods in text classification tasks, particularly when the data pool is very small." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The proposed SIGnAL does not generate the most informative/beneficial data point for labeling, instead, it still requires a traditional acquisition function to make the selection. I think this is a critical weakness of this paper. From my understanding, generative AL should not only generate data samples, but more importantly generate the most informative samples.\n2. The setting of this paper is kind of niche, most areas that benefit from AL have an abundant amount of unlabeled data, if SIGnAL simply generates more unlabeled data, I don't see it being very useful in practice.\n3. The acquisition (relevance and informativeness) is quite simple, relevance is simply the distance, with informativeness directly taken from CAL.\n4. The experiments are very limited. The only results are in Figure 3, with limited datasets, baselines, and the improvements are hardly distinguishable in my opinion. \n\nIn general I think this paper presents an interesting direction, but the details need a bit more refinement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In the equation of line 186, the distribution shouldn't be p_z because there is synthetic data while p_z is defined as real data, right?\nLine 284: missing space between the and generate" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The idea is interesting and novel, the paper is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the Self-Informed Generative Active Learning (SIGnAL) framework, which generates synthetic data to improve active learning when real data is scarce. Using reinforcement learning, SIGnAL’s generator produces informative data guided by a reward system, ensuring relevance and usefulness for model training. An acquisition function assesses this data’s informativeness and relevance, optimizing the generator’s outputs. Experiments on text classification validate SIGnAL’s effectiveness, particularly in data-limited scenarios, offering a cost-efficient solution." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experiments are far from sufficient for a top-tier conference, now there is only overall performance but lack of ablation study and analysis.\n2. 
As a method that combines active learning and synthetic data generation from LLM, the authors only compare it with active learning approaches; I think they should also compare the proposed method with synthetic data generation without active learning\n\nMissing related work:\n\n[1] Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias\n[2] ZeroGen: Efficient Zero-shot Learning via Dataset Generation\n[3] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The PPO reinforcement learning method is utilized in this paper to optimize AL strategies for larger rewards. Could you provide a detailed explanation of the state and action settings in this RL scenario? \n\nAdditionally, when considering other RL-based active learning strategies, is it worth considering adopting the classifier's accuracy as an additional reward after generating samples?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "By utilizing reinforcement learning, the approach effectively addresses the challenges posed by the dynamic and delayed nature of informativeness, treating instance informativeness as the reward signal to optimize the generative model. The method incorporates an acquisition function that evaluates both traditional informativeness and the relevance of data instances, transforming these evaluations into rewards during training." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper leverages a RL policy to guide data generation, allowing the generator to receive rewards that encourage the creation of more informative instances. Additionally, it introduces an acquisition function that evaluates both informativeness and relevance, seamlessly transforming this evaluation into rewards for optimizing the generator." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper provides a detailed analysis of the challenges faced by pool-based active learning methods; however, it lacks an introduction to existing query-synthesizing methods and a distinction between the proposed method and existing synthesizing-based methods, such as “LADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning” and “When Active Learning Meets Implicit Semantic Data Augmentation”. However, synthesizing-based methods are one of the primary categories in the active learning scenarios.\n\nThe PPO reinforcement learning method is utilized in this paper to optimize active learning strategies for larger rewards. Could you provide a detailed explanation of the state and action settings in this reinforcement learning scenario? 
Additionally, is it worth considering adopting the classifier's accuracy as an additional reward after generating samples?\n\nRegarding the experiments: The baseline methods adopted in the paper are all pool-based active learning methods. To further validate the effectiveness of your method, it is suggested to compare with synthesizing-based methods as well. Moreover, according to the experimental setup, synthesizing-based methods annotated twice as much data, which could account for their superior performance. It is recommended to include ablation studies to provide additional explanations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What is the ratio of generated samples to actual unlabeled data queried at each iteration in the active learning process? If generated samples are only queried after unlabeled data, their value seems minimal.\n2. How were the generated samples human-labeled? It’s likely some generated samples are incoherent, making labeling challenging. The paper includes no information about the human labeling process." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed approach outperforms other techniques, such as CAL, BERTKIM, and BADGE.\n2. 
It allows performance gains by querying generated data after exhausting unlabeled data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an active learning approach for NLP tasks utilizing a generative model. It incorporates KL divergence, as proposed in CAL, to retrieve informative samples and uses inter-sample distance to avoid querying unrelated samples. The method outperforms comparable approaches and can continue the active learning process even without access to further unlabeled data by leveraging generated samples. However, the limited datasets and reliance on LLM raise questions about the necessity of the approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The utility of this approach is ambiguous. Active learning aims to efficiently query valuable samples in low-data regimes, particularly in areas with difficult labeling requirements, such as medical or legal fields.:\n- 1.1. The paper only presents general datasets (SST-2, AGNEWS, QNLI) focused on tasks like sentiment analysis and topic classification, where active learning might be unnecessary. Given that the LLM itself can achieve higher performance on such tasks, training an additional classifier via active learning seems contradictory. For this approach to be useful, the active learning-trained model should outperform the LLM.\n- 1.2. In this regard, domain-specific datasets, such as PubMed or legal datasets should be added. However, studies suggest that even specific tasks can achieve performance gains without active learning (or human labeling) through LLMs [1], raising questions about this method's utility compared to such approaches.\n2. The number of datasets and class diversity are limited, with only three datasets and two or four classes per dataset. 
Include datasets with more classes, like DBPEDIA with 14 classes, to address whether the proposed method's benefits persist as class counts increase.\n3. The main paper lacks a definition of $\\Phi$, which can only be inferred as a text encoder model.\n4. No ethics statement is provided. An ethics statement and societal impact are mandatory for ICLR.\n5. Hyperparameters are not disclosed. Without code submission, at least hyperparameter settings or a code statement should be included.\n6. Time consumption details are missing. Given the method's reliance on LLMs and RL and the continuous dataset expansion, it likely requires considerably more time than alternative methods. Please add this information.\n\n[1] Kim et al., \"SELF-EXPERTISE: Knowledge-based Instruction Dataset Augmentation for a Legal Expert Language Model\"" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose the Self-Informed Generative Active Learning (SIGnAL) framework which actively generates and selects data instances for annotation and downstream model training." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024selfinformed,\ntitle={Self-Informed Generative Active Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0YkZe9nwiC},\nnote={under review}\n}" }, "abstract": { "value": "Active learning has been a cost-efficient approach to obtaining high-performance AI models with fewer selective annotations. In scenarios where the acquisition of original unlabeled data poses significant challenges, active learning harnessing synthesized data instances is more promising than traditional pool-based methods. In this paper, we propose the Self-Informed Generative Active Learning (SIGnAL) framework as an effective solution to actively generate and select data instances for annotation and downstream model training. 
In SIGnAL, we propose to guide the data generation based on a reinforcement learning policy, where the generator is self-informed by the reward to generate more informative instances. In addition, we introduce an acquisition function that measures both the informativeness and relevance of instances. Such an acquisition function can be seamlessly transformed into the reward for generator optimization. Our experiments on the text classification task validate the effectiveness of our framework, especially when the original data scale is limited." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Active Learning", "Large Language Model", "Synthetic Data", "Reinforcement Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/800a939160f39a4a63ad3c3199f90eb6183f3212.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Self-Informed Generative Active Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0YxvqG9SsJ
Offline Model-Based Skill Stitching
main
Active
Skill stitching;Offline reinforcement learning;Model-based planning
reinforcement learning
3;3;5
4;4;3
2;2;2
2;2;2
2;2;3
3.666667
3.666667
2
2
2.333333
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What if the model learns inaccurately in a complex environment?\n\n2. Can you use the normalized score for the experimental results?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is written well. The method is easy to follow.\n2. This work is evaluated on various domains." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the development of agents capable of addressing long-horizon tasks through offline model-based reinforcement learning (RL). While current RL methods excel at learning individual skills, they struggle with integrating these skills to accomplish extended tasks due to the mismatch between the termination of one skill and the initiation of another, resulting in distribution shifts. The authors propose an offline approach to skill stitching, leveraging aggregated datasets from various skills to train a dynamics model that can generalize across different skills. This model, along with an ensemble of offline dynamics models and value functions, is used to stitch adjacent skills through model predictive control (MPC). 
To address the overestimation issues common in offline model learning, a conservative method is introduced to penalize uncertainty in model and value predictions. The study's experimental results demonstrate the effectiveness of this approach over baseline methods in offline settings across multiple benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The originality of this work is quite limited. The idea of stitching skills based on value functions is not new; many papers have proposed similar approaches. For example, PEX [1].\n2. A large number of baseline algorithms are missing. For example, OPAL [2] and LPD [3].\n\n\n[1] Zhang, Haichao, Wei Xu, and Haonan Yu. \"Policy expansion for bridging offline-to-online reinforcement learning.\" arXiv preprint arXiv:2302.00935 (2023).\n\n[2] Ajay, A., Kumar, A., Agrawal, P., Levine, S., & Nachum, O. (2020). Opal: Offline primitive discovery for accelerating offline reinforcement learning. arXiv preprint arXiv:2010.13611.\n\n[3] Yang, Y., Hu, H., Li, W., Li, S., Yang, J., Zhao, Q., & Zhang, C. (2023, June). Flow to control: Offline reinforcement learning with lossless primitive discovery. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 9, pp. 10843-10851)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- I wonder if the value function properly evaluates states that have not been visited (during stitching). As the value functions for each skill are learned distinctly, how can the value evaluation in the stitched space be accurate and reliable?\n- How might the proposed method be adapted to handle low-coverage offline datasets? \n- I wonder if the authors considered any techniques to reduce the computational burden of MPC in continuous or stochastic environments?\n- What potential strategies could be considered for improving generalization or adaptability to dynamic environments within the constraints of offline learning?\n\n- Minor Typos:\n\nline 97: over-estimate → overestimate, to match the usage elsewhere in the paper.\n\nline 215: continous actions space → continuous action space\n\nline 257: T(\\cdot|s_t,a_t) → T_{\\phi}(\\cdot|s_t,a_t)" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed offline skill stitching method is straightforward yet effective in certain environments with long-horizon tasks, enabling task completion by sequencing learned skills from offline datasets.\n- Skill stitching offers a practical approach in hierarchical reinforcement learning, addressing challenges in learning tasks composed of multiple sub-tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work explores a model-based approach for offline learning of skills and their sequential stitching using only individual skill datasets, without relying on online interactions with the environment. 
Unlike existing skill stitching techniques based on online reinforcement learning, this approach utilizes offline data to decompose long-horizon tasks into manageable skills that can be executed sequentially. The focus is on training a dynamics model with aggregated skill datasets, enabling effective model-based planning and incorporating conservative optimization objectives to ensure robust transitions between skills during planning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of novelty: The proposed skill stitching method of evaluating states for stitching using the value function is not novel; it is a fundamental approach used in existing offline RL for trajectory stitching [1, 2]. A comparison with these existing offline trajectory stitching methods is required. \n\n[1] Stitching Sub-trajectories with Conditional Diffusion Model for Goal-Conditioned Offline RL (AAAI 2024)\n\n[2] Model-based Trajectory Stitching for Improved Offline Reinforcement Learning (NeurIPS 2023)\n\nThe work below also uses model-based rollouts (planning) for skill-based task planning in offline settings, similar to the proposed method.\n\n[3] Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks (IJCAI 2024)\n\n-\tThe proposed method using MPC operates by sampling possible actions and evaluating the value of the resulting states. For continuous action spaces, it requires extensive sampling and evaluation to determine the best outcome. Furthermore, in environments with stochasticity, the MPC optimization can be required at each attempt, leading to significant inefficiencies in time complexity.\n\n- The performance gain in the Kitchen appears minimal, raising questions about whether the proposed method is effective in continuous action space settings. In the Maze Runner, the discrete action space makes the MPC method feasible. 
However, in complex continuous tasks like the Kitchen task, the value function evaluation may be unreliable, requiring MPC to extensively search the possible action space, which may explain the minimal performance gain observed.\n\n- The method may not generalize well across diverse environments, especially those with dynamic or unpredictable conditions, as it relies solely on offline data without any consideration of real-time adaptability.\n\n- The approach's effectiveness is highly dependent on the diversity of the offline datasets, as the method relies on the dynamics model learned from the aggregated offline datasets." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. For the maze experiments, can you compare to offline goal-conditioned RL, for example goal-conditioned IQL?\n\n2. For the MF-stitching baseline, do you train the model-free stitching policy for each two adjacent skills?\n\n3. How does the method perform for each skill permutation? Is it better under some permutations and worse in others?"
The idea of using a model and planning to stitch the skills is interesting and seems a good direction for further research.\n\n3. The method results on the maze are strong compared to the baselines.\n\n4. The results are better than baselines in general." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces an algorithm for skill stitching from offline data,the algorithm has two phases, an offline training phase where each skill is extracted from an offline data that contains trajectories representing the skill. And a test phase, where the dynamics model is used for MPC-based skill stitching guided by the value function. The experiments demonstrate the performance of the method in comparison with some baselines; the ablations show that the quality of the data can have a significant effect on the performance of the skill changing as well as the diversity of transitions in the training distribution." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The assumption of the availability of a dataset for each skill is a strong assumption, is there a way to relax it? For example learning diverse skills from one offline dataset? Is this possible and is there any related work that focus on this problem?\n\n2. Training each skill separately via offline RL seems expensive and time-consuming.\n\n3. For some hyperparameters it is not clear to me they have been chosen, for example the maximum steps of skill execution seems very problem dependent.\n\n4. The method does not seem effective on more complicated tasks (for example in table 2 the method fails in accomplishing more than one skill regardless of the number of skills in the task), but it is still better than the baselines." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024offline,\ntitle={Offline Model-Based Skill Stitching},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0YxvqG9SsJ},\nnote={under review}\n}" }, "abstract": { "value": "We study building agents capable of solving long-horizon tasks using offline model-based reinforcement learning (RL). Existing RL methods effectively learn individual skills. However, seamlessly combining these skills to tackle long-horizon tasks presents a significant challenge, as the termination state of one skill may be unsuitable for initiating the next skill, leading to cumulative distribution shifts. Previous works have studied skill stitching through online RL, which is time-consuming and raises safety concerns when learning in the real world. In this work, we propose a fully offline approach to learn skill stitching. Given that the aggregated datasets from all skills provide diverse and exploratory data, which likely includes the necessary transitions for stitching skills, we train a dynamics model designed to generalize across skills to facilitate this process. Our method employs model predictive control (MPC) to stitch adjacent skills, using an ensemble of offline dynamics models and value functions. To mitigate overestimation issues inherent in models learned offline, we introduce a conservative approach that penalizes the uncertainty in model and value predictions. Our experimental results across various benchmarks validate the effectiveness of our approach in comparison to baseline methods under offline settings." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Skill stitching", "Offline reinforcement learning", "Model-based planning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4da014a566a23e479bb6c134d88301e9dea08e43.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Offline Model-Based Skill Stitching" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0ZcQhdyI3n
LSH Tells You What To Discard: An Adaptive Locality-Sensitive Strategy for KV Cache Compression
main
Active
kv cache;locality-sensitive hashing;compression
foundation or frontier models, including LLMs
1;3;3;5;5
4;4;4;4;4
2;2;2;2;2
1;2;1;3;2
1;1;2;3;3
3.4
4
2
1.8
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See questions above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Uses LSH to approximate attention computation for eviction (if you compare to H2O / Scissorhands)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The idea is to reduce the KV cache by evicting and permanently dropping tokens at each position in the query. The heuristic used is to evict the lowest attention-scored keys (which is essentially similar to H2O / Scissorhands, which preserve the top attention-scored keys). The difference is to use LSH to do an approximate score ranking to avoid exact softmax computation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Novelty: The novelty is limited.\n- H2O / Scissorhands are known to not perform well on LongBench. Can we see some results on LongBench, e.g. the passage retrieval datasets?\n- Missing baselines -- the only baseline used is the L2 norm. \n- Limited evaluation. Can we get more results on LongBench at different budgets with standard baselines?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Line 52: \"However, this L2 dropout strategy only performs well on long-context retrieval tasks. It is specialized to retain only those tokens with the highest attention\" -- be more specific. Why is this?\n\nLine 57: \"wide variety of tasks?\" -- how do you define this?\n\nLine 145: Formally for our setup, dist_d(x, y) := cos θ_{x,y}; here it is more a measure of cosine similarity than distance. Misleading, perhaps?\n\nLine 419: did you mean \"LSH dimension does significantly impact performance\" --> does not?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Studies an important problem of much significance in today's LLM era. \n\nPresents a simple yet elegant approach.\n\nDoes good evaluations on a range of use-cases." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents new methods to accelerate inference of auto-regressive transformers used in most modern-day decoder-based LLM architectures. 
To speed up the attention calculation, most systems have a cache which remembers the keys and values of commonly used tokens, to avoid recomputing them for each token decoding. However, such a cache, for it to be performant at inference time, must scale quadratically with the sequence length, and linearly in the number of layers and attention heads. \n\n(Authors: please explain why for the uninformed reader -- this is stated in the intro, but without explanation)\n\nIn this paper, the authors present an LSH-based method to evict far-away key tokens. Indeed, suppose we have an LSH which gets a binary encoding of any vector using the random hyperplane projection method (SimHash). \nThen, we can first pre-process and compute the Hamming distance between the query token and all key tokens, and evict the farthest one, as this is the one least likely to affect the overall attention softmax operation.\n\nThey implement this simple scheme and provide a range of quality vs. cache size metrics comparing with one other KV-cache policy called L2-Dropout Cache, which drops the keys based on their magnitudes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Why is there no timing experiment, since that will be one key benefit of caching?\n\nWhy only restrict to attention-free cache policies and specifically only compare with the L2-dropout baseline?\n\nConceptually, what is the key difference with Reformer? I have not read that paper but you mention in passing that it is using LSH and SimHash also. Is which cells to evaluate vs. what to evict the only difference between Reformer and your work? If so, worth comparing with Reformer also in plots?\n\nWhat is the rationale of the policy? Why can't a token just evicted become relevant again? I guess there is some language-based \"locality of reference\"?\n\nDo an ablation of the hardcoded bits, i.e., you mention you hard-cache the first few and last few tokens. 
What is the contribution of this to your overall success metrics?\n\nIt is not clear how the eviction policy aggregates the Hamming distances over time steps. Is it based only on the most recent time step, or on some more complex rule?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Could you provide a plot showing the distortion error introduced by LSH compression across different levels of compression? Specifically, how does the approximation quality change as more tokens are evicted or as the quantization parameters are adjusted?\n\n- Given that LSH-E’s efficiency largely depends on its CUDA implementation, can you elaborate on any specific optimizations made within the CUDA code?\n\n- Could you clarify how LSH-E handles multi-head attention? Specifically, is each head processed separately with its own LSH compression, or is there a shared mechanism across heads?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The use of theoretical approaches such as SimHash, a highly efficient hashing method, is a valuable aspect of this work, contributing to both the effectiveness and scalability of the proposed method." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces LSH-E, an algorithm for compressing the key-value (KV) cache in large language models (LLMs) using locality-sensitive hashing (LSH). Despite the availability of prior work—including KDEformer, Hyperattention, SubGen, and QJL—that similarly utilizes LSH for efficient attention and memory management, these related efforts are not cited here. LSH-E leverages Hamming distance calculations in a binary space following a Quantized Johnson-Lindenstrauss (JL) transform (SimHash) to identify and evict tokens with low relevance to the current query, resulting in memory savings. This pre-attention approach provides a lightweight, GPU-efficient solution for long-context tasks, although its effectiveness ultimately depends on the algorithm’s CUDA implementation efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The term \"novel\" should not be used for LSH in this context, as it is not a new approach and has appeared in prior work. Specifically, the methods used in KDEformer, Hyperattention, QJL, and SubGen demonstrate significant overlap, yet these works are not cited here, despite their relevance.\n\n- The experimental setup lacks comprehensiveness; comparisons with alternative methods like H2O, SubGen, and other established baselines should be included to provide a more robust evaluation.\n\n- The datasets used in the experiments are not sufficiently large for evaluating performance in long-context scenarios. Given that these methods target long-sequence processing, experiments should ideally use token sizes over 50,000. 
LongBench or other large-scale datasets would be more appropriate for a thorough evaluation.\n\n- Additionally, runtime metrics should be reported to assess the efficiency of token generation and to substantiate the computational benefits claimed in the paper.\n\nKDEformer : https://proceedings.mlr.press/v202/zandieh23a.html\nHyperAttention : https://arxiv.org/abs/2310.05869\nSubGen : https://arxiv.org/abs/2402.06082\nQJL : https://arxiv.org/abs/2406.03482" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Strong Points\n----\nS1. The problem of the paper is well-motivated. \n\nS2. The proposed algorithm is simple and clear with illustrative example.\n\nS3. The proposed method outperforms the baseline L2." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method that uses LSH to perform kv cache eviction. The provided experiments show that the proposed method outperforms the baseline." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weak Points\n----\nW1. 
Important related studies and baselines are missing:\nSinghania, P., Singh, S., He, S., Feizi, S., & Bhatele, A. (2024). Loki: Low-Rank Keys for Efficient Sparse Attention. arXiv preprint arXiv:2406.02542.\nTang, J., Zhao, Y., Zhu, K., Xiao, G., Kasikci, B., & Han, S. (2024). Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference. arXiv preprint arXiv:2406.10774.\n\nW2. The key measures of the targeted task should be more accurate inference with lower memory footprint and latency. I do not agree with the methodology of not comparing with other \"non attention-free\" methods.\n\nW3. The presentation of the experiments needs to be improved: there is a lack of discussion and intuition in the experiment analysis. For example, why does LSH-E outperform Full in Figure 4a; why does LSH-E become worse than L2 after 50% cache budget in Figure 4b? We have many subsubsections in the experiments, but most of their content merely restates the figures and results, with no discussion of why we would observe those results.\n\nW4. The execution time of the proposed system is missing.\n\nW5. The discussion of the error introduced by the LSH is not included. I wonder: if we used exact cosine similarity to evict the cache instead of LSH, what would the accuracy, latency, and memory usage be?\n\nW6. In the supplementary materials, we see more experiments with more baselines that are better than L2. I wonder why the authors do not include them.\n\n\nPresentation\n----\nP1. Line 180 \"heavy hitters' -> ``heavy hitters''\nP2. The axis captions of the figures are too small to be seen.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Besides the problems mentioned in Weaknesses,\n1. Does this method work well with quantization (KIVI, AWQ)?\n2. How much does LSH-E increase first-token latency?\n\nThese two questions can be left for future work." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper applies novel LSH methods to KV cache problems. The motivations and reasons why LSH can produce good performance are well discussed. Besides this, a static compression rate of 30%-70% is also helpful for many LLM serving systems, given that accuracy is preserved." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a KV cache compression method based on LSH and shows that LSH-E can achieve good downstream performance on various downstream tasks with a 30%-70% compression ratio." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is no comparison with other static KV compression baselines, including H2O, StreamingLLM, and SnapKV. If this problem is solved, I will raise my score.\n2. Only the memory compression ratio is shown. 
I will ask for the wall clock speedups (latency or throughput)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a token eviction strategy that uses locality-sensitive hashing to locate low-attention tokens without computing attention." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024lsh,\ntitle={{LSH} Tells You What To Discard: An Adaptive Locality-Sensitive Strategy for {KV} Cache Compression},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0ZcQhdyI3n},\nnote={under review}\n}" }, "abstract": { "value": "Transformer-based large language models (LLMs) use the key-value (KV) cache to significantly accelerate inference by storing the key and value embeddings of past tokens. However, this cache consumes significant GPU memory. In this work, we introduce LSH-E, an algorithm that uses locality-sensitive hashing (LSH) to compress the KV cache. LSH-E quickly locates tokens in the cache that are cosine dissimilar to the current query token. This is achieved by computing the Hamming distance between binarized Gaussian projections of the current token query and cached token keys, with a projection length much smaller than the embedding dimension. We maintain a lightweight binary structure in GPU memory to facilitate these calculations. Unlike existing compression strategies that compute attention to determine token retention, LSH-E makes these decisions pre-attention, thereby reducing computational costs. Additionally, LSH-E is dynamic -- at every decoding step, the key and value of the current token replace the embeddings of a token expected to produce the lowest attention score. We demonstrate that LSH-E can compress the KV cache by 30\\%-70\\% while maintaining high performance across reasoning, multiple-choice, and long-context retrieval tasks." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "kv cache", "locality-sensitive hashing", "compression" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b53389ae75f81b08c5d6441011b7d8c70db2349c.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/5aae791fb7a52587e5f545eb3922e3a5f4031416.zip" }, "title": { "value": "LSH Tells You What To Discard: An Adaptive Locality-Sensitive Strategy for KV Cache Compression" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
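The LSH-E record above (abstract and reviews) repeatedly describes the same core mechanism: binarize random Gaussian projections of the current query and the cached keys (SimHash), then evict the cached key whose binary code is farthest in Hamming distance from the query's code, i.e. the most cosine-dissimilar token. A minimal NumPy sketch of that idea follows; it is illustrative only, not the paper's implementation, and all dimensions, seeds, and helper names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simhash(vectors, planes):
    # SimHash: one bit per random hyperplane, set by the sign of the projection.
    return (vectors @ planes.T) > 0  # boolean codes, shape (n, num_bits)

def eviction_candidate(query, cached_keys, planes):
    # Nominate the cached key whose code is farthest (in Hamming distance)
    # from the query's code -- the key most cosine-dissimilar to the query.
    q_code = simhash(query[None, :], planes)[0]
    key_codes = simhash(cached_keys, planes)
    hamming = (key_codes != q_code).sum(axis=1)
    return int(np.argmax(hamming))

embed_dim, num_bits, cache_size = 64, 16, 100  # toy sizes, not from the paper
planes = rng.standard_normal((num_bits, embed_dim))
keys = rng.standard_normal((cache_size, embed_dim))
# A query nearly parallel to cached key 7 should NOT nominate key 7 for eviction.
query = keys[7] + 0.01 * rng.standard_normal(embed_dim)
evict = eviction_candidate(query, keys, planes)
assert evict != 7
```

Note that the decision is made pre-attention: only the `num_bits`-wide binary codes and their Hamming distances are touched, never a softmax over the full `embed_dim`-dimensional keys, which is the cost saving the reviews discuss.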
0Zot73kfLB
GVFi: Learning 3D Gaussian Velocity Fields from Dynamic Videos
main
Active
Dynamic Reconstruction;Physics;Motion Extrapolation
applications to computer vision, audio, language, and other modalities
3;5;6;6
3;5;3;3
3;2;3;3
3;2;3;3
2;2;3;3
5
3.5
2.75
2.75
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The performance and visual results of DefGS and GVFi appear very similar. Could the authors specify scenarios where the translation-rotation dynamics module offers clear advantages?\n2. Could quantitative results for object segmentation be provided, and how does GVFi compare to models that rely on human annotations for this task?\n3. Could the authors highlight the novelty compare to DefGS?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. It is novel to represent the 3D points as particles, which is a well-established concept in robotics. This representation could open up further research topics to improve dynamics modeling.\n2. This model does not rely on human annotations for motion estimation. It can autonomously group meaningful objects based on motion patterns without requiring any labeled data.\n3. The authors provide both quantitative and qualitative results across multiple datasets, demonstrating GVFi’s improvements in both interpolation and extrapolation tasks." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces GVFi, a novel approach for modeling 3D scene geometry, appearance, and dynamics from multi-view images without the need for human annotations, such as bounding boxes or segmentations. The authors highlight that previous 3D Gaussian Splatting models struggled to capture the underlying motion physics of dynamic scenes. In contrast, GVFi treats 3D points as particles in space, each with a learnable size and orientation, enabling the model to learn particle rotation and translation to represent a dynamic system effectively. Experimental results on three diverse datasets show that GVFi significantly outperforms prior 3D Gaussian Splatting models on both interpolation and extrapolation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This model builds upon DefGS (Yang et al., 2024), with its main contribution being the translation-rotation dynamics system module. However, the novelty of this addition may be somewhat limited.\n2. The performance of DefGS (Yang et al., 2024) and GVFi is quite similar, and there appears to be no significant visual difference between the outputs of the two models. Could the authors clarify specific scenarios where the translation-rotation dynamics system module leads to performance improvements?\n3. There are no quantitative results for object segmentation. Would it be possible to evaluate this and compare it to models that rely on human annotations?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to weaknesses above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "[+] The paper is well-organized.\n\n[+] The proposed methodology of predicting translation rotation dynamics is straight-forward and well-presented.\n\n[+] The emerged behavior of rigid parts through motion clustering is interesting and show be highlighted further.\n\n[+] Extensive empirical evaluation on multiple benchmarks demonstrates superior performance, along with proper ablation study and demo video in supplementary." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors extend multi-view dynamical scene modeling by predicting motion physics parameters without additional supervision. Specifically, they directly predict a translation rotation dynamics system for each 3D particle, which gives the model capabilities in future predictions of trajectories and rigid part discovery via clustering. Quantitative and qualitative results show superior performance against prior arts on three existing and one proposed benchmarks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "[-] My main concern about this work is the assumption made (L219) that \"there is no additional force involved after $t=0$.\" Although the author give a justification that \"a rolling ball suddenly exploding is not learnable,\" I am not sure if the scope of the research is sufficiently broad given this constraint:\n- First, while some moveable objects cannot move of their own volition, many dynamical (interesting) objects do have the ability to move on their own (e.g. humans, vehicles, animals, etc). By assuming no additional forces after $t=0$, the formulation assumes the presence of no dynamical objects, which conflicts with some of the qualitative results (whale, skater and van). Are we simply modeling these objects in a time window where no force is applied? It would be great if the authors can clarify on how the assumption impacts the modeling of self-propelled objects.\n- Second, due to the strict assumption made about applied forces, the dynamical scene valid for this method would be rather simple and cannot contain more complex motion with evolving accelerations. The authors should elaborate on the types of motion that can / cannot be handled by GVFi.\n- Finally, since I do not work on this topic, I am not sure how significant is my concern above and I am happy to change my recommendation as I await to read other reviewer’s comments and the author's response to my review." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Based on the methodology, there seem to be three possible approaches for interpolation rendering: (1) directly using $f_{defo}$ to predict the deformation at the given time $t$, (2) progressively calculating the Gaussian deformation at the given time $t$ from time 0 using the motion parameters predicted by $f_{trd}$, or (3) following the steps described in lines L261-L269. Which approach was used in the experiments? Are the results consistent across these three methods?\n2. For extrapolation rendering according to lines L261-L269, it seems feasible to use either the second or third approach from question 1. Which method was actually used by the authors? If the third approach was used, how does it perform over longer extrapolation periods? Could the authors provide visual results for extrapolations that extend beyond the time span covered in the dataset?\n3. The choice of baseline methods for comparison appears limited. For a comprehensive evaluation, it would be beneficial to compare against state-of-the-art methods in dynamic scene reconstruction, such as 4D-GS[2] and more recent work like E-D3DGS [3], which both have architectures similar to Deformable3DGS but differ in their motion representation. Could the authors verify if the proposed Translation Rotation Dynamics System can be integrated into these methods and whether it would yield similar performance gains?\n4. The authors claim that their framework is a general approach for modeling motion physics in complex dynamic 3D scenes. However, the datasets used, with only 60 frames in total, limit the complexity and extent of motion. 
Could the authors validate this claim by testing on more challenging synthetic and real-world datasets, such as the ParticleNeRF and PanopticSports datasets, to provide a more comprehensive evaluation of the framework's effectiveness on complex scenes?\n5. In the ablation study, the authors provide a rationale for their choice of $\\delta t$, which is somewhat reasonable. However, this conclusion is based on results from only one dataset, which may not be sufficient, as each dataset could exhibit different motion characteristics. Could the authors clarify how to select an appropriate $\\delta t$ in practice across diverse datasets?\n6. The experimental details are insufficient, particularly regarding training time, required resources, storage size, and rendering speed. Could the authors provide more comprehensive information on these aspects?\n7. Please ensure that all abbreviations and technical terms are clearly defined, with full explanations and necessary citations. In the related work section, it would be helpful to explicitly clarify the differences from relevant works wherever possible.\n\n[2] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4D Gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.\n\n[3] Jeongmin Bae, Seoha Kim, Youngsik Yun, Hahyun Lee, Gun Bang, and Youngjung Uh. Per-Gaussian embedding-based deformation for deformable 3D Gaussian splatting. In Proceedings of the European Conference on Computer Vision (ECCV), 2024." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. 
Modeling the motion of Gaussians through a Translation Rotation Dynamics System grounded in classical mechanics, resulting in a concise and conceptually elegant framework with solid mathematical and physical foundations.\n2. Introducing an effective method to train the motion parameters of the Translation Rotation Dynamics System, enabling the accurate estimation of translation and rotation dynamics for each particle in the scene.\n3. By explicitly learning motion parameters under classical mechanics, enabling effective extrapolation to unobserved frames and presenting potential for generation tasks that require plausible future frames in dynamic 3D scenes.\n4. The proposed approach is validated on two tasks, demonstrating superior performance compared to previous methods, highlighting its effectiveness in modeling motion dynamics in 3D scenes." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces GVFi, a framework for modeling the motion physics of complex dynamic 3D scenes using multi-view RGB videos without requiring additional annotations such as object shapes, types, or masks.\nBuilding on Deformable3DGS, GVFi incorporates constraints based on the laws of classical mechanics to guide motion predictions, ensuring that the Gaussian deformation estimated by the MLP aligns more closely with physical principles. By assuming that motion adheres to the laws of classical mechanics and explicitly learning the associated motion parameters, GVFi is capable of performing effective extrapolation rendering, allowing it to predict frames beyond the observed time span. Experimental results show that GVFi significantly outperforms existing methods, particularly excelling in future frame extrapolation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The contributions of this work are somewhat incremental, as most of the methodological design heavily overlaps with the baseline method, Deformable3DGS [1]. The key difference lies in the incorporation of dynamical principles, primarily to enable extrapolation capabilities rather than introducing fundamentally novel approaches.\n2. The proposed motion modeling framework is overly restrictive, relying on a strong assumption of no external forces, disregarding energy transfer processes, and lacking the ability to handle non-rigid or nonlinear motion. These limitations significantly reduce the model's applicability to real-world physics.\n3. Due to its reliance on idealized assumptions and limited scope, the model struggles to handle complex, real-world motion dynamics where varied forces, interactions, and non-rigid behaviors are prevalent, limiting its utility for practical applications in diverse environments.\n\n[1] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction. CVPR, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How long are the scenes in each dataset? 
Are they really long enough to meaningfully challenge the assumption of second order expansion?\n- The NVIDIA Dynamic Scene Dataset (Yoon 2020) contains many dynamic scenes in the 2020 paper, but this paper claims \"it consists of two real-world dynamic 3D scenes\". What are those scenes?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Method at its core is quite simple (this is a good thing)\n - Learning a second order Taylor series expansion of the full trajectory\n - The quantitative results seem good, even if only minor improvements in a number of cases" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method, \"GVFi\", that tackles the problem of estimating dynamic 3D scenes.\n\nBroadly speaking, GVFi\n - Uses an off-the-shelf method (3DGS) to compute Gaussian splats in a canonical frame\n - Uses an off-the-shelf method (\"Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction\" Yang et al., CVPR 2024) to estimate a deformation field over position, rotation, and scale of each Gaussian as a function of time\n - Uses these as inputs to then estimate the 3D Gaussian's motion \n\nImportantly, these Gaussians are parameterized as rotation around a moving rotation centerpoint, and this centerpoint's motion is described entirely by an initial position, velocity, and acceleration estimate. These estimates are then optimized against the flow field as noisy ground truth and training observation reconstruction losses."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Second order taylor series expansion seems quite limiting for arbitrary motion, or motion over non-trivial time horizons\n - Assuming I am interpreting the paper correctly, experiments seem to be only over short (~1 second) time horizons, which don't seem like they would challenge this assumption\n - Presentation quality is *extremely* poor\n - Core concept is quite simple, but it's heavily obfuscated for no apparent reason. It could be explained in 1 paragraph.\n - Core concepts seem poorly motivated; physics priors are common, but why only a second order expansion? Is this really a reasonable assumption in practice? There needs to be more motivation to this choice and more careful analysis of its limitations\n - Figure 1 and 2 are almost the same thing but not very informative. A better figure would be demonstrating the taylor series expansion of a single gaussian's trajectory\n - The math in section 3 does not feel like it was put there to be informative, but instead to intimidate the reader; after climbing through the notation its basically just saying to compose offsets together to estimate motion. If the authors feel this notational exercise is needed (don't think it is), it should go in the appendix and the main paper should have far more explanatory figures.\n - Ablations do not seem to address the core contribution, which is the assumption of the second order expansion --- what if you only do a first order expansion? Can you attempt to extend this to third order? They briefly mention replacing it with an MLP, but minimal details are provided.\n\nI'm of the opinion that the paper has a neat idea but its presentation needs to be dramatically overhauled --- its assumptions need to be clearly stated and examined as reasonable or not, and it needs to have experiments where the method is pushed. 
Looking at the qualitative results, these datasets contain very simple partwise rigid motion, and the Taylor series expansion is a nice trick to force smooth non-shattering motion, but it comes at the cost of generality --- nowhere does this seem to be addressed, considering the sometimes marginal performance improvements over far more flexible prior methods.\n\nNit:\n\"Cononical\" -> Canonical misspelling is rampant" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024gvfi,\ntitle={{GVF}i: Learning 3D Gaussian Velocity Fields from Dynamic Videos},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0Zot73kfLB},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we aim to model 3D scene geometry, appearance, and physical information just from dynamic multi-view videos in the absence of any human labels. By leveraging physics-informed losses as soft constraints or integrating simple physics models into neural networks, existing works often fail to learn complex motion physics, or doing so requires additional labels such as object types or masks. In this paper, we propose a general framework named GVFi to model the motion physics of complex dynamic 3D scenes. The key novelty of our approach is that, by formulating each 3D point as a rigid particle with size and orientation in space, we choose to directly learn a translation rotation dynamics system for each particle, explicitly estimating a complete set of physical parameters to govern the particle's motion over time. Extensive experiments on three existing dynamic datasets and a newly created challenging dataset demonstrate the extraordinary performance of our method over baselines in the task of future frame extrapolation. 
A nice property of our framework is that multiple objects or parts can be easily segmented just by clustering the learned physical parameters." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Dynamic Reconstruction", "Physics", "Motion Extrapolation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/60dd1d0ceeb5487f803d2d90706d75c9f91fa9bb.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/ba93674e0fa0a49347edf567ec9996ff62c59526.zip" }, "title": { "value": "GVFi: Learning 3D Gaussian Velocity Fields from Dynamic Videos" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0a7TRHhhcS
Preference-Driven Spatial-Temporal Counting Process Models
main
Active
choice model;spatial-temporal counting process model
interpretability and explainable AI
3;3;6;6
4;5;4;2
2;2;3;3
2;2;3;2
2;3;3;3
4.5
3.75
2.5
2.25
2.75
-0.688247
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is methodologically sound, with a well-defined approach supported by both theoretical and empirical analyses. The experimental setup is robust, including multiple real-world datasets, and the model's performance is compared against established baselines to highlight its predictive strength. \n\nThe paper is well-structured and provides comprehensive explanations of its key components, including the latent utility functions, mixture-of-experts model, and gating function. Diagrams and formulas aid in clarifying complex concepts, making the model's framework accessible for readers. \n\nThis framework contributes significantly to spatial-temporal modeling, especially in domains where human decision-making drives event occurrences. By enabling a nuanced understanding of preference-driven behavior and offering predictive power, the model has applications in fields like criminology, urban planning, and shared mobility systems." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new spatial-temporal counting process model that integrates choice theory and social intelligence to capture human decision-driven event occurrences, such as crime rates and bike-sharing usage. The core idea is to use latent utility functions to represent diverse decision-making factors and to apply a mixture-of-experts model with a sparse gating function for adaptive selection. The model aims to reveal underlying patterns in counting processes, providing both predictive power and interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The use of mixture-of-experts and the sparse selection mechanism may raise concerns regarding computational scalability when applied to large-scale, high-dimensional spatial-temporal data. While the model performs well on mid-sized datasets, it is unclear if the sparse gating function and multiple experts could handle significantly larger spatial grids or finer temporal resolutions without substantial computational costs. A discussion on computational efficiency or optimization strategies, such as parallelization, would strengthen the model’s applicability to broader scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please answer the questions corresponding to the weaknesses mentioned above.\n1. Why use these small datasets for evaluation? What about the actual value of the proposed method when applied to large-scale datasets?\n2. How do you explain the performance improvement of the MoE module and the relation between it and the overall performance improvement?\n3. How about the performance improvement when we have fine-grained spatial grids?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The studied forecasting problem of spatio-temporal events is very important, interesting, and of high value in the real world. \n2. The presentation is overall good, and the organization makes the paper easy to read and comprehend.\n3. The authors select two representative metrics, aRMSE and MAPE, on which the proposed method achieves the best performance among all these models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is about the prediction problem for spatial-temporal event data generated by humans. The authors introduced a framework integrating choice theory with social intelligence to model and analyze counting processes. The authors further conducted experiments on several real-world spatio-temporal datasets, and empirical evaluation of crime and bike-sharing datasets demonstrated that the proposed model could achieve the best performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The datasets are small, leading to convincing results and conclusions. 
Although the authors have considered three datasets, NYC Crime, Chicago Crime, and Shanghai Mobike, for evaluation, the scales of these datasets are quite limited. There are fewer than 1,000 events in the first two datasets, which makes us wonder whether the proposed method can be used in real-world applications where the dataset may be very large.\n2. The technical contribution of the proposed method is questionable. The proposed method introduces a strategy of MoE, which is widely used in model ensembling and thus limits the contribution of the whole framework. In other words, the performance improvement is very likely attributable simply to adding the MoE module. In short, the proposed solution is a bit straightforward.\n3. Figure 2, Figure 3, and Figure 4 require improvement. Drawing informative and insightful conclusions from these figures is very hard since the grids are coarse-grained." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How do hyperparameter changes, such as learning rate, regularization parameters, and the number of mixture components, affect the model's performance?\n2. In what ways can the model be tested on a variety of datasets with different spatial and temporal characteristics to assess its generalizability?\n3. How can cross-validation and out-of-sample testing be conducted to ensure the model's stability and consistency?"
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Innovative Approach**: The paper introduces an innovative framework that integrates choice theory with social intelligence to model spatial-temporal counting processes. This approach addresses the complex decision-making processes and social factors influencing human-generated event data, such as crime occurrences and bike-sharing activities.\n2. **Interpretable Insights**: The model provides interpretable insights by uncovering latent human preference patterns through utility functions. This feature helps in understanding the underlying mechanisms driving the observed event counts, which is valuable for both academic and practical purposes.\n3. **Predictive Performance**: Empirical evaluations using crime and bike-sharing datasets show that the proposed model achieves good predictive accuracy compared to existing methods. The results indicate that the model can effectively predict event patterns and offer useful insights.\n4. **Theoretical Foundation**: The paper derives a generalization bound that is independent of the number of latent classes, providing a theoretical foundation for the model's robustness and reliability. This theoretical contribution adds to the academic value of the work.\n5. **Practical Flexibility**: The model demonstrates flexibility in handling different types of spatial-temporal data and can incorporate external interventions, making it adaptable to various real-world scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel framework that integrates choice theory with social intelligence to model spatial-temporal counting processes, such as crime occurrences and bike-sharing activities. 
By capturing latent human preference patterns through utility functions, the model aims to provide deeper insights into the mechanisms driving these events. Empirical evaluations using crime and bike-sharing datasets show that the proposed model offers high predictive accuracy and interpretability compared to existing methods, though potential limitations and future research directions are not extensively discussed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Interpretability Validation**: While the model emphasizes interpretability, this claim is not fully supported with detailed case studies or qualitative analyses. More concrete examples and validation are needed to ensure that the insights provided are actionable and meaningful. Without such validation, the interpretability aspect, though highlighted as a strength, remains somewhat abstract and less convincing.\n2. **Computational Efficiency**: The paper does not extensively address the computational efficiency of the model. Practical applications often involve large-scale datasets, and understanding the model's scalability and resource requirements is crucial. Without this information, it is challenging to determine the feasibility of deploying the model in real-world settings, which could limit its practical utility.\n3. **Future Research Directions**:\nThe paper does not clearly outline future research directions or potential extensions of the model. Discussing these aspects would provide a clearer path for advancing the field and addressing current limitations. Identifying open questions and suggesting avenues for further investigation would enhance the paper's contribution and encourage ongoing research in this area." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to W3 to W6." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1. The experiments are conducted with three real-datasets.\nS2. The writing is fluent and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims at including human decision processes and social influence to observe criminal event counts. The proposed model is ambitious to include multiple human decision-making aspects, but the details of formulation and examination are missing. The experimental setup needs further reference to show its practicality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. Overall, the major concerns are that the paper may not be self-contained and appears disconnected. First, although Fig. 1 visualizes the structure of the proposed model, most details explaining each part are not presented. For instance, what are the differences between spatial and position info? Which model does each expert use? 
Second, although the abstract and introduction state that social norms, environmental cues, and various other factors are considered, there is no corresponding formulation in Section 3. Finally, the experimental results do not validate these claims either. It is suggested to connect the claims with detailed descriptions in the methods and experiment sections.\n\nW2. Please include up-to-date related works in top journals [1][2][3]. Moreover, half of the comparative baselines in the experiment section were published more than 10 years ago, which may be too outdated for fair comparisons. It is suggested to compare with newer methods instead.\n[1] Weichao Liang, Zhiang Wu, Zhe Li, Yong Ge: CrimeTensor: Fine-Scale Crime Prediction via Tensor Learning with Spatiotemporal Consistency. ACM Trans. Intell. Syst. Technol. 13(2): 33:1-33:24 (2022)\n[2] Shuai Zhao, Ruiqiang Liu, Bo Cheng, Daxing Zhao: Classification-Labeled Continuousization and Multi-Domain Spatio-Temporal Fusion for Fine-Grained Urban Crime Prediction. IEEE Trans. Knowl. Data Eng. 35(7): 6725-6738 (2023)\n[3] Weichao Liang, Jie Cao, Lei Chen, Youquan Wang, Jia Wu, Amin Beheshti, Jiangnan Tang: Crime Prediction With Missing Data Via Spatiotemporal Regularized Tensor Decomposition. IEEE Trans. Big Data 9(5): 1392-1407 (2023)\n\nW3. The definitions of matrices A and B on line 227, page 5, and the purpose of formulating them are unclear. Specifically, what do the two matrices embed, respectively? Additionally, right before introducing these matrices, the model already includes positional, spatial, temporal, and feature embeddings. An alternative approach might be to directly feed these four embeddings to the experts, rather than combining them with the two matrices to avoid additional computational overhead. This raises questions about the necessity, purpose, and benefit of the intermediate matrix decomposition-based embedding method compared to a straightforward alternative.\n\nW4. 
Please clarify the “ranking” concept in the gating function, starting from line 251 on page 5. Equations 7, 8, and the loss function at line 274 resemble a cross-entropy formulation, which is a classification-based metric rather than a ranking one. Additionally, I am uncertain whether ranking is appropriate in this scenario. Specifically, while predicting the time and place of a crime, a top-1 ranking for occurrence may not directly indicate that a crime is happening, as the probability could still be low. Therefore, relying on ranking rather than probability prediction may lead to false alarms and overreactions.\n\nW5. The practicality of the experimental setup is questionable. In the New York Crime and Chicago Crime datasets, each city is divided into 100 areas, and daytime is segmented into 4 time slots. However, it is unclear how large each area is after division. Is there evidence or a reference supporting that the 100-block granularity is beneficial for real-world law enforcement? Similarly, dividing daytime into four 6-hour slots may not be sufficiently granular. Is there a reference justifying this setup? Furthermore, it would be interesting to see the model’s performance at finer granularities, with smaller areas and shorter time slots.\n\nW6. The experimental results may not fully examine the authors' claims. While modeling the “human decision process” is a key focus, it is unclear how this is tested in the experiments. Are there specific sequential criminal events in the datasets? If so, does the proposed method successfully retrieve these sequences? How does the model demonstrate that its improvements are due to modeling the human decision process? Otherwise, if each crime is independent, how are the datasets suitable for examining causal relationships? In this context, could simple statistics identify criminal hotspots at specific time slots to yield similar results to those in Fig. 2? 
It is recommended to elaborate further on human decision modeling in the experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024preferencedriven,\ntitle={Preference-Driven Spatial-Temporal Counting Process Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0a7TRHhhcS},\nnote={under review}\n}" }, "abstract": { "value": "Traditional spatial-temporal models often overlook the complex decision-making processes and social factors that shape spatial-temporal event data generated by humans. This paper introduces a novel framework that integrates choice theory with social intelligence to model and analyze counting processes, such as crime occurrences or bike-sharing activity, where the observed discrete events result from individual decisions influenced by social dynamics. \nOur approach aims to uncover latent human preference patterns, represented by utility functions, to capture the diverse decision-making factors within a population that result in the observed event counts. These latent factors help explain how choices—such as where and when to commit a crime—are shaped by personal preferences, environmental conditions, and social influences. By modeling the aggregate outcomes of these individual choices, we can better understand and predict patterns in counting processes. The proposed model adopts a preference-driven approach to counting data, providing interpretable insights at a detailed level. It also enables in-depth analysis of how external interventions, like law enforcement actions or policy changes, influence individual decisions and how these effects spread through the system. Empirical evaluation of crime and bike-sharing datasets demonstrates our model's ability to offer clear insights and achieve high predictive accuracy." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "choice model", "spatial-temporal counting process model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0eebdce162a74af0ed30ff858758c2f72e00851c.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Preference-Driven Spatial-Temporal Counting Process Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0aTIvSJ83I
Agnostic Sharpness-Aware Minimization
main
Active
sharpness-aware;agnostic model;optimizer;MAML;SAM
optimization
3;3;3;3
5;5;4;3
2;2;2;2
1;2;1;2
2;2;2;2
3
4.25
2
1.5
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "besides the above, I am curious why the perturbation radii are set the way they are, ie. inner rho twice the outer rho? Was it grid -searched? is there some intuition behind this setting?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The motivation to combine the element of sharpness-minimization in meta-learning for better generalization makes sense. This is the operationalized well in the form of an algorithm that is shown to perform slightly better with the baselines. \n\n- The method seems to be extensively tested in supervised learning setups, meta-learning scenarios, as well as those with label noise." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work aims to combine Sharpness-Aware Minimization with Model-Agnostic Meta-Learning, by having worst-case robustified versions of the loss in both the inner and outer loop of meta-learning. This is then tested in the usual supervised learning setups in vision and some meta-learning benchmarks, where the method is shown to outperform the baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Difference wrt Abbas et al. 
2022: When compared with this prior work, it is unclear what the novelty is here. The authors mention this paper, but don't bother to explain the similarities or differences. The method here looks **eerily similar** to the two-year-old prior work, which is arguably better written and presented and a lot richer. Except for little bits of analysis on congruence between gradients, I can't spot much of a methodological difference. \n\n- Supervised learning experiments: In departure from their motivation, the authors start presenting results on supervised learning. I understand that this can be simulated in the meta-learning setup as well, but it comes across as confusing. Then their method involves 4 gradient computations per step, while SGD and SAM involve 1 and 2 gradient computations respectively. So for a fairer comparison, the authors should have reported results letting the baselines have more compute. Thus, given the excessive runtime, the method does not seem worth the effort of obtaining marginal gains. \n\n- Ablation studies on the relevance of SAM in the inner/outer stages of meta-learning would have been insightful: Which part benefits more from SAM? Can the authors run an ablation study?\n\n- Momentum hyperparameter and its ablation: Table 8 would suggest that having no momentum results in better performance, but it is bizarre that the authors continue to use momentum in all their results, despite that. Especially when the improvements they report are not infrequently of a similar range." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Can authors state exactly what contributions to claim from Theorems 1 and 2?\n- (on the first Imagenet experiments) the top-1/top-5 accuracies seem quite low, why is that the case? can the authors also provide Resnet50 results? how many runs are these results? can authors provide standard errors?\n- It is unclear the exact difference between two versions of Agnostic-SAM (Table 2 and Table 5); if it means using different base SAM (i.e., SAM or ASAM), it appears that Agnostic-SAM (with ASAM) often underperforms Agnostic-SAM (with SAM), why is it the case?\n- How did author come up with the rules to set perturbation bounds originally?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- SAM and MAML are both found to be effective for enhancing generalization performance, and that the paper is attempting to explore the intersection of these is encouraging.\n- The paper follows a standard procedure to evaluate the proposed method (Agnostic-SAM) and shows its effectiveness in experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper combines MAML into SAM, proposing a new optimization scheme to improve generalization performance. The paper provides a theoretical propositions on generalization bound and gradients alignments, but it is regarded that the paper mainly focuses on verifying its generalization effectiveness numerically measured on some deep learning tasks. The paper also provides additional ablation results to support gradient alignments and momentum." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are several concerns on this paper summarized as follows.\n\nMethod\n- The main idea and motivation of this work, as its current form, remain quite random. They are two of many potential ways to improve generalization performance, but without clearly justifying why these two, the paper simply combine the two approaches and end up providing experimental results. This diminishes the technical contributions and novelty.\n- The authors also claim that it is a \"framework\", but with it being the simple combination of SAM and MAML, it has not been rigorously evaluated to be a framework as to whether this can serve as a general scheme so it remains as an initial idea. There have been many advancements since the original SAM and MAML, but the paper only takes a proof-of-concept approach, limiting its potential.\n- This idea requires additional computations (validation set, additional forward-backward, hyperparameter tuning) but it is unclear whether this is worth, in particular, compared to other potential ways to improve generalization.\n\nExperiments\n- The experiments are also a bit bland without being tailored to specifically analyze any aspect of SAM and MAML simply evaluating the final performances, lacking novelty and interesting insights.\n- The proposed scheme is only compared to naive baselines, and it is seen that the improvements are very marginal across many experiments. It is a bit critical in the sense that Agnostic-SAM makes use of the additional validation set and more computations to get validation gradients, which leaves a question that whether Agnostic-SAM is really the best possible choice for generalization." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the concerns raised in the weaknesses section above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper provides a comprehensive evaluation of Agnostic-SAM across a wide range of tasks, including image classification, transfer learning, training with label noise, and meta-learning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Agnostic-SAM, an optimization method that integrates insights from MAML into SAM. The approach seeks to update the model to a region that not only minimizes sharpness on the training set but also implicitly ensures strong performance on the validation set." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe motivation for the problem formulation in Equation 3 is not convincingly justified. It would benefit from a clearer explanation of why this specific formulation was chosen and how it directly leads to generalization.\n2.\tThe paper does not sufficiently clarify how the integration of MAML’s insights with the proposed problem formulation and algorithm specifically aids generalization. 
A deeper theoretical or empirical justification is needed.\n3.\tThe proposed algorithm assumes the existence of a held-out validation set. However, in practice, the training set is used as the validation set, which diverges from the theoretical framework. This discrepancy is particularly problematic in datasets like CIFAR-10 and CIFAR-100, where the training loss converges to zero, a behavior not typically observed with a true validation set.\n4.\tFrom Algorithm 1, it appears that Agnostic-SAM requires double the computational time compared to SAM. In the experiments, SAM is compared by allowing SGD to run for double the iterations for fair comparison [1]. It would be fairer to allow SAM and ASAM to run for twice the iterations of Agnostic-SAM in the experiment.\n\n[1] Foret, P., Kleiner, A., Mobahi, H., and Neyshabur, B. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, 2021" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "According to Section 3.3 the goal is to align the perturbed gradients of the validation and train batches. Why do the authors then report the alignment of the unperturbed gradient of the train batch with the perturbed gradient of the validation batch in Section 5.1?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed method requires an additional hyperparameter, but the authors found a way of setting it consistently throughout their experiments: $ \\rho_{1} = 2 \\rho_{2} $.\n\n- Agnostic-SAM improves over baselines in most cases (even though I have doubts about the setups, see below)\n\n- Combining ideas from MAML and SAM is a creative approach" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose Agnostic-SAM, a new variant of the Sharpness-Aware Minimization (SAM) algorithm. Instead of only the batchwise gradient ascent perturbation step of SAM, Agnostic-SAM additionally performs a descent step on a validation batch, before computing the gradient for the final update. The authors motivate their work from a PAC-Bayes bound and report experimental results on image classification tasks (vanilla classification, noisy labels, meta-learning)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Comparison to Baselines**\n\nAt its core, Agnostic-SAM changes the perturbation step of SAM by adding an additional perturbation based on gradients from a separate, smaller data batch, and the authors claim improved generalization performance. However, several methods have proposed adjustments to SAM’s perturbation model with improved generalization performance. Most similar to Agnostic-SAM, [1] adds random perturbations to the gradient-based perturbation, while [2] and [3] perform multi-step perturbations. Many more methods exist ([6,7,8,...]), but none of those appear as baselines in the experiments. The only other standard baselines are in Table 2 (ASAM), and in Table 5 (ASAM and FSAM). 
In the MAML experiment (Tables 6 and 7) the authors report improved performance over [9]. However, they only show the Sharp-MAML-low version from [9], even though other versions exist. In particular, the Sharp-MAML-both variant from [9] outperforms Agnostic-SAM in three out of the four reported cases. It is unclear why this is not reported or explained. \n\n\n**Training time**\n\nThe proposed method requires two additional forward-backward passes on the validation batch, which leads to increased computational cost compared to SAM (roughly 27\\% wall clock time according to Table 9). While the authors briefly mention this in the conclusion, a more thorough discussion and evaluation is needed, as this affects the fairness of comparisons in the main paper.\n\n\n**Hyperparameters and train settings**\n\nThe authors report some baseline values from the original papers, and others are reproduced with the $\\rho$ values from those papers. For instance, for WRN28-10 on CIFAR100 the SGD number is taken from the SAM paper [5], and the SAM number is reproduced with the same $\\rho$ value, but is lower than the number from the SAM paper (83.0\\% vs 83.5\\% in the SAM paper, which would outperform the reported Agnostic-SAM number). Similar observations hold for ASAM, and CIFAR100. Lower reproduced numbers can be due to different training settings and are not a problem per se if the comparison is fair, but here I have certain doubts because the optimal $\\rho$ value can be sensitive to the training settings and was taken from the reference papers. Further, some choices, like e.g. $\\rho=0.05$ for SAM in ImageNet transfer learning while $\\rho=0.1$ for ImageNet training from scratch look just arbitrary. Some $\\rho$ tuning must have taken place, since the authors even claim that _accuracies tend to decrease when reducing $\\rho$_. 
Further, it is unclear how exactly the authors came up with the choice $\\rho_{1}=2\\rho_{2}$ and if it is based on the ablation in A2, purely by intuition or additional experiments and tuning. Finally, the scope of the experiments is somewhat limited. In particular, there are no experiments with VisionTransfomers, no experiments on text data, and the only larger-scale experiments (ImageNet) are with fairly weak models (at most ResNet-34 for training from scratch).\n\n\n**Theorem 1**\n\nThe authors present Theorem 1 as a central motivation for their method. However, this theorem is nearly identical to Theorem 1 and its proof in the original SAM paper [5], with minimal modification. As with [5], this theorem would theoretically motivate a version of SAM based on average-case rather than worst-case perturbations. The presented generalization bound implies an average-case sharpness bound, which is only subsequently upper-bounded by a worst-case sharpness bound. This limitation was already present in [5] and has since been highlighted, for example, in [4]. Furthermore, the conclusions from this theorem, i.e. why exactly it would motivate equation (3) and the final algorithm, are not understandable to me. \n\n\n**Clarity**\n\nApart from the disconnect between Theorem 1 and the method, it is not well justified why exactly the alignment of the gradients of the perturbed points from train and validation batches would be beneficial for generalization, especially since in the experiments both batches are from the train set. Overall, the MAML perspective is unclear to me, since in the practical algorithm, train and validation batches are both sampled from the train set, and there is only one task to solve in almost all experiments. Additional confusion arises from unclear terminology (e.g. 
the notation $\\theta^{*}(\\theta)$ wasn’t introduced, the Taylor expansion in (7) is presented as an exact equality, etc.)\n\n\n\n[1] Yong Liu, Siqi Mai, Minhao Cheng, Xiangning Chen, Cho-Jui Hsieh, & Yang You (2022). Random Sharpness-Aware Minimization. In Advances in Neural Information Processing Systems.\n\n[2] Kim, H., Park, J., Choi, Y., Lee, W., and Lee, J. Exploring the effect of multi-step ascent in sharpness-aware minimization\n\n[3] Goncalo Mordido, Pranshu Malviya, Aristide Baratin, & Sarath Chandar (2024). Lookbehind-SAM: k steps back, 1 step forward. In Forty-first International Conference on Machine Learning.\n\n[4] Maksym Andriushchenko and Nicolas Flammarion (2022). Towards Understanding Sharpness-Aware Minimization. ICML 2022\n\n[5] Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In ICLR, 2021\n\n[6] Minyoung Kim, Da Li, Shell X Hu, and Timothy Hospedales. Fisher SAM: Information geometry and sharpness aware minimisation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning\n\n[7] Mi, P.; Shen, L.; Ren, T.; Zhou, Y.; Sun, X.; Ji, R.; and Tao, D. 2022. Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach\n\n[8] Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, & Vincent Tan (2022). Efficient Sharpness-aware Minimization for Improved Training of Neural Networks. In International Conference on Learning Representations.\n\n[9] Momin Abbas, Quan Xiao, Lisha Chen, Pin-Yu Chen, and Tianyi Chen. 
Sharp-maml: Sharpness-aware model-agnostic meta learning" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a novel optimizer Agnostic-SAM that adapts the core idea of SAM by optimizing the model toward wider local minima using training data, while concurrently maintaining low loss values on validation data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024agnostic,\ntitle={Agnostic Sharpness-Aware Minimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0aTIvSJ83I},\nnote={under review}\n}" }, "abstract": { "value": "Sharpness-aware minimization (SAM) has been instrumental in improving deep neural network training by minimizing both the training loss and the sharpness of the loss landscape, leading the model into flatter minima that are associated with better generalization properties. In another aspect, Model-Agnostic Meta-Learning (MAML) is a framework designed to improve the adaptability of models. MAML optimizes a set of meta-models that are specifically tailored for quick adaptation to multiple tasks with minimal fine-tuning steps and can generalize well with limited data. In this work, we explore the connection between SAM and MAML in enhancing model generalization. We introduce Agnostic-SAM, a novel approach that combines the principles of both SAM and MAML. Agnostic-SAM adapts the core idea of SAM by optimizing the model toward wider local minima using training data, while concurrently maintaining low loss values on validation data. By doing so, it seeks flatter minima that are not only robust to small perturbations but also less vulnerable to data distributional shift problems. Our experimental results demonstrate that Agnostic-SAM significantly improves generalization over baselines across a range of datasets and under challenging conditions such as noisy labels or data limitation." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "sharpness-aware", "agnostic model", "optimizer", "MAML", "SAM" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/049b2dbe838e5e43b39454e757744ec0a45b5adf.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Agnostic Sharpness-Aware Minimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0aaaM31hLB
Learning Symmetries through Loss Landscape
main
Active
Unconstrained models;equivariant models;symmetries.
learning on graphs and other geometries & topologies
3;3;3;3
5;4;4;3
1;2;2;3
1;2;2;2
3;3;2;3
3
4
2
1.75
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address my concerns in the weakness part, especially the novelty of the proposed method and the theoretical foundations." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written and presents the core ideas in a clear and accessible manner.\n- Using a \"landscape\" to describe the benefits of unconstrained models is particularly novel and insightful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates unconstrained models for handling data symmetry. The authors demonstrate that by designing a loss function specifically tailored for learning equivariance, unconstrained models can approximate data symmetry by minimizing this equivariant loss. This approach allows the models to efficiently control the level of equivariance while maintaining flexibility." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The study introduces this equivariant loss without providing a strong theoretical foundation for the proposed approach. Also, the proposed method is quite straightforward, and its distinction from data augmentation is unclear. 
It essentially computes the loss on a larger augmented dataset by sampling transformed data. I suspect this method is already well-known within the community, which limits the novelty of the contribution.\n\n- The experimental comparisons are performed on a limited set of classic models rather than state-of-the-art models, raising concerns about the practical applicability of the method to more advanced techniques." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. To address fundamental concerns, I recommend a more comprehensive background and analysis, alongside broader empirical comparisons. Specifically, the study should include a wider range of architectures that go beyond strictly equivariant and non-equivariant models, particularly for the motion capture task, which is inherently non-equivariant.\n\n2. In practice, if a single randomly sampled group element is used per sample in each training step (as mentioned at the end of Section 3.1), this should be explicitly stated in Section 3.2 where the sampling procedure is discussed and the number of samples M is introduced.\n\n3. The paper lacks heuristic, theoretical and/or empirical justification for the choice of one group element per sample per training epoch sufficient. As a result the particular choice of measure is understudied. Moreover, it remains unclear how the equivariance error is measured in Section 6. 
How many samples are used for this computation?\n\n4. It's entirely unclear if the difference in the loss landscape is a result of the augmented loss function or a result of the architectural difference. I would strongly recommend performing comparisons with a fixed architecture.\n\n5. The MD17 dataset includes two regression targets: energy and forces. Please clarify in the text which target is being used (likely force regression) and how the results are generated.\n\n6. Near the end of Section 6.2, the statement \"Best performance is observed at an intermediate level of equivariance...\" is confusing. Since the paper modifies the loss function, not the architecture, this needs further explanation to support the proposed methodology. Otherwise, the conclusion is simply to not utilize strictly equivariant architectures for non-equivariant tasks." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The augmented loss function is generalizable across various architectures.\n\n2. The augmented loss function requires relatively few samples to work effectively, making it computationally efficient."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed methodology can be described as approximate equivariance, but the paper lacks an adequate background and comparative analysis against existing works on approximate equivariance. This raises concerns about both the novelty and the empirical validation of the approach.\n\n1. Novelty: Augmented loss functions enforcing approximate equivariance have been studied (e.g. [1]) including an average measure (e.g. [2]).\n\n2. Empirical Support: The paper does not benchmark against other methods that address approximate equivariance (e.g., [1]), nor does it consider theoretically grounded approaches to symmetry breaking (e.g., [3], [4]) or simpler strategies like combining SE3Transformer with MLPs.\n\n[1] Kim, Hyunsu, Hyungi Lee, Hongseok Yang, and Juho Lee. \"Regularizing towards Soft Equivariance under Mixed Symmetries.\" Proceedings of the 40th International Conference on Machine Learning, ICML'23, 2023, pp. 686, JMLR.org.\n\n[2] K. Lin, B. Huang, L. M. Collins, K. Bradbury and J. M. Malof, \"A simple rotational equivariance loss for generic convolutional segmentation networks: preliminary results,\" IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019.\n\n[3] Wang, Rui, Robin Walters, and Tess Smidt. \"Relaxed Octahedral Group Convolution for Learning Symmetry Breaking in 3D Physical Systems.\" NeurIPS 2023 AI for Science Workshop, 2023, https://openreview.net/forum?id=B8EpSHEp9j.\n\n[4] Lawrence, Hannah, Vasco Portilheiro, Yan Zhang, and Sékou-Oumar Kaba. \"Improving Equivariant Networks with Probabilistic Symmetry Breaking.\" ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling, 2024, https://openreview.net/forum?id=1VlRaXNMWO." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How will your algorithm fare when there are data constraints? Equivariant models are inherently data efficient, but your algorithm does not seem to be.\n- The loss landscape plots depend on the selected directions - so how can we infer from just two random directions that the loss landscape is better for Transformers or GATr? The optimization paths should also matter, and although this is discussed to some extent in the limitations, it would be better if there were more discussion on this.\n\nMost of the other questions I had are listed in the Weaknesses section. I will be happy to improve the score if the authors address the questions and weaknesses with supportive evidence during the discussion phase."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-motivated and clearly written (particularly the sections on background and methods).\n- The limitation section discusses an important limitation of the interplay between optimization paths and loss landscape.\n- The experiments are conducted in different domains and examine several essential aspects of the algorithm, giving more insights into the method and how levels of equivariance can affect downstream task performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper attempts to build equivariance into unconstrained models by posing equivariance as a constrained optimization problem, which can, in turn, also control the level of approximate equivariance in the models. The authors demonstrate results in N-body dynamical systems, motion capture, and molecule dynamics, and they analyze the effect of the level of approximate equivariance on task performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Related work**:\n - Although the paper uses equivariance as a constrained optimization problem and discusses it in the context of unconstrained models, it misses several crucial relevant works. Discussion of these works would help to place the submission in the literature and give a view of how this work differs from and compares to existing works.\n - Learning equivariance from data [1, 2], approximate/soft equivariance [3, 4, 5, 6], equivariance as a constrained optimization problem [7, 8], and equivariance with unconstrained models [9, 10, 11, 12].\n - Can the authors highlight the differences from Sec 3.1 and Sec. 
3.2 of [10]?\n\n- **$\\beta$ and $\\alpha$ as hyperparameters**:\n - The authors suggest that the level or extent of equivariance can be controlled with $\\beta$ and $\\alpha$ - is there a formal way to define this \"level\" of equivariance or is it an intuition tied to the loss itself, i.e., higher $\\frac{\\beta}{\\alpha}$ indicates more equivariant? \n - Next, how would someone know the optimal level of equivariance while using your proposed algorithm - $\\beta$ is not learned, and the results indicate that the optimal $\\beta$ can be identified from the test data results, which is not ideal. Rephrasing this, how do you know how much equivariance is required for the task, and thus what values of $\\alpha$ and $\\beta$ to set?\n\n\n- **Methodology**:\n - How will your algorithm work if group $G$ is unknown?\n - How can your method reasonably approximate equivariance if $G$ is very large and the duration of training is not enough?\n - The highest level of equivariance is when $\\alpha = 0$ and $\\beta=1$. However, this is equivalent to data augmentation, which does not guarantee exact equivariance. Can your algorithm guarantee exact equivariance?\n - While the trends are consistent for both metrics, as reported in the paper, it might be helpful to discuss which metric - Eq. 9 or Eq. 10 is better suited for evaluation. How does Equation 9 work (or make sense) when $f(x)$ is non-scalar?\n - For Motion Capture, if the symmetry constraints are already known, instead of complete SE(3) equivariant baselines, why didn't the authors select appropriate equivariant models that are equivariant to the required SE(3) subgroup or consider symmetry breaking [14, 15]? What $G$ did your algorithm use? If it is the subgroup of SE(3), then it is an unfair comparison.\n\n- **Minor spelling errors**:\n - L156 \"requiring equivariant into\" should be \"requiring equivariance in\"\n - L396 \"it is\" should be \"its\"\n\n\n**References**:\n1. 
Equivariance Discovery by Learned Parameter-Sharing. Yeh et al., AISTATS 2022.\n2. Learning Equivariances and Partial Equivariances from Data. Romero et al., NeurIPS 2022.\n3. Learning Layer-wise Equivariances Automatically using Gradients. Ouderaa et al., 2023.\n4. Residual Pathway Priors for Soft Equivariance Constraints. Finzi et al., NeurIPS 2021.\n5. Almost Equivariance via Lie Algebra Convolutions. McNeela et al., 2024.\n6. Regularizing Towards Soft Equivariance Under Mixed Symmetries. Kim et al., ICML 2023.\n7. Improved Canonicalization for Model Agnostic Equivariance. Panigrahi et al., 2024.\n8. Structuring Representations Using Group Invariants. Shakerinava et al., NeurIPS 2022.\n9. Equivariance with Learned Canonicalization Functions. Kaba et al., ICML 2023.\n10. Equivariant adaptation of large pretrained models. Mondal et al., NeurIPS 2023.\n11. Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models. Basu et al., AAAI 2023.\n12. Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance. Kim et al., NeurIPS 2023.\n13. Steerable Equivariant Representation Learning. Bhardwaj et al., 2023\n14. Symmetry breaking and equivariant neural networks. Kaba et al., NeurIPS NeuReps workshop 2023\n15. Improving Equivariant Networks with Probabilistic Symmetry Breaking. Lawrence et al., ICML GRaM workshop 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Relaxing equivariance is a valuable research direction that can break through the constraints on generalization or expressive power caused by strictly equivariant operations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors analyze the application of unconstrained models in equivariant tasks by conducting a comprehensive analysis of unconstrained models, comparing their performance and computational efficiency against equivariant models. Besides, the authors introduce a novel, simple loss function that enables these models to approximate symmetries, which can be optimized during training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The primary concern is the authors' motivation. The idea of using group transformations for data augmentation is naive, but for many equivariant tasks, it is challenging to obtain a general model through data sampling due to the bias introduced by limited sampling. For instance, for point clouds or molecules, sampling across all angles would expand the dataset by hundreds of times and still struggle to enable the model to effectively learn fine-grained rotation equivariance. I suggest the authors validate their approach on common 3D datasets such as QM9 or ModelNet40.\n\n2. The authors base their introduction in the first three sections on general equivariance, yet the impact of different equivariance groups on algorithms varies.
For example, permutation equivariance and translation equivariance can be directly covered by simple operations, making the paper's method inapplicable. The authors should specify which equivariant tasks their method focuses on.\n\n3. Relaxing equivariance is a widely discussed topic, and the authors lack relevant citations and analysis [1] [2] [3] [4]. Moreover, the main advantage of unconstrained models is their ability to learn more complex features. It is worth noting that strictly equivariant operations can limit the expressive power of GNNs [5] [6], but unconstrained models may surpass these limitations. Additionally, some tasks are not strictly equivariant, allowing unconstrained models to be applicable. The authors' emphasis on the lower computational complexity of unconstrained operations is incorrect. In the 3D domain, models like torchmd are strictly equivariant yet have low complexity.\n\n [1] Residual pathway priors for soft equivariance constraints, Finzi, et al.\n\n [2] Approximately equivariant networks for imperfectly symmetric dynamics. Wang, et al.\n\n [3] Relaxing equivariance constraints with non-stationary continuous filters. van der Ouderaa, et al.\n\n [4] Learning Partial Equivariances from Data. Romero, et al.\n\n [5] On the Universality of Rotation Equivariant Point Cloud Networks. Nadav Dym, Haggai Maron.\n\n [6] On the Expressive Power of Geometric Graph Neural Networks. Chaitanya K. Joshi, Cristian Bodnar, Simon V. Mathis, Taco Cohen, Pietro Liò.\n\n4. I do not understand how the loss surface in Figure 1 was created and why it demonstrates the advantages of unconstrained models.\n\n5. There are numerous issues with the paper's presentation:\n\n (a) The equations in lines 218, 224, and 227 lack numbering.\n\n (b) In line 215, the definition of G is finite, which is problematic for integrals where the group size can be infinite.
Most groups mentioned in the paper are infinite, and I do not understand why the authors restrict groups to being finite in their initial definition.\n\n (c) All the references have formatting issues because none of them specify the source of the papers. For instance, \"Equivariant Graph Hierarchy-Based Neural Networks\" in your paper was accepted at NeurIPS 2022, not arXiv.\n\n (d) Appendix B is incomplete; several titles are clustered together without any explanatory text." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Symmetries through Loss Landscape},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0aaaM31hLB},\nnote={under review}\n}" }, "abstract": { "value": "Incorporating equivariance as an inductive bias into deep learning architectures, to take advantage of data symmetry, has been successful in multiple applications such as chemistry and dynamical systems. Building equivariant architectures, particularly w.r.t. roto-translations, is crucial for effectively modeling geometric graphs and molecules, where the understanding of 3D structures enhances generalization. However, despite their potential, equivariant models often pose challenges due to their high computational complexity. In this paper, we study the capabilities of unconstrained models (which do not build equivariance into the architecture) and how they generalize compared to equivariant models. We show that unconstrained models can learn approximate symmetries by minimizing an additional simple equivariance loss. By formulating equivariance as a new learning objective, we can control the level of approximate equivariance in the model. Our method achieves competitive performance compared to equivariant baselines while being 10x faster at inference and 2.5x faster at training."
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Unconstrained models", "equivariant models", "symmetries." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d4b77b8f4bd303b6beaa3c81a10b3e38e7e05e45.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Learning Symmetries through Loss Landscape" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0bcRCD7YUx
VALL-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text to Speech Synthesizers
main
Active
Zero-shot Text to Speech Synthesis;Speech Generation;Voice Cloning;Language Modeling;In-Context Learning
applications to computer vision, audio, language, and other modalities
3;3;6;8
4;5;4;5
3;2;3;4
2;2;3;4
3;2;3;4
5
4.5
3
2.75
3
0.235702
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- **Why was Vocos selected?** Given that other Vocoders with potentially better performance are available, why was Vocos specifically chosen? Additionally, what was the necessity of switching the decoder from Encodec’s original model?\n\n- **Reason for the Significant Improvement in SIM**: It is understandable that Repetition Aware Sampling could lead to an improvement in WER; however, it is less clear how this would directly impact SIM. Furthermore, why does the subjective evaluation show significant improvement despite relatively poor objective metrics?\n\n- **Lack of Evaluation on Difficult Cases**: The introduction references challenging cases, yet no evaluation related to these cases is provided. Why is this evaluation absent?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is written in a clear and accessible manner, enhancing readability and comprehension. Notably, VALL-E 2 demonstrates superior performance over ground truth in subjective evaluations, achieving higher scores in both CMOS and SMOS on both the LibriSpeech test-clean and VCTK datasets." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "VALL-E 2 is an LM-based TTS model based on VALL-E. It proposes two new methods:\n\n1. **Repetition Aware Sampling**: In this method, during the sampling process, the repetition ratio is calculated based on the number of times a token has been generated. If this value exceeds a threshold, tokens are generated randomly from the original distribution.\\\\\n2. **Grouped Code Modeling**: This method reduces sequence length by grouping adjacent tokens into fixed-size groups.\n\nThanks to these contributions, VALL-E 2 achieves significantly higher performance than the baseline VALL-E, particularly yielding better subjective evaluation results than the ground truth on LibriSpeech test-clean and VCTK." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors have made an effort to present this work as a promising study; however, upon closer examination, there are numerous concerns that require substantial improvement.\n\n- **On Subjective Evaluation Results**: \n The results discussed as a strength in Figure 1 are fundamentally flawed due to the dataset disparity with other studies (e.g., NaturalSpeech3 [1] uses the Librilight dataset). This undermines fairness in comparison. To ensure meaningful comparison, the authors should replicate NaturalSpeech3, which currently appears to be a good model, and train on the same dataset.\n\n- **On Grouped Code Modeling**: \n While this approach has some merit as a method to reduce the sequence length given the codec model’s high 75Hz frequency, it is rather naïve and cannot be considered innovative. In fact, similar efforts have already been undertaken in existing research, such as [2], which the authors should have cited at minimum. 
Additionally, the method does not lead to significant improvements in either objective or subjective evaluations, suggesting that further refinements are needed.\n\n- **On Repetition Aware Sampling**: \n Although this method appears to address the traditional issue of repetition in models like VALL-E effectively, it is not particularly innovative. In language modeling (LLM) contexts, penalties for repetition have long been in use [3], making the lack of reference to these approaches surprising. While the authors’ method differs slightly from these established approaches, it would be necessary to compare with them to clarify the method’s effectiveness. Moreover, the existing application of repetition penalties in TTS contexts, as seen in [4], further accentuates this concern.\n\n- **On Ablation Studies**: \n There is a significant lack of ablation studies. The paper includes excessive unnecessary information; for example, the equations related to the model are redundant, and condensing this information would allow the inclusion of ablation studies directly in the main text. The limited experiments in the appendix also lack relevance. Ablations such as the presence or absence of prompts and dataset size variations are not particularly noteworthy, and their results are self-evident. More critical studies, such as comparisons with traditional repetition penalties or ablations involving Vocos (a major change from VALL-E), would have been more appropriate.\n\n- **On Baseline Comparisons**: \n Changing the decoder from VALL-E’s original to Vocos represents a major shift and warrants stronger emphasis in comparative experiments. Additionally, the fact that subjective evaluation is best when the group size is 1 makes it very challenging to establish differentiation from the baseline.\n\n- **Contribution to the Field**: \n The lack of code and weight release significantly diminishes the contribution of this study to the field.\n\n\n[1]: Ju, Zeqian, et al. 
\"Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models.\" ICML 2024.\\\n[2]: Tang, Changli, et al. \"Salmonn: Towards generic hearing abilities for large language models.\" ICLR 2024.\\\n[3]: Keskar, Nitish Shirish, et al. \"Ctrl: A conditional transformer language model for controllable generation.\" arXiv preprint arXiv:1909.05858 (2019).\\\n[4]: Casanova, Edresson, et al. \"XTTS: a Massively Multilingual Zero-Shot Text-to-Speech Model.\" INTERSPEECH 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "[Q1. Comparison with Low-bitrate Codec] Have you compared the grouped Code Modeling with low-bitrate Codec?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "They enhanced the baseline model, VALL-E, the first neural codec language model, by introducing repetition-aware sampling and grouped code modeling. While the baseline models are prone to issues like word repetition or omission, the proposed methods mitigate these problems and further improve model efficiency." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors proposed a neural codec language model for zero-shot text-to-speech, enhancing robustness by refining sampled tokens through repetition-aware sampling. They further improved robustness and efficiency by applying grouped code modeling, effectively reducing sequence length." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\t[Grouped Code Modeling1] From an engineering perspective for neural codec language models, the proposed grouped code modeling could improve model performance and efficiency. However, grouped code modeling is already a well-known technique in language models, as seen in works like [MegaByte], [RQ-Transformer], and [Block Transformer].\n\n[MegaByte] Yu, Lili, et al. \"Megabyte: Predicting million-byte sequences with multiscale transformers.\" Advances in Neural Information Processing Systems 36 (2023): 78808-78823.\n\n[RQ-Transformer] Lee, Doyup, et al. \"Autoregressive image generation using residual quantization.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[Block Transformer] Ho, Namgyu, et al. \"Block Transformer: Global-to-Local Language Modeling for Fast Inference.\" arXiv preprint arXiv:2406.02657 (2024).\n\n2.\t[Grouped Code Modeling2] Additionally, [UniAudio] and [GPST] have already adopted a similar structure to sample RVQ tokens more efficiently. While there may be slight differences in implementation, they have the same goal.\n\n[UniAudio] Yang, Dongchao, et al. \"UniAudio: Towards Universal Audio Generation with Large Language Models.\" Forty-first International Conference on Machine Learning.\n\n[GPST] Zhu, Yongxin, et al.
\"Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer.\" ACL, 2024.\n\n3.\t[Grouped Code Modeling3] Notably, the classic sequence-to-sequence text-to-speech model Tacotron also utilized grouped spectrogram sampling through a reduction factor.\n\n[Tacotron] Wang, Yuxuan, et al. \"Tacotron: Towards end-to-end speech synthesis.\" arXiv preprint arXiv:1703.10135 (2017).\n\n4.\t[Grouped Code Modeling4] The recently proposed model [MELLE] also claims to predict multiple frames per step, accelerating inference and mitigating robustness issues associated with long-sequence modeling while maintaining strong performance. Moreover, MELLE has been shown to outperform VALL-E2. This hurts the contribution of the proposed method.\n\n5.\t[Repetition Aware Sampling] Recently, Flow-matching and MaskGIT-based text-to-speech models have adopted iterative sampling methods similar to repetition-aware sampling. It would be beneficial for the authors to discuss and compare repetition-aware sampling with these iterative methods; specifically, I hope to see a direct comparison with VoiceBox and E2-TTS.\n\n6.\t[Weak Baseline] The authors only compared the model with VALL-E. However, VALL-E underperforms compared to VoiceBox, E2-TTS, DiTTo-TTS, and CosyVoice.\n\nI sincerely acknowledge the novel contribution of VALL-E in opening the door for neural codec language models; however, the novelty of VALL-E2 does not meet the standards expected for ICLR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Why do this paper choose Byte-Pair Encoding (BPE) for text tokenization instead of using phonemes? How many BPE tokens are used in the model? Given that large datasets like LibriHeavy typically require thousands of BPE classes, while phoneme-based tokenization usually involves only a few dozen classes, how do you anticipate this choice impacts the model’s performance?\n\n- Including punctuation marks in modeling units could benefit text-to-speech (TTS) systems. I’m interested to know if the BPE units in this work incorporate punctuation marks. How might this decision impact the model’s performance?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- VALL-E 2 achieving human parity in zero-shot TTS is a promising advancement, marking a new benchmark for text-to-speech systems. Its potential applications are particularly promising in assistive technologies for individuals with speech impairments.\n\n- The introduction of repetition-aware sampling and grouped code modeling is a simple but effective approach that enhances the model's stability and efficiency in generating speech. These methods could be easily adapted to speech-to-speech language models.\n\n- The paper demonstrates strong experimental validation with comprehensive evaluations on datasets including LibriSpeech and VCTK, showing clear improvements in robustness, naturalness, and speaker similarity​.\n\n- The technical explanations and results are clearly presented, making the contributions and performance enhancements easy to understand." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "VALL-E represents a breakthrough in neural codec language modeling for zero-shot text-to-speech synthesis. It can synthesize personalized speech from just a 3-second recording, while preserving the speaker's voice, emotion, and acoustic environment. VALL-E uses an autoregressive transformer to model coarse codec codes (1st group of EnCodec) and a non-autoregressive transformer to generate fine codec codes (2nd-8th groups of EnCodec). However, VALL-E faces two key limitations: 1) Stability: Random sampling during inference can cause instability, while small top-p nucleus sampling risks infinite loops. 2) Efficiency: Its autoregressive architecture is constrained by a fixed high frame rate, slowing inference.\n\nThe paper introduces VALL-E 2, which addresses the aforementioned issues with two innovations: Repetition Aware Sampling, which stabilizes decoding without increasing computational costs, and Grouped Code Modeling, which reduces sequence length and speeds up inference. These improvements make VALL-E 2 more robust, natural, and efficient in zero-shot TTS, achieving human parity for the first time on benchmarks including LibriSpeech and VCTK. VALL-E 2 can stably generate high-quality speech for complex sentences that are hard to read or contain many repeated phrases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While VALL-E 2 delivers remarkable improvements in stability and efficiency, it doesn't introduce the same level of paradigm-shifting innovation as the original VALL-E, which opened a new avenue for zero-shot TTS. VALL-E 2 focuses on refining and optimizing the existing framework, building on established concepts." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "see above" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work is an extension of VALL-E, which aims to solve its two problems: 1. inference repetitions --> degrade performance; 2. too long codec sequence for modeling --> degrade speed.\n\nSpecifically, they propose 1. Repetition Aware Sampling to remove repetitions by accounting for token repetition in the decoding history, thus improving the synthesis quality, and 2. Grouped Code Modeling to re-organize the codec sequence into groups to shorten the length, thus improving the modeling efficiency.\n\nFrom my side, it is a good extension of the VALL-E series and solves its practical issues. But from a research perspective, this work does not convey much novelty or insight, especially given the high requirements of the ICLR conference."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "see above" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024valle,\ntitle={{VALL}-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text to Speech Synthesizers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0bcRCD7YUx},\nnote={under review}\n}" }, "abstract": { "value": "This paper introduces VALL-E 2, the latest advancement in neural codec language models that marks a milestone in zero-shot text-to-speech synthesis (TTS), achieving human parity for the first time. Based on its predecessor, VALL-E, this work introduces two significant enhancements: Repetition Aware Sampling refines the original nucleus sampling process by accounting for token repetition in the decoding history. It not only stabilizes the decoding but also circumvents the infinite loop issue. Grouped Code Modeling organizes codec codes into groups to effectively shorten the sequence length, which not only boosts inference speed but also addresses the challenges of long sequence modeling. Our experiments on the LibriSpeech and VCTK datasets show that VALL-E 2 surpasses previous systems in speech robustness, naturalness, and speaker similarity. It is the first of its kind to reach human parity on these benchmarks. Moreover, VALL-E 2 consistently synthesizes high-quality speech, even for sentences that are traditionally challenging due to their complexity or repetitive phrases. The advantages of this work could contribute to valuable endeavors, such as generating speech for individuals with aphasia or people with amyotrophic lateral sclerosis. See https://anonymous/valle2 for demos of VALL-E 2." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Zero-shot Text to Speech Synthesis", "Speech Generation", "Voice Cloning", "Language Modeling", "In-Context Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/579fe66d05f6a3deb73edd5cbcfb8b8a4acf66e6.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/5c1e23d71a23cebe5d23d6627c3332837403d792.zip" }, "title": { "value": "VALL-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text to Speech Synthesizers" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0bcUyy2vdY
Multi-play Multi-armed Bandit Model with Scarce Sharable Arm Capacities
main
Active
Multi-play multi-armed bandit;scarce sharable arm capacity;regret bounds
reinforcement learning
3;5;6;8
4;3;3;4
3;3;3;4
3;2;3;3
2;3;1;3
5.5
3.5
3.25
2.75
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Should we assume that $\\mu_k\\ge c$ for all $k$? The authors state that the optimal action is always $(m_1,\\dots, m_K)$ in line 211. It seems that this only holds when $\\mu_k\\ge c$ for all $k$.\n2. What is \\mu in Theorem 1? I did not find the definition.\n3. I am curious about how the sample complexity in Theorem 2 gets rid of the dependence on N. Intuitively, even if there is no noise (sigma = 0), for any algorithm, it still needs at least $\\log N$ rounds to find the true $m_k$ by binary search. Is the dependence on $N$ hidden in $\\xi$?\n\n\nTypos:\nline 224: $a_{k,t}$ instead of $a_k$\nline 289: for large probability -> with high probability\nline 383: to played with -> to play with" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The work closes the sample complexity gap and narrows the regret gap for the MP-MAB problem. Although the techniques used in the proof are not particularly unique (mostly based on regular UCB and LCB), the conclusions are still very interesting and make sense.\n2. The work proposes numerical simulations to show the advantages of their algorithms."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper revisits the multi-play multi-armed bandit with shareable arm capacities problem. Improving on the previous work of Wang et al. (2022a), the paper proposes refined lower and upper bounds for both sample complexity and regret. For sample complexity, the authors propose a minmax lower bound, and give an algorithm that matches the bound. For regret, the authors provide both instance-dependent and instance-independent regret lower bounds, and find algorithms that match the bounds up to some model-dependent factors." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing is a bit poor. The paper contains many colloquial expressions, e.g., line 383 \"But if\", lines 390, 403, 405 \"And furthermore\" \"And this\". \n2. The author states in the introduction that the algorithm has applications to LLM inference serving. I believe it’s necessary to provide some LLM-related experiments to support this statement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\t$m_k$ is deterministic and well known beforehand as to how many pulls can be made in a round. However, there is a constant movement cost $c$ associated with an arm.
In the case of an LLM query, the number of pulls corresponds to the number of queries a server instance can handle.\n\nAre $m_k$ and the moving cost $c$ dependent in this scenario? If so, how does this implication sit with all the theoretical proofs, or do they have to be independent? A clarification on this would help readers utilize the developed algorithms in the many scenarios where such dependencies are crucial. \n\n2.\tWhy is the ordering of plays in the arm selection $a_t$ important? Providing some details on it would avoid ambiguity around whether its objective is to maximize resource utilization or the maximum capacity of the arm.\n\n3.\tAlso, with respect to the movement cost, in the experimental setting it has been assigned an arbitrary value of 0.1. Is there any fundamental reason for that? Also, how can it be evaluated in a practical scenario when it is also coupled with the reward formulation? Adding some details around this can greatly improve the clarity of the work. \n\n4.\tIt would be nice to see how the experiments scale up with varying parameters, like changing $m_k$ and the movement cost. This will help us understand the empirical performance of the algorithm much better.\n\nReference:\n [A] Xuchuang Wang, Hong Xie, and John C. S. Lui. Multiple-play stochastic bandits with shareable finite-capacity arms. International Conference on Machine Learning, ICML 2022." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "•\tThe multi-play multi-armed bandit problem is an interesting setting to study and improve the foundations of, as it pertains to many real-world settings including LLM inference serving.
The work re-establishes that with an emphasis on theoretical guarantees.\n\n•\tThe work provides theoretical improvements in sample complexity compared to the existing work on MP-MAB-SAC. It tends to close the sample complexity gap found in the previous work in Reference A.\n\n•\tThe authors also provide a new improved algorithm, PC-CapUL, that performs much better than other existing algorithms and has solid theoretical backing with proven regret bound guarantees.\n\n•\tThe experiments cover the regimes where the number of arms is larger, which predominantly requires more exploration to take place. The developed algorithm provides much better performance in terms of regret compared to other existing algorithms in this experimental setting.\n\nReference:\n [A] Xuchuang Wang, Hong Xie, and John C. S. Lui. Multiple-play stochastic bandits with shareable finite-capacity arms. International Conference on Machine Learning, ICML 2022." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the multi-play multi-armed bandit problem with shared arm capacities, where in each round the learner gets to select each arm for a number of pulls capped by its capacity limit, with the goal of maximizing the total reward at the end of the play. The authors propose a new reward function and develop a new algorithm, PC-CapUL, for this problem setting. The developed algorithm provides tighter bounds on sample complexity and regret in comparison to existing works and efficiently balances exploration and exploitation. The work is applicable to resource allocation problems with capacity constraints, such as LLM inference and many other real-world scenarios."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "•\tThe experimental design could have been much better, with the inclusion of better baseline comparisons in addition to the algorithm found in Reference A. Also, utilizing a real-world dataset for evaluation would have further complemented these theoretical results.\n\n•\tThe readability of the paper could be much improved. Also, a brief intuitive explanation, like a proof sketch, could be added in the main text to help the reader get the intuitive logic and understanding of the proof techniques. \n\n•\tA more detailed theoretical comparative analysis, like how the regret fares against the regret of other algorithms, would make the argument for the developed PC-CapUL algorithm much stronger. Moreover, having such a discussion would also help us uncover insights like how the regret bound behaves in different regimes.\n\nReference:\n [A] Xuchuang Wang, Hong Xie, and John C. S. Lui. Multiple-play stochastic bandits with shareable finite-capacity arms. International Conference on Machine Learning, ICML 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See \"Weakness\" Section for questions.\n\n- I wonder whether the dependency on the cost parameter $c$ can be improved for the regret lower bound."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper first considers this problem with scarce shareable arm capacities and proposes both lower and upper bound for both sample complexity and the regret bound.\n- Based on the parts that I checked, the proofs look correct to me.\n- The experiments are also conducted to show superior performance compared to the previous work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers the problem of multi-play multi-armed bandits with scarce shareable arm capacities. Specifically, different from [Wang et al., 2022a], this paper considers the problem where $N\\geq \\sum_k m_k$ where $m_k$ is the capacity of action $k$. With a modification on the reward function, this paper proposes new sample complexity lower/upper bound that is tight as well as regret lower/upper bound for this problem. Specifically, the author claims that the sample complexity lower bound proven in this paper improves upon the one shown in [Wang et al., 2022a]. Empirical results are also shown to strengthen their theoretical findings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- One main concern is the motivation of this paper to consider the case where $N\\geq \\sum m_k$. In this case, the problem seems to be easier (in the sense of algorithm design) since you will definitely explore each action sufficiently enough to figure out the exact $m_k$ while in the opposite case $N< \\sum m_k$, the problem seems to be harder since you need to decide the exploration amount for the suboptimal $k$. 
Can the authors explicitly justify the choice of studying the $N\geq \sum m_k$ case and why it is challenging compared to the previous case?\n- This also leads to the question about the comparison between the lower/upper bounds shown in this paper and [Wang et al., 2022a]. While the authors claim a better lower bound, I wonder whether the upper/lower bounds are comparable in these two cases? Can the algorithm that is derived in this setting be adapted to the other? Moreover, I am not sure why equation (5) is more reasonable since it makes sense to me to have the noise's variance larger when $m_k$ or $a_k$ is large.\n- As for the upper bound, the bounds in Theorem 5 seem to be suboptimal since they seem to be dependent on $\\frac{\\max_i \\mu_i}{\\min_i \\mu_i}$, which can be large.\n- I do not understand the lower bound argument shown in Theorem 4. When the cost $c=0$, then this ratio becomes 0, which is surely not informative. In addition, why is the ratio independent of $m_k$? Can the authors explain more on this?\n- Typos:\n - Line 223: it -> if\n - Line 224: a_k -> a_{t,k}?\n - Line 471: depended -> dependent \n - Line 751: missing lemma reference.\n - missing periods at the end of many theorem statements (e.g. Theorem 4,5,6..)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please try to solve the problems in weaknesses.
In addition, since the improvement in results achieved in this paper mainly comes from the careful selection of UCB, I would like to know what kind of inspiration it will bring to future work. This is not necessary, as the theoretical improvement itself is interesting." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The theoretical contributions are nontrivial. This paper shows tighter lower bounds, and then proposes new algorithms to match them. Furthermore, the experiments verified the theories." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper discusses the problem of the Multi-play Multi-armed Bandit Model with Shareable Arm Capacities (MP-MAB-SAC). It tightens the lower bounds for both sample complexity and the cumulative regret compared to the previous work. Besides, this paper proposes corresponding algorithms to match the lower bounds. Finally, the numerical experiments show that the proposed algorithms outperform the existing ones." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have the following concerns: \n\n1. The writing quality of this paper falls below the standards required for publication in ICLR. Issues such as clarity, rigor, and basic grammatical correctness are prevalent. It appears that the authors did not thoroughly review the paper before submission. From a writing perspective, the paper remains in draft form: numerous typos, confusing notations, and grammatical errors hinder readability. For example, (1) in Lemma 2, $\\epsilon^{uE}$ should be $\\epsilon^{UE}$; (2) in the proof of Lemma 2, “Bourel et al., 2020” is not even cited; (3) in the proof of Theorem 1, which lemma is used here?
Besides, this theorem should be proved more formally; (4) What is the first baseline “MP-MAB-SA” in the experiments?\n\n2. The explanations provided in the paper are insufficient. (1) In Section 1, more concrete examples of the model's practical applications are needed. (2) The claim that certain changes in settings make the model more suitable for LLMs requires stronger evidence. For instance, the movement cost $c$ (which is known to the learner) seems irrelevant. (3) The paper should provide a more in-depth analysis of the experimental results, going beyond mere statements of fact.\n\n3. The comparison with the previous work seems unfair. (1) Since $N \\ge M$ means the learner only needs to learn the capacity $m_k$, without needing to learn the rank of the arms, the learning task seems easier. (2) In lines 307~310, is there any evidence to show stability is getting better? Besides, I’m kind of confused about this result because there is usually a trade-off between robustness and regret, which means an increase in stability may (though not always) lead to a decrease in performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multiplay,\ntitle={Multi-play Multi-armed Bandit Model with Scarce Sharable Arm Capacities},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0bcUyy2vdY},\nnote={under review}\n}" }, "abstract": { "value": "This paper revisits multi-play multi-armed bandit with shareable arm capacities problem (MP-MAB-SAC), for the purpose of \nrevealing fundamental insights on the statistical limits and data efficient learning. The MP-MAB-SAC is tailored for resource allocation problems arising from LLM inference serving, edge intelligence, etc.
It consists of $K$ arms and each arm $k$ is associated with an unknown but deterministic capacity $m_k$ and per-unit capacity reward with mean $\\mu_k$ and $\\sigma$ sub-Gaussian noise. The aggregate reward mean of an arm scales linearly with the number of plays assigned to it until the number of plays hits the capacity limit $m_k$, and then the aggregate reward mean is fixed to $m_k \\mu_k$. At each round only the aggregate reward is revealed to the learner. \nOur contributions are threefold. 1) \\textit{Sample complexity:} we prove a minmax lower bound for the sample complexity of learning the arm capacity $\\Omega(\\frac{\\sigma^2}{\\mu^2_k} \\log \\delta^{-1})$, and propose an algorithm to exactly match this lower bound. \nThis result closes the sample complexity gap of Wang et al. (2022a), whose lower and upper bounds are $\\Omega(\\log \\delta^{-1})$ and $O (\\frac{m^2_k \\sigma^2}{\\mu^2_k} \\log \\delta^{-1})$ respectively. 2) \\textit{Regret lower bounds:} we prove an instance-independent regret lower bound $\\Omega( \\sigma \\sqrt{TK} )$ and an instance-dependent regret lower bound $\\Omega(\\sum_{k=1}^K\\frac{c\\sigma^2}{\\mu_k^2} \\log T)$. This result provides the first instance-independent regret lower bound and strengthens the instance-dependent regret lower bound of Wang et al. (2022a) $\\Omega(\\sum_{k=1}^K \\log T)$. 3) \\textit{Data efficient exploration:} we propose an algorithm named \\texttt{PC-CapUL}, in which we use prioritized coordination of arm capacities upper/lower confidence bounds (UCB/LCB) to efficiently balance the exploration vs. exploitation trade-off. We prove both instance-dependent and instance-independent upper bounds for \\texttt{PC-CapUL}, which match the lower bounds up to some acceptable model-dependent factors. This result provides the first instance-independent upper bound, and has the same dependency on $m_k$ and $\\mu_k$ as Wang et al.
(2022a) with respect to the instance-dependent upper bound. But there is less information about arm capacity in our aggregate reward setting. Numerical experiments validate the data efficiency of \\texttt{PC-CapUL}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multi-play multi-armed bandit", "scarce sharable arm capacity", "regret bounds" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1d82406c145f097170a10408c7d2e4d3de679e72.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
}, "summary": null, "supplementary_material": null, "title": { "value": "Multi-play Multi-armed Bandit Model with Scarce Sharable Arm Capacities" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0bmGL4q7vJ
Multi-modal Agent Tuning: Building a VLM-Driven Agent for Efficient Tool Usage
main
Active
Multimodal Agents;Vision-language Model;Tool usage
applications to computer vision, audio, language, and other modalities
5;5;8;8
3;4;3;3
2;3;3;3
2;3;3;3
3;3;3;4
6.5
3.25
2.75
2.75
3.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the concerns raised in the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1) The idea of using an LLM to separately generate queries, files, and trajectories, followed by a query-file verifier and trajectory verifier is neat.\n2) The paper addresses the problem of using the appropriate array of tools corresponding to the information relevant to a query well. \n3) The experiments are thorough." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method for multi-modal agent tuning for tool usage and presents a dataset designed to train an agent for tool usage. The authors claim that their T3-Agent achieves significant improvements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Verifying the output of an LLM by the LLM itself does not seem accurate. I am skeptical about the quality of the generated MM-Traj dataset. \n2) You need quantitative verification for the dataset. 
(It is not clear whether a user study involving a few people on 100 data points out of 15K would provide sufficient confidence.)\n3) Experimental results on the GTA benchmark are more promising than those on the GAIA benchmark. However, the overall performance of T3-agent is not superior to that of the other agents in comparison. Specifically, on the GAIA benchmark, the HF agent with GPT-4o performs twice as well as the T3-Agent.\n4) If the T3-Agent’s performance were clearly superior, I would be more optimistic about the generated dataset. However, the results seem to support my doubts about the dataset.\n\nMinor comment: In Tables 2 and 3, the best results should be highlighted in bold." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Q: Could you please elaborate on how not using the final answer A aligns with your goal? Specifically, how does this choice benefit your approach to enhancing tool-usage capability?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Innovation in Multi-Modal Interaction: The approach shows potential in pushing the boundaries of how agents interact with cross-modal data sources. 
By focusing on practical applications of tool usage, this work could offer useful insights into building agents that understand and respond to complex queries across various media.\n2. Comprehensive Methodology: The paper describes a well-structured experimental setup and provides a clear description of the multi-modal tuning process. This includes thoughtful considerations on data processing, model architecture, and task-specific tuning steps, making it easy to follow.\n3. Evaluation Metrics: The authors employ a diverse set of metrics to evaluate agent performance. This choice not only validates the model’s accuracy in the tasks but also emphasizes the practical utility of the proposed framework in real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel approach for multi-modal agent tuning, aimed at enhancing agent performance in complex environments through better utilization of multiple data modalities. The authors propose a tuning framework designed to leverage cross-modal data (e.g., visual and text info) to improve agent task performance, with specific emphasis on tool usage within the agent's capabilities. Their T3-Agent is a multi-modal agent that can efficiently use tools to solve practical tasks by tuning a VLM as the controller. Evaluations over various datasets show significant improvements using their agent with both closed- and open-source models. Additionally, they curated a dataset using multi-modal information with trajectories of various lengths for broader study. In this work, the focus is on correct tool selection, and code is given more importance than the widely used JSON schema. In summary, they generate data, tune the VLM, and create a dataset, then leverage the tool agent to make better use of tools." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.
Interpretability: While the approach demonstrates performance gains, the interpretability of results remains limited. Additional analyses, such as ablation studies or attention maps, would be beneficial to understand how each modality contributes to the decision-making process. For example, the paper first generates queries without files, before generating the relevant files; what is the impact if this ordering is not used, and is it based on some past work/observations?\n\n2. Scalability: The paper does not thoroughly address the scalability of the proposed method, particularly as the number of modalities or the dataset size increases. It would be beneficial to test how the method's performance and computational requirements scale with additional modalities or larger datasets. For example, experiments that measure latency, memory usage, and accuracy as more data is introduced could illustrate the framework's robustness and its viability in resource-constrained or high-throughput environments.\n\n3. User Study: To evaluate the practical usability of the framework, a small user study or qualitative feedback from users would provide valuable insights into the query handling experience. Specifically, gathering feedback on aspects like ease of use, perceived accuracy, responsiveness to complex queries, and the intuitiveness of the tool-usage process could highlight areas for refinement in real-world settings.\n\nMinor hints:\n1. Sec 5.6 typo: \"wen based\" --> \"web based\".\n2. Sec 3.4: the author(s) mention details can be found in ... but missed cross-referencing it.\n3.
cross reference missing at end of sec 3.4" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I lean towards acceptance because the paper introduces a promising approach to enhancing the reasoning capabilities of multi-modal agents. The proposed data synthesis pipeline and the resulting MM-Traj dataset are significant contributions to the field, potentially advancing the state of multi-modal learning. However, the paper lacks a discussion on the T3-Agent's robust programming capabilities and does not explore how these might be improved. I would like the authors to comment on this aspect." 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The data synthesis pipeline introduces a scalable, automated approach to generating diverse, complex multi-modal data for tool usage scenarios.\n- Verification mechanisms embedded within the pipeline enhance data quality, resulting in a robust, comprehensive dataset.\n- With training on the MM-Traj dataset, the T3-Agent demonstrates significant performance gains, surpassing agents built on closed-source models like GPT-4 in certain benchmarks.\n- Ablation studies underscore the critical role of data verification in achieving top performance.\n- The paper includes detailed visualizations of the T3-Agent’s reasoning process." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new approach for improving tool usage in multi-modal agents by fine-tuning a vision-language model (VLM) controller with synthesized tool-usage data. To overcome the limitations of traditional LLM-driven agents—such as reliance on prompt engineering and limited reasoning for tool usage across modalities—the authors create a three-step data synthesis pipeline. First, *Query Generation* uses GPT-4o mini to generate diverse, tool-aware prompts. Next, *File Generation* retrieves images from similar datasets and creates other files programmatically. Finally, *Trajectory Generation* employs a zero-shot agent using GPT-4o mini to solve tasks, capturing thought processes, code, and observations for each step. Quality is controlled through query-file and trajectory verifiers, also based on GPT-4o mini, producing a dataset called MM-Traj. 
The resulting agent, T3-Agent, uses the ReAct framework and the MiniCPM-V model trained on MM-Traj, enabling versatile tool usage across categories like web search, visual perception, image editing, and file handling. Benchmarks on GTA and GAIA demonstrate the T3-Agent’s significant improvements in tool usage over both untrained VLMs and other state-of-the-art LLM-driven agents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The T3-Agent exhibits a gap in programming capabilities, which leads to lower accuracy in predicted answers compared to GPT-4o. \n- While the paper acknowledges the T3-Agent’s limited programming capabilities, it does not suggest potential improvements or outline future directions to strengthen this aspect.\n- The reliance on GPT-4o mini throughout the pipeline raises questions about biases and limitations from this closed-source model. Exploring alternative methods or open-source models could enhance transparency and address these limitations.\n- What is $p_i$ in Equation 2?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Figure 3, the sum of trajectories—214 + 14,273 + 8,740 + 2,520 + 1,242 + 697 + 199 + 202—totals 28,087, which exceeds the stated 20k tasks in the abstract. 
In addition, the paper mentions that only 15k files remain after passing the query-file and trajectory verifiers. What is the final size of the generated dataset?\n2. Why does GPT-4o mini outperform GPT-4o in Table 2, specifically in the row with HF Agents on AnsACC and CodeExec, given that GPT-4o is expected to be more powerful than GPT-4o mini?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed data generation pipeline first generates queries independently of any specific files, followed by producing relevant files to align with these queries. This approach allows the pipeline to create more diverse and expressive tasks, unrestricted by file format or quantity limitations.\n2. This work introduces the novel MM-Traj dataset, containing 20k diverse tool-usage data points, supported by a human study to demonstrate dataset quality. \n3. This work also performs an in-depth statistical analysis of the dataset and shows that MM-Traj offers broad coverage across various data modalities and knowledge requirements, as shown in Figure 3." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a multi-modal tool-usage data generation pipeline designed to finetune vision-language models (VLMs) for tasks requiring tool-usage reasoning. The pipeline consists of three primary steps. First, a large language model (LLM) is prompted to generate query tasks. Next, relevant images or files are retrieved or generated based on the specified query tasks.
Finally, ReAct agents are employed to generate trajectories that address the query task problem, followed by an additional LLM prompt to verify the generated data.\n\nThis study also introduces the MM-Traj dataset generated through the proposed scheme and uses it to finetune the MiniCPM-V model to create the T3-agent. The T3-agent's effectiveness is subsequently assessed on the GTA and GAIA benchmarks, showcasing a 20% improvement in performance over untrained VLMs and achieving results comparable to other baseline agents, such as the Lego Agent, Warm-up Act Agent, and HF Agent." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In Section 3.3, this work states, “for other needed files, we prompt GPT-4o mini to extend the file content and generate Python code to produce the files.” However, the methodology for file generation remains unclear. For example, if a file is required to be an MP4 video of Taylor Swift’s latest album, it’s uncertain how this content could be generated through Python code alone. Furthermore, if GPT-4o mini generates the Python code to produce such files, it raises concerns about data quality and how the model ensures that the generated content is not hallucinated.\n2. While including a human study to assess dataset quality is commendable, having only five experienced raters for a subset of the data may be too limited, potentially introducing biases based on individual preferences. Gathering feedback from a **larger pool of participants**, even with fewer data points per person, could strengthen claims about the dataset's effectiveness. Additionally, comparing MM-Traj to filtered-out data may not yield meaningful insight. Instead, comparisons with other established tool-usage datasets would likely provide more meaningful insights.\n3. 
The evaluation results in Table 3 reveal mixed outcomes, with the T3-agent performing significantly worse than other methods, such as HF Agent and Sibyl Agent, on the GAIA benchmark. What could lead to this performance discrepancy on the GAIA benchmark?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multimodal,\ntitle={Multi-modal Agent Tuning: Building a {VLM}-Driven Agent for Efficient Tool Usage},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0bmGL4q7vJ},\nnote={under review}\n}" }, "abstract": { "value": "The advancement of large language models (LLMs) prompts the development of multi-modal agents, providing a feasible way to solve practical tasks by using tools. In this paper, we propose a multi-modal agent tuning method that automatically generates multi-modal tool-usage data and tunes a vision-language model (VLM) as the controller for powerful tool-usage reasoning. To preserve the data quality, we prompt the GPT-4o model to separately generate queries, files, and trajectories, followed by a query-file verifier and trajectory verifier. Based on the data synthesis pipeline, we collect the MM-Traj dataset with 20k tasks using 10 tools. Then, we build the T3-agent (Trajectory Tuning for Tool usage), which uses MiniCPM-V as the controller and is tuned on MM-Traj. Evaluations on the GTA and GAIA benchmarks show that the T3-agent has achieved remarkable improvements and outperforms GPT-4-driven agents by 10%, showing the effectiveness of the proposed data synthesis pipeline that leads to better reasoning capabilities in tool usage." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Agents", "Vision-language Model", "Tool usage" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bd65e7681d49d1ec78af6ccfa3a9cf75786a47ac.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Multi-modal Agent Tuning: Building a VLM-Driven Agent for Efficient Tool Usage" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0bswm093Yl
GeneBench: Systematic Evaluation of Genomic Foundation Models and Beyond
main
Active
genetic foundation model;benchmark;hybrid model
datasets and benchmarks
3;5;5;6
4;3;4;4
2;3;3;3
2;3;2;3
1;3;3;3
4.75
3.75
2.75
2.5
2.5
-0.132453
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Thank you for your response. To clarify, while I was able to reproduce the results in the current paper using the provided scripts, the result was different from the original DeepSTARR paper. Otherwise, I encountered no additional issues." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Dear friend,\n\nThank you for your feedback. We’re sorry to hear about the difficulties reproducing DeepSTARR’s results. Could you provide more details regarding the specific issues or errors encountered during your experiments? This will help us identify potential discrepancies and support you more effectively. For reference, our experiments were conducted primarily on NVIDIA A40 GPUs, so any information on your setup and configurations might also be helpful in diagnosing the issue." 
}, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Thanks for your feedback" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Fail to reproduce the results of Deepstarr using the provided scripts." 
}, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What’s the rationale behind the collection of tasks used in this study? They seem to be very similar in terms of task type, and it would be more interesting to see more variation in tasks such as zero-shot mutational effect prediction and generative sequence modeling.\n2. The paper is presented as a benchmark suite but the introduction of GenHybrid seems to be the main emphasis throughout the results section. However, the details of this model are missing from the main text of the paper. What’s the main focus of this paper? \n3.
Why not include ab initio models trained on task-specific data and naive benchmarks in this suite? The numbers presented in the paper, without context, can hardly be used as a standardized benchmark for future methods." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Thorough and comprehensive dataset curation for gLM evaluation that covers both short- and long-range tasks.\n2. A wide range of gFMs are benchmarked across various model architectures and parameter sizes. \n3. The introduction of a new hybrid method that leverages both attention-based models and state-space models and outperformed existing models on most of the datasets evaluated. \n4. In-depth analysis of the benchmarking results, providing insights into the current state of gFMs and their performance differential in various tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduced a benchmark suite for genomic foundation models (gFMs) called GeneBench that systematically evaluates gFMs on a wide array of datasets across a range of tasks for both short- and long-range sequence prediction. This work also presented a new method called GenHybrid that leverages both SSM and attention-based model architectures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lacking tasks beyond classification and regression. One of the most promising applications of gFMs is to predict zero-shot mutation effects and generative modeling of genomic sequences. This benchmark effort misses both aspects. Many mutation effect databases are available, and a comprehensive curation of a benchmarking dataset will be of vast interest to the community. \n2.
Lacking a vertical comparison of different model architectures across model sizes and pre-training schemes. I understand this will be computationally costly, but as a benchmarking effort, this is necessary to paint a more complete picture of the model landscape. \n3. Missing naive benchmarks and ab initio models for comparison. It has been shown in many recent studies that gFMs do not outperform ab initio models trained on task-specific datasets. Adding both ab initio models and naive benchmarks will be very important for a benchmark suite." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Benchmarked a large number of tasks. \n\n- Compared all major genomic foundation models developed recently." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a benchmark framework for evaluating genomic foundation models. The authors gathered a large number of tasks from multiple existing papers for benchmarking. The tasks are classified into either long-range tasks or short-range tasks. A study that compares several existing genomic foundation models using the gathered tasks was performed.
In addition, the authors proposed a hybrid approach that is supposed to work well for both short-range tasks and long-range tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The training procedure in the Pre-training section is not clear. Should the optimization in Eq. (1) actually be involved in pre-training? The two categories of targets described in fine-tuning do not appear in Eq. (1) at all. \n\n- The manuscript appears to be prepared in a hurry, needing major cleaning up. For example, there is no mention of what the abbreviations used in Figure 4 stand for, and the caption of the same figure mentions (a), (b), and (c), but based on the content I believe only (c) is present. As another example, in line 406, the authors mentioned they studied NT with different choices of the number of parameters. However, there is no description of the results and no corresponding discussion to offer any insights. As the last example, in Table 7, Caduceus is the second-best performing model, but the authors said it was HyenaDNA in the text. \n\n- There is no description of how the proposed Genhybrid was trained. \n\n- The value of their main findings may be limited. Since attention-based models were trained using short-length sequences while convolution-based models were trained using long sequences (to consider long context), it is expected that the former is better on short-range tasks and the latter has a potential advantage on long-range tasks. \n\n- The effort in data curation is minimal. It looks to me like the data was simply pulled from previous works. Could the authors clarify whether there was any additional processing or validation of the data? \n\n- Due to the limitations of the work pointed out by the authors themselves, I do not see how they can answer the last two questions summarized in the second paragraph of their introduction. Could the authors explain?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could the authors discuss how GeneBench differs from or improves upon existing genomics benchmarks?\n- Could the authors provide detailed input-output descriptions for some tasks (e.g. Genomic Structure Prediction)? \n- For the visualization in Figure 8, it would be helpful if the authors added x-axis and y-axis labels to the heatmaps. Similarly, for Figures 10 and 11, what's the range of the y-axis?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper offers a wide-ranging and detailed evaluation of GFMs and also provides concrete guidance for users on how to select models based on different tasks.\n- The paper provides a clear classification of GFMs and benchmarking tasks.\n- Beyond benchmarking, the paper proposes a new model, GenHybrid, based on the insights from the experiments, which achieves the best performance on most tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces GeneBench, a benchmarking suite specifically designed for evaluating Genomic Foundation Models across a wide range of genomic tasks.
GeneBench includes evaluations of eleven GFMs on forty-four datasets, with tasks spanning various genomic regions and functions. This systematic benchmarking reveals insights based on the performance of GFMs across short- and long-range tasks. Furthermore, the paper proposes a new model that incorporates advantages from two types of models and demonstrates effective performance across all tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This is a benchmarking paper, but it does not include comparisons with other existing genomic benchmarks (e.g., the length of input sequences, types of benchmarked methods, etc.). This limits the motivation for why the research area needs this new benchmark.\n- While the benchmark focuses on GFMs, it would be better to have simpler baselines without pretraining (e.g., CNNs). Including such models would provide a deeper understanding of the advantages or limitations of GFMs relative to classical models.\n- The paper doesn't provide sufficient descriptions of several tasks (e.g. Genomic Structure Prediction)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The authors are suggested to state the advantages of their method. For example, why should a bioinformatician use their proposed method instead of others?\n2.
The authors should provide some biological insights.\n3. Some case studies can be provided. For example, how the proposed method can be used to facilitate biological findings." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The manuscript is well-organised and the experiments are relatively comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study introduces a comprehensive benchmark suite, GeneBench, for evaluating the efficacy of Genomics Foundation Models. The authors systematically evaluated several DNA tasks including coding region, non-coding region, and genome structure. They also provided some insights into the model design and model training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Lack of biological insights" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce GenBench, a benchmark suite for evaluating Genomic Foundation Models (GFMs). Based on our experimental insights, we propose GenHybrid, an effective SSM-attention hybrid model suitable for all tasks." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024genebench,\ntitle={GeneBench: Systematic Evaluation of Genomic Foundation Models and Beyond},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0bswm093Yl},\nnote={under review}\n}" }, "abstract": { "value": "The Genomic Foundation Model (GFM) paradigm is expected to facilitate the extraction of generalizable representations from massive genomic data, thereby enabling their application across a spectrum of downstream applications. 
Despite advancements, the lack of an evaluation framework makes it difficult to ensure equitable assessment, due to differences in experimental settings, model intricacy, benchmark datasets, and reproducibility challenges. In the absence of standardization, comparative analyses risk becoming biased and unreliable. To surmount this impasse, we introduce GeneBench, a comprehensive benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models. GeneBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies. Through systematic evaluations of datasets spanning diverse biological domains, with a particular emphasis on both short-range and long-range genomic tasks, GeneBench is the first to cover the three most important DNA task categories: Coding Region, Non-Coding Region, and Genome Structure. Our results on GenBench have led to an interesting discovery: regardless of the number of parameters, the noticeable variation in preference between attention-based and convolution-based models for short- and long-range tasks could offer valuable insights for the future development of GFMs. As a result, we propose a straightforward modified model called Genhybrid, which is an effective and efficient convolution-attention hybrid model suitable for all tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "genetic foundation model", "benchmark", "hybrid model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/af90bfafa536d15615226019c9450f07a06e60fc.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "GeneBench: Systematic Evaluation of Genomic Foundation Models and Beyond" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0cBttXaOUK
Multi-aspect Knowledge Distillation with Large Language Model
main
Active
Multi-aspect Knowledge Distillation;LLM;MLLM
transfer learning, meta learning, and lifelong learning
3;5;5;5
4;4;3;4
2;3;3;2
2;2;3;2
3;3;3;3
4.5
3.75
2.5
2.25
3
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The paper develops a simple way to distill the multi-aspect knowledge of an MLLM to perform image classification using a student model. The experiments show some improvements; however, I still believe that the contributions of this paper are quite limited. Additionally, the baseline models selected in this paper are quite outdated. In summary, the overall technical novelty of the direct injection of knowledge from large models seems incremental.\n\n1. As shown in Tab 1, MLLMs perform badly in zero-shot classification on fine-grained image test datasets; how do we ensure that MLLMs provide correct answers across multiple aspects? \n2. There are questions regarding the task details when extending to object detection: should the input to the MLLM be the object within the box or the entire image? The entire image may contain multiple objects, and the MLLM's response may not be accurate." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is written in a clear and straightforward manner, making it easy to quickly grasp the method's approach.\n2. The paper conducts a lot of experiments, and the figures and tables are well-organized.\n3. The authors claim to be the first to offer a novel perspective on distilling multi-aspect knowledge regarding abstract and complex concepts. I have seen the authors' efforts in the design of knowledge transfer." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper starts from the perspective of how humans classify images, where humans typically consider multiple aspects such as context, shape, color, and other features. Motivated by this, the authors propose a multi-aspect knowledge distillation method that utilizes Multimodal Large Language Models (MLLMs) to improve image classification performance. By querying, extracting relevant logits, and expanding the model's output dimensions, the method enables the model to learn both visual aspects and abstract knowledge. This method enhances the performance of baseline models across many experiments, demonstrating the potential of multi-aspect knowledge distillation in computer vision and other tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method shows some improvement on some classic CNN-based models but lacks experiments on ViT-based models. \n2. In the knowledge distillation task, the comparison is only done with KD, lacking comparisons with other knowledge distillation methods [1,2].\n3. The improvement in object detection tasks is very limited in Tab 7, and there is no comparison done on currently well-performing object detection methods. 
Object detection is inherently a more fine-grained visual task than classification. Still, the experiments in this paper do not demonstrate the effectiveness of multi-aspect knowledge distillation in detection.\n4. The explanation for the poor zero-shot classification performance of MLLMs in Tab 1 is missing. Incorrect knowledge could also be distilled to the student model.\n5. The training curve of the MaKD loss over training iterations is missing. The visualization of t-SNE embeddings and the model's multi-aspect responses to a single image are presented in Fig 4 and 5. There is no overall evaluation of the model's multi-aspect responses on the test dataset.\n\n[1] Decoupled Knowledge Distillation\n[2] Logit Standardization in Knowledge Distillation" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I hope the authors can better explain the novelty of the paper and the principle behind why the algorithm is effective." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written, with a clear and logical flow from the introduction through to the conclusion. 
The authors present simple ideas in a straightforward manner, making the paper accessible to readers from diverse backgrounds. The experimental setup is meticulously organized, with each step of the process described in a way that facilitates reproducibility. The authors outline the methodologies, datasets, and evaluation metrics in clear subsections, allowing readers to follow the experimental design intuitively." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new knowledge distillation method that performs multi-aspect knowledge distillation with an LLM and an MLLM. The LLM is utilized to generate multi-aspect questions using the class name and a prompt. The method then adopts the MLLM to extract the logits for the multi-aspect questions and obtain the probabilities corresponding to the 'yes' token. The student is optimized by the original cross-entropy loss and the distilled binary cross-entropy loss. Extensive experiments are conducted to demonstrate the effectiveness of the proposed method. It is also extended to the object detection task to show its potential." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1、\tLimited Experimental Setting: The experimental setting is narrow, which restricts the generalizability of the findings. The scale of the datasets is small and may not be sufficient to demonstrate the robustness of the proposed method across different scenarios. Expanding the experimental scope to include more varied or challenging datasets such as the full ImageNet would significantly strengthen the paper.\n\n2、\tLack of novelty: The proposed method directly adopts the MLLM’s output logits to perform distillation. The principle behind this design is not fully demonstrated. Why can the MLLM help improve the performance of the student, and what features support this? \n\n3、\tSome details are missing, and some experimental comparisons are not fair. 
The number of parameters of the MLLM is larger than that of the teacher model in traditional KD. It is questionable whether the improvement is due to the large number of parameters or the inherent properties of the MLLM itself. What will happen to the performance of the student model if only the large vision encoder of the MLLM is adopted? Some comparisons to traditional methods are not fair. The basic KD adopted in the classification experiments is too old, and improved versions should be used.\n\n4、\tThere is no comparison to SOTA KD methods in object detection, and the baseline should also adopt a more powerful setting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I noticed that even using random logits leads to performance improvements (Table 3(b)). Could you clarify the underlying reason for this result?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The core idea is simple but looks effective.\n2. The paper writing is fluent and easy to follow.\n3. The paper conducts experiments on six different fine-grained datasets and two different coarse-grained datasets. 
The results show that the proposed method achieves stable performance improvement, especially on the fine-grained datasets.\n4. The ablation studies and related visualization are comprehensive and insightful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a multi-aspect knowledge distillation framework that uses MLLMs to improve model performance in visual understanding and detection tasks. By expanding the model’s output dimensions, the method distills multi-aspect logits that encapsulate diverse visual and contextual features beyond standard class labels. Extensive experiments on various image classification datasets, complemented by thorough ablation studies, underscore the framework's effectiveness and robustness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The evaluation datasets in the paper are relatively small, and the model parameters appear insufficient in 2024 . Using ResNet18/34 as the primary model limits the assessment of the framework’s scalability. It would be valuable to test the framework on a larger dataset, such as ImageNet, and with a more complex model like ResNet101, to assess its effectiveness in a more challenging setting.\n\n2. The paper lacks comparisons with other knowledge distillation (KD) baselines, which would provide a clearer benchmark for evaluating the proposed method’s relative performance.\n\n3. The framework could explore additional ways to leverage the knowledge in MLLMs. For instance, distilling logits from the last token output by the MLLM after processing the input image may capture different aspects of visual representation.\n\n4. While Section 5.5 discusses training time and computational cost, the analysis might be incomplete. The time required for MLLMs to annotate the training dataset should also be considered to provide a more comprehensive assessment of computational demands." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The approach to utilizing knowledge distillation is somewhat unclear. When are the multi-aspect logits extracted from the MLLM, and how are they incorporated into the model's training or inference objective?\n\n2. Given that GPT-4o generates the multi-aspect questions and that the MLLM has not seen images from each category (especially considering these categories are often long-tailed and fine-grained), do you have any validation or filtering steps in place for the generated questions and responses, or have you considered comparing the generated questions to human-curated ones? Which types of generated questions contribute most to performance improvements?\n\n3. When generating responses to the multi-aspect questions for each image, has the potential hallucination issue within the MLLM been considered? How accurately can the MLLM (InternVL) answer these generated questions, and to what extent does the hallucination issue in InternVL affect the accuracy of its responses?\n\n4. For the object detection task, have you attempted to use other datasets, such as the larger-scale LVIS?\n\nI may reconsider my score based on your response to these issues." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper combines traditional network models, such as ResNet, with MLLMs to enhance accuracy in classification and detection tasks. \n\n2. It uses multi-aspect questions to extract knowledge from MLLMs, leveraging this knowledge to support classification. \n\n3. The experiments are comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel approach to solving computer vision tasks, such as image classification and object detection, by enhancing conventional models' classification capabilities through knowledge distillation from Multimodal Large Language Models (MLLMs). The method involves expanding the dimensionality of the model's original logits, which improves classification accuracy. The paper provides numerous ablation experiments and conducts a thorough analysis of the results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The approach to utilizing knowledge distillation is a bit unclear—are you applying this strategy during training, or is it only used in inference? Additionally, there seems to be a lack of consideration for hallucination issues that may arise with GPT-4o during the generation of questions and responses." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multiaspect,\ntitle={Multi-aspect Knowledge Distillation with Large Language Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0cBttXaOUK},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in deep learning have significantly improved performance on computer vision tasks. Previous image classification methods primarily modify model architectures or add features, and they optimize models using cross-entropy loss on class logits. Since they focus on classifying images with considering class labels, these methods may struggle to learn various aspects of classes (e.g., natural positions and shape changes). In contrast, humans classify images by naturally referring to multi-aspects such as context, shape, color, and other features. Inspired by this, rethinking the previous approach from a novel view, we propose a multi-aspect knowledge distillation method using Multimodal Large Language Models (MLLMs). Our approach involves: 1) querying Large Language Model with multi-aspect questions relevant to the knowledge we want to transfer to the model, 2) extracting corresponding logits from MLLM, and 3) expanding the model's output dimensions to distill these multi-aspect logits. We then apply cross-entropy loss to class logits and binary cross-entropy loss to multi-aspect logits. Through our method, the model can learn not only the knowledge about visual aspects but also the abstract and complex aspects that require a deeper understanding. We primarily apply our method to image classification, and to explore the potential for extending our model, we expand it to other tasks, such as object detection. In all experimental results, our method improves the performance of the baselines. 
Additionally, we analyze the effect of multi-aspect knowledge distillation. These results demonstrate that our method can transfer knowledge about various aspects to the model and the aspect knowledge can enhance model performance in computer vision tasks. This paper demonstrates the great potential of multi-aspect knowledge distillation, and we believe it offers a promising direction for future research in computer vision and beyond." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multi-aspect Knowledge Distillation", "LLM", "MLLM" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1f3ea1a0729b4006e2948a7a22aa1bbe8967403f.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/86c44b4be3af16d50dbb0cae78b18c60d2df1ada.pdf" }, "title": { "value": "Multi-aspect Knowledge Distillation with Large Language Model" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0cadcLKbt7
TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices
main
Active
DML Systems;Edge LLM Serving;Tensor Parallelism;Memory Scheduling
infrastructure, software libraries, hardware, systems, etc.
1;5;5;5
5;4;4;4
1;3;2;2
1;2;3;2
1;3;2;2
4
4.25
2
2
2
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. What is the use-case for this work if it will take minutes to hours to generate an output?\n2. How do these results change if you have a faster edge device compared to a CPU?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper aims to solve a very important problem: how to run very large models with billions of parameters on CPUs or machines with no CUDA." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a technique to run a 70B LLM on CPU-based (edge) devices. The system uses a tensor parallel framework to distribute attention heads across multiple nodes. The authors then perform an All-Reduce latency analysis, claiming that latency, not bandwidth, is the main bottleneck for all-reduce. The authors then describe a sliding memory scheduler, followed by experiments where they show the performance of their system." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Thank you for submitting your work to ICLR. I believe the current version of the paper has many shortcomings, which I will describe here in detail:\n\n1. The paper needs thorough proof-reading. 
Some examples:\n- by adaptively partitioning model--->a model or models\n- with token latency increases-->increasing\n- Write in Active voice to avoid odd sentence structures such as :\n- \"Constrained by the high link latency, a star-based allreduce algorithm is implemented\"\n- \"We design a TPI-LLM\"--->We design TPI-LLM\n- \"power collaborate\"--->to collaborate\n- \"which dominate inference time\"---> \"what dominate\"\n\n\n2. I did not really understand the example on p.5 \"For example, the path from device h2 to h1 via h8\nfollows the route h2 → r2 → r9 → r8 → h8 → r8 → r9 → r1 → h1, resulting in a total link latency of 16τ , where τ is the per-hop link latency\"\n\n\n3. The Sliding Window Memory Scheduling is very similar to PipeSwitch (https://www.usenix.org/conference/osdi20/presentation/bai). The main difference is that you are swapping in/out from disk to/from device memory. This is in many ways also similar to memory-page-to-disk swapping.\n\n4. Starting with the results for the swapping: having an OOM is probably better than having 26.1s/token. For a 100-token output, you need to wait for roughly 30 minutes. This is an output of about 75 words as per OpenAI (https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them), i.e., one paragraph. Why would a user want to tolerate this? If the user chooses to use the fastest model (the Llama2-3B), they will wait for a bit less, about 3.3 minutes. I am not sure if there is a use-case for such a slow-running LLM.\n\n5. Moving to the networking results in 4.2, I think the authors are drawing the wrong conclusions. The computations are just so slow in this case that the network is not really a bottleneck. I think a better experiment would be to try the system with proper edge GPUs/TPUs, e.g., Google's Coral, NVidia's Orin, or NVidia's Nano. Right now, I believe that what we are seeing is the result of just a very slow computation. 
You can already see that in a real-world scenario, things are much worse: in the real case study, the latency in Table-3 is several times higher than in Table-1." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "When the author discusses opportunities, I have several points of confusion:\n\n1. Why does the author believe that the communication proportion of tensor parallelism is high, yet the overall inference time is reduced due to parallel computation? Is this conclusion drawn from the results in Figure 1-(b)? Has the author considered the synchronization issues among multiple edge devices in tensor parallelism?\n \n2. Why is the total time of tensor parallelism less than 100% in Figure 1-(b)? Does the author want to use pipeline parallelism as a baseline to illustrate the superiority of tensor parallelism among edge devices?\n \n3. In Figure 1-(c), why is the memory footprint of each device in the TPI-LLM framework proposed in this paper the same? If tensor parallelism is used, it should decrease as the number of devices increases." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper addresses the emerging challenge of deploying large language models (LLMs) on edge devices, which is a novel and increasingly relevant problem in the era of edge computing and privacy concerns. It proposes a new approach to tensor parallelism that is specifically tailored for low-resource environments, which is an original contribution to the field.\n\nThe authors combine concepts from distributed computing, memory management, and parallelism to create a system that is both memory and compute-efficient. The sliding window memory scheduler and the star-based allreduce algorithm are creative solutions that address specific pain points in edge device inference.\n\nThe work has significant implications for the future of edge computing, as it enables the deployment of LLMs on devices that were previously considered incapable due to resource constraints." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel tensor parallel inference system named TPI-LLM, designed to efficiently serve large-scale language models (LLMs) on low-resource edge devices. The system addresses the challenges of limited computing power, memory, and bandwidth on edge devices by introducing a sliding window memory scheduler that dynamically manages layer weights during inference, overlapping disk I/O latency with computation and communication. This approach allows larger models to run smoothly on memory-limited devices while keeping sensitive raw data local to the user's devices, enhancing privacy.\n\nThe key contributions of the paper are as follows:\n\n1. 
Tensor Parallelism on Edge Devices: The paper argues for the effectiveness of tensor parallelism over pipeline parallelism on low-resource edge devices and presents TPI-LLM as a compute- and memory-efficient solution for serving 70B-scale models.\n \n2. Sliding Window Memory Scheduler: A memory scheduler is introduced to asynchronously load and unload layer weights, which enables the inference of larger models on devices with limited memory by overlapping disk I/O with computations and communications.\n \n3. Star-Based AllReduce Algorithm: The paper identifies link latency as the main issue in communication and implements a star-based allreduce algorithm to reduce latency, outperforming ring- and tree-based methods commonly used in high-latency networks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think that although this paper is a bold attempt at edge LLM serving, most of the solutions provided are based on the application of past works. Firstly, whether it is tensor parallelism, the sliding window, or the star-based algorithm, they are all proposed in existing and relatively mature works. The author's approach to using these methods to optimize edge LLM serving is similar to theirs, hence the paper's contribution lacks a certain level of innovation.\n\nFurthermore, I think there are some flaws in the author's logic when explaining the motivation and opportunities for the research. I only learned from the first sentence of the abstract that the significance of serving large models at the network edge lies in preserving data privacy. In the first paragraph of the introduction, I learned that if edge devices must be used for LLM serving to ensure user privacy and security, then more edge devices will have to be used in distributed collaborative serving due to the limitations of computing and storage resources. 
However, the title of the second paragraph is \"Observations and Motivations,\" but the content only contains observations and no motivations. Therefore, I suggest that the author optimize the logic of explaining the research motivation and opportunities. The scope of privacy and security issues is too broad, so the research motivation seems somewhat weak. Can it be combined with some more specific downstream tasks or application scenarios?\n\nFinally, the experimental results shown in Table 1 show that the latency of serving large models on edge devices is much higher than that in the cloud, with TTFT exceeding one second, and the throughput is far from comparable to that of the cloud. Does this indicate that the research motivation of the paper only considered privacy protection and neglected performance issues? Although the experimental results are already much better than Galaxy, this issue is a huge challenge that all edge LLM serving inevitably faces." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- This technique might also be useful even in cloud server settings, especially when there are not enough GPUs; can the sliding window memory management help avoid OOMs?
Any thoughts in that direction, or future recommendations?\n\nHowever, the following are the concerns/questions.\n\n### Major concerns\n- The proposed framework applies two major operational changes to the pre-trained LLM: (i) the distribution of attention heads across nodes, and (ii) the memory management through the sliding window approach. Given these two changes, the paper does not discuss the performance implications with and without the proposed framework. It is important to guarantee that the performance remains the same. \n - Given that, it is recommended to show that the performance (at least in the best-case settings on at least one model) remains unchanged with or without TPI-LLM.\n- Figure 5 is more concerning in the following respects.\n - No impact on bandwidth: There is no surprise in the lack of impact of increasing the bandwidth, since the sliding window size is defaulted to 2. However, we can only learn about the impact by increasing the window size. There is no such ablation in the paper that shows the best combination of the number of devices, maximum possible sliding window size, bandwidth, and available memory on each of the devices. \n - The recommendation is to provide a thorough feasibility study on the combination of the above variables to clearly understand the impact of the bandwidth. In fact, on that note, there is no clear analysis of the maximum possible sliding window size for a given hardware configuration of the master/worker nodes. Therefore, the feasibility study can be preceded by determining the maximum window size in order to limit the number of combinations to be studied.\n - \"Increasing the number of devices/cores reduces the token latency\" (from the first two sub-figures in Figure 5) is not a true statement and is really vague. Those plots are shown for only 8 devices; if the number of devices keeps increasing, after a point we see diminishing returns of parallelism. 
That is where the communication dominates, hence the diminishing returns. Without a proper study, claims like `increasing the devices reduces the latency` are not valid. \n - The recommendation is to conduct a quick roofline analysis to substantiate the claims or remove the controversial parts. \n- There are a few limitations on what the proposed framework can offer. They are as follows.\n - There is a security and privacy concern: this framework should not run on any device connected to the same Wi-Fi network unless there is prior consent. This is not stated or addressed in the paper, so please clearly state the limitation or the constraints under which the framework can operate.\n - The star configuration cannot scale to a large number of nodes/devices. It probably can be extended to scale in a hierarchical-star configuration etc., but that is not in the scope of the paper, and hence this needs to be stated clearly as a limitation. There are real-time use-cases in resource-constrained edge scenarios where the number of devices is high, which leads to failures of the single master node in the star config.\n - There is probably an unstated assumption in the paper that the data gets generated on the master or stays centrally on the master node/device. However, there is a high chance of the worker nodes/devices having user-specific data. It appears that the proposed framework does not handle data that is generated on all the other devices. If that is not the case, please clarify how that data is handled on each of the workers. Otherwise, this limitation should be stated. \n\n### Minor concerns\n- In Figure 4, Time steps 7 and 8 have the same memory window; why is that the case? It is a bit confusing to understand. Has the memory peaked, so the window does not slide, or is it something else? 
Please provide clarifications or amend the figure to make it clear.\n- Tables 1 & 2 provide comparisons of TTFT, latency, memory usage, etc., with and without the use of the memory scheduler. However, when the memory scheduler is disabled, it is not clear how those numbers are attained. How were they measured? By using accelerate, vanilla Klonet, Galaxy, llama.cpp, or others?\n - Please add details on how the stats were measured and the underlying frameworks. Ideally, benchmark comparisons against the best possible frameworks/methods are common practice.\n- Section 4.3 states the `comparison with benchmarks`. Ideally, those (a. Standalone, b. Model Parallelism (MP) and c. Galaxy) are SOTA methods/frameworks (in this case), are they not? Why are they called benchmarks? Clearly there is benchmarking of TPI-LLM against those other methods. \n - The recommendation is to rephrase in order to convey the message so that confusing the reader can be avoided." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Following are the strengths:\n- The paper is well-written and easy to follow, and the concepts presented are easy to understand.\n- The paper tries to address the critical problem of LLM inference on edge devices.\n- The paper discusses the existing frameworks in this space and positions itself within the body of knowledge.\n- Achieving faster TTFT latency in 3k LOC, and for large models, is always interesting and of use to a broader audience.\n- Although the paper uses multiple devices connected in the same network (Wi-Fi/cable connected), it is understandable that there are use-cases where a house will have multiple edge devices, all of which might operate in tandem; there may be business orgs that have edge devices that don't have permission issues to run LLMs on multiple such devices. 
For such use-cases this is a significant contribution.\n- The paper proposes an easy-to-use technique of overlapping the data fetch with the communication step in LLMs via the proposed sliding window strategy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new framework called TPI-LLM for model serving on low-resource edge devices using tensor parallelism and memory scheduling. The proposed framework shows better performance than the SOTA frameworks in terms of latency to predict the first token and the overall token latency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please see the questions section; there is a cohesive list of weaknesses and the corresponding questions to be addressed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see questions from weaknesses." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1 - The paper provides extensive theoretical analysis.\n\n2 - The proposed approach is evaluated in both emulations and real-world testbeds against state-of-the-art baselines.\n\n3 - The performance is significant, especially on edge devices with limited resources." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes TPI-LLM, a tensor parallelism-based distributed LLM inference system tailored to low-resource edge devices. Unlike inference tasks on servers, TPI-LLM keeps user data securely on user (edge) devices during inference. By analyzing the network latency bottleneck and memory limits, TPI-LLM leverages a star-based all-reduce algorithm to facilitate distributed inference and employs a sliding window-based memory scheduler to reduce inference memory footprints. Experiments on both emulations and real-world testbeds show that TPI-LLM outperforms existing baselines by providing lower TTFT, token latency, and peak memory footprints." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1 - In Section 2, Q1 is somewhat ambiguous. First, isn't tensor parallelism already a form of model parallelism? And second, even on low-resource edge devices, can we combine and use both parallelism techniques instead of simply abandoning one?\n\n2 - In Section 3.2, the example network topology here is star-based (Appendix A.7). Given this star-based topology, a star-based all-reduce scheme indeed would be most efficient. Is this a common network topology for all edge scenarios?\n\n3 - Figure 4 may need further detailed descriptions. The sliding window is not very clear in the figure. 
For example, in Time 7 and 8, why would you prefetch Blocks 6, 7, and 8 so early when it's still far away from actually using them? Isn't that prefetching too early causing memory waste?\n\n4 - Note that the layer-wise parameter offloading is not new. Many popular frameworks, such as DeepSpeed-Inference, Mixtral Offload, and Llama.cpp, support this offloading scheme. How does the proposed TPI-LLM differ from existing offloading techniques?\n\n5 - Evaluation lacks sensitivity analysis on the memory sliding window size. Why would you pick a window size of 2?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This work can serve 70B-scale LLMs efficiently using multiple edge devices with limited computing power, memory, and bandwidth." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024tpillm,\ntitle={{TPI}-{LLM}: Serving 70B-scale {LLM}s Efficiently on Low-resource Edge Devices},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0cadcLKbt7},\nnote={under review}\n}" }, "abstract": { "value": "Large model inference is shifting from cloud to edge due to concerns about the privacy of user interaction data. However, edge devices often struggle with limited computing power, memory, and bandwidth, requiring collaboration across multiple devices to run and speed up LLM inference. Pipeline parallelism, the mainstream solution, is inefficient for single-user scenarios, while tensor parallelism struggles with frequent communications. In this paper, we argue that tensor parallelism can be more effective than pipeline on low-resource devices, and present a compute- and memory-efficient tensor parallel inference system, named TPI-LLM, to serve 70B-scale models. 
TPI-LLM keeps sensitive raw data local in the users' devices and introduces a sliding window memory scheduler to dynamically manage layer weights during inference, with disk I/O latency overlapped with the computation and communication. This allows larger models to run smoothly on memory-limited devices. We analyze the communication bottleneck and find that link latency, not bandwidth, emerges as the main issue, so a star-based allreduce algorithm is implemented. Through extensive experiments on both emulated and real testbeds, TPI-LLM demonstrated over 80\\% less time-to-first-token and token latency compared to Accelerate, and over 90\\% compared to Transformers and Galaxy, while cutting the peak memory footprint of Llama 2-70B by 90\\%, requiring only 3.1 GB of memory for 70B-scale models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "DML Systems", "Edge LLM Serving", "Tensor Parallelism", "Memory Scheduling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/06e1b42fa6edd155c508793b0d6c1310d7624497.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." 
}, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0ctvBgKFgc
ProtComposer: Compositional Protein Structure Generation with 3D Ellipsoids
main
Active
protein design;diffusion model;controllable generation;drug discovery;proteins;biology
applications to physical sciences (physics, chemistry, biology, etc.)
5;8;8;8
3;4;4;3
2;3;4;4
3;3;4;3
3;3;4;4
7.25
3.5
3.25
3.25
3.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Is it possible to set the order of ellipsoids? Or how complicated would it be to extend this framework to allow the user to set the order of ellipsoids?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper is well motivated, formulated, written, and evaluated. \n\n1. The injection of ellipsoid information is achieved through cross attention. This allows the ellipsoids to be an unordered set, i.e., the user doesn't have to specify the order of ellipsoids; the model also decides the order of ellipsoids. \n2. The formulation of the ellipsoid token is effective and easy to extend to conditions other than secondary structure (such as hydrophobicity).\n3. Thorough analysis of ellipsoid consistency, including both geometric and probabilistic metrics.\n4. Fig. 4: The comparison of designability/diversity with baseline methods is done by comparing Pareto frontiers with varying sampling temperatures/guidances. This clearly shows the tradeoff between the methods and the performance improvement over baselines. I found this analysis insightful and believe other papers arguing for increased performance could benefit from a similar evaluation scheme.\n5. 
The practical use-case of the method is shown in Section 4.3 (flexible conditioning).\n6. The controllability is greatly improved over Chroma (Table 1). Accuracy and coverage are very impressive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework to generate protein structures by conditioning on layouts specified through 3D ellipsoids. The conditions include location, size, orientation, and secondary structure. These conditions are injected into flow-based protein generative models via the proposed cross-attention and update modules. It shows greatly improved controllability and designability over baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "How is the sequence design performance? What I understand is that the designability is solely based on the generated structure (i.e., the generated sequence is discarded). Can you also present co-design designability values as in the MultiFlow paper?\n\nOther than that, I did not find any major weaknesses in the paper. However, the ablation study can be improved. Most of the model ablations are based on guidance strength, but I am also curious about ablation studies on (i) the ellipsoid segmentation cutoff (5 Å currently), and (ii) whether to allow residue tokens to update ellipsoid tokens or not. For (ii), explaining the reasoning behind the design choice may suffice." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. **Ellipsoid Representation**\n\n a. Choosing the Number of Ellipsoids (k):\n - *Training:* Is k determined by the structure of the training protein? If so, what is the distribution of k in natural proteins?\n - *Evaluation based on the statistical model:* The authors appear to have used a fixed k=5 in the experiments. How was this number chosen? Have the authors tested other k values?\n\n b. Number of Residues per Ellipsoid:\n - The current representation specifies the number of residues in each ellipsoid, but the authors show that this number directly depends on ellipsoid volume. Could specifying the number of residues be redundant, and might removing this constraint provide the model more flexibility in generation? Have the authors examined the impact of residue count on amino acid (AA) prediction?\n\n2. **Invariance Cross-Attention (ICA) Layer Design**\n\n a. The authors separately model the SE3 features (E_k) and scalar features (e_k) of ellipsoids using the proposed ICA and transformer to achieve SE3 invariance in the local frame. Has the team considered alternative approaches, such as modeling ellipsoids as “pseudo frames” with SE3 and scalar features and simply using IPA to update ellipsoid and residue features together?\n\n b. In Algorithm 1:\n - Could the authors clarify the *PosEmbed* used in line 223? Does it include distance, angles, or local coordinates?\n - Could they also explain why the query uses un-updated $s$, while the key and value use $a$, which incorporates current ellipsoid information?\n\n3. **Results in Table 2 (natural proteins):** The model, even with the strongest guidance, tends to overestimate helices in proteins. 
Additionally, the authors did not present designability results, which, based on Figure 16, may be compromised with strong guidance. Could the authors elaborate on the model's performance in addressing the “overrepresented helix problem,” the trade-offs with other metrics, and its overall comparison to models like *RFDiffusion*?\n\n4. In self-conditioning (line 290), they propose supplying interpolated conditions to both conditional and unconditional models, suggesting this improves “designability and ellipsoid adherence for all $\\lambda$ values.” However, no ablation studies were provided to verify this claim." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "**[Clarity & Quality]**\n- The manuscript is well-written, with a thorough introduction and background information on protein structure generation, spatial conditioning, and flow matching for data generation. Overall, it provides a smooth reading experience.\n- The paper is of good quality, presenting clear mathematical foundations grounded in current techniques for protein modeling, diffusion-based generation, and guided sampling.\n- The problem is well-defined, and the authors designed several experiments to evaluate model performance in 1) following conditioning, 2) improving general performance, and 3) demonstrating practical use in flexible conditioning, with both quantitative and qualitative comparisons.\n\n**[Significance]**\n- The spatial conditioning approach using ellipsoids is intuitive for practical applications and has potential implications for the utility of protein generation models.\n- The proposed methods appear generalizable to various spatial conditioning scenarios. 
(*However, this paper focuses solely on secondary-structure conditioning.*)\n\n**[Originality]**\n- The authors introduce a novel conditioning modality using spatial ellipsoids for protein generation, along with a new layer, \"invariant cross-attention,\" to integrate this information." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work extends Multiflow to accept spatial conditioning of secondary structures via 3D ellipsoids, aiming to improve control in protein generation and reduce the overrepresentation of alpha-helices in current generative models. Building on Multiflow’s architecture, the authors address two main challenges:\n\n1. Integrating and updating ellipsoid conditioning with structure embeddings with minimal modifications: they introduced an *Invariant Cross Attention* module to update residue embeddings while preserving local SE3 invariance.\n2. Implementing an effective conditioning approach for flow-matching models: they used classifier-free guidance to interpolate flow vectors across translation, rotation, and amino acid spaces, and employ self-conditioning to refine predicted structures\n \nExtensive experiments show that the proposed model can faithfully follow spatial conditioning, resulting in greater diversity, novelty, and improved secondary structure composition. This improves Multiflow by generating proteins with secondary structures more similar to natural proteins. \nOverall, this work presents a straightforward approach to control protein generation, enhancing Multiflow's diversity, novelty, and secondary structure accuracy." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**[Clarity]**\n- Some design choices and model details lack clear explanations or in-depth examination (see Q1, 2, and 4).\n\n**[Soundness]**\n- Performance on Natural Proteins: While the authors demonstrate high designability at fixed helicity levels on synthetic data, it isn't as clear if these benefits hold for natural proteins (see Q3).\n\n**[Significance]**\n- Scope: The current methods are examined only on Multiflow and for secondary structure guidance. Their practical impact on other protein generation models and types of spatial conditioning (e.g., domains, hydrophobic cores) is not extensively explored." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Line 355: How is the length between ellipsoids determined/sampled? Also, consider an ellipsoid with beta strand annotation, how is the length between each stranded segment determined (particularly if a strand ellipsoid is formed by segments that are distant in sequence but close in structure)? 
\n- It would be helpful if the authors could provide ablation study results on the effect of self-conditioning, particularly the two self-conditioning schemes described in line 290.\n- Figure 9: What is the linear fit and statistical significance in both cases?\n- During training, is the ellipsoid conditioning information always provided, or only provided for a percentage of time?\n- Line 367: Why is a structured residue considered as “covered” if it is inside at least one ellipsoid instead of inside the ellipsoid it is assigned? \n- Table 1: Could the authors provide some intuition on why over-guidance (\\lambda > 1) performs better than the conditional model itself (\\lambda = 1)?\n- Table 2: Does “PDB proteins” correspond to the validation dataset or does it include the training dataset?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper is well written with clean visualizations demonstrating methods and results. The concept of utilizing ellipsoids as conditions for protein generation is interesting and novel, providing a bridge between protein-level conditioning and atom-level conditioning. In addition, the authors propose an effective Invariant Cross Attention module for integrating ellipsoid conditioning and demonstrate success at achieving SOTA performance on the Pareto frontier of designability and diversity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents ProtComposer, which is a fine-tuned model from MultiFlow to support conditional generation based on 3D ellipsoids, showing success at controllable design and achieving SOTA on the Pareto frontier of designability and diversity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have no major concerns about this paper. 
However, it would be helpful if the authors could elaborate on the applicability of the ellipsoid-based conditioning approach on practical protein design tasks. How would it help with or facilitate the protein design process?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I am assuming that the user can redefine K during each trial as they see fit, though studies on guidelines in choosing K would be helpful. In situations where the user has only a vague idea of what they are looking for (an unfortunately common occurrence), having a guide on where to start would be beneficial." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "It addresses an important and relevant area. The results are overall quite impressive as well. Additionally, showing the ability to work with handcrafted ellipsoid frames or the more easily scalable ML generated frames shows the practicality of ProtComposer.\n\nI really like the point about using simple ML models for ellipsoid sampling as compared to NNs. I would like to see some stronger analytical or experimental justification of the claim though." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "Introduces ProtComposer, a generative model for proteins. ProtComposer seeks better control over the shape of generated proteins, as well as allowing for greater novelty in the generation of proteins. Notably, the user has to choose between control and novelty; ProtComposer does not achieve both simultaneously. These tasks are accomplished by the introduction of 3D ellipsoid frames to guide the generation of proteins by the pre-existing Multiflow algorithm. \n\nLacking a pre-existing metric that they feel sufficiently quantifies compositionality, the authors introduce their own metric. It is shown that ProtComposer outperforms existing methods on this metric." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Since a custom metric is introduced in this paper and then used as justification for the performance of the model, a section (either in the main paper or the appendix) justifying this metric would be nice. Showing comparisons to other pre-existing metrics, the performance of many other algorithms under this metric, or stronger domain justification would strengthen the meaning of the results. I did not reject over this since the metric intuitively looks good, but further empirical or analytical support would be nice.\n\nThe formatting of the extra results in the appendices results in very odd page layouts, where some pages are blank, others have odd whitespace gaps, etc. Adjustments to make these pages more presentable would be nice." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a framework to generate protein structures conditioned on spatial protein layouts that are specified via a set of 3D ellipsoids." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024protcomposer,\ntitle={ProtComposer: Compositional Protein Structure Generation with 3D Ellipsoids},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0ctvBgKFgc},\nnote={under review}\n}" }, "abstract": { "value": "We develop ProtComposer to generate protein structures conditioned on spatial protein layouts that are specified via a set of 3D ellipsoids capturing substructure shapes and semantics. At inference time, we condition on ellipsoids that are hand-constructed, extracted from existing proteins, or from a statistical model, with each option unlocking new capabilities. Hand-specifying ellipsoids enables users to control the location, size, orientation, secondary structure, and approximate shape of protein substructures. Conditioning on ellipsoids of existing proteins enables redesigning their substructure's connectivity or editing substructure properties. By conditioning on novel and diverse ellipsoid layouts from a simple statistical model, we improve protein generation with expanded Pareto frontiers between designability, novelty, and diversity. Further, this enables sampling designable proteins with a helix-fraction that matches natural proteins, unlike existing generative models that commonly oversample conceptually simple helix bundles." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "protein design", "diffusion model", "controllable generation", "drug discovery", "proteins", "biology" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/150a497084667927c27cc7887351d629a94d394c.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ProtComposer: Compositional Protein Structure Generation with 3D Ellipsoids" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0dELcFHig2
Multi-modal brain encoding models for multi-modal stimuli
main
Active
brain encoding;fMRI;multi-modal models;multi-modal stimuli;Transformers;videos;speech;language
applications to neuroscience & cognitive science
5;5;6
4;3;4
2;3;3
2;3;3
3;2;3
5.333333
3.666667
2.666667
2.666667
2.666667
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- For the procedure described in the Figure 1b caption on removing unimodal influence: why subtract out the unimodal contribution? Why not learn a regression directly from the unimodal contribution itself, i.e., the predictions of $r$?\n- Can it be said that the IB-audio and IB-video unimodal representations described on lines 223-224 are not truly unimodal, since they are extracted from a model that was trained with multimodal inputs? Then, they reflect correspondences between language and vision.\n- Figure 3 second row, middle two columns: Why does the green bar not have $\\wedge *$ for SV? It seems significantly higher than both light green bars. Why does the green bar have $\\wedge *$ for MT? It only seems significantly higher than one light green bar." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This work is novel in that it is the first to use fMRI.
But other works also take similar approaches to identifying multimodal processing (see weaknesses).\n- Findings of the difference between cross-modal and jointly-trained models with respect to regions are novel\n- These findings have neuroscientific significance\n- Appropriate random baselines are used to give context to alignment numbers" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- Existing work uses unimodal models to identify language and vision processing pathways in the brain. This paper studies multimodal processing in the brain. Multimodal networks are aligned to the brain, and regions with better alignment are identified as the sites of multimodal processing. To verify that the alignment is actually due to multimodality, unimodal influence is removed from multimodal representations by subtracting out the multimodal target, as predicted by the unimodal input, from the multimodal features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Relationship with previous work [1] and similar concurrent work [2,3] that uses multimodal models to probe for multimodal processing in the brain should be discussed. [1] studies cross-modal and jointly trained networks and uses naturalistic multi-modal stimuli.\n- The random baseline described in 6.1 is good to make sure that the trained model weights matter. To better get a sense of whether the alignment actually reflects processing in the brain, another good sanity check would be to run a permutation test in which you keep the trained weights, but give the movie stimulus inputs to the model in scrambled order. This would give a floor for the scale of alignments that we see in subsequent results.\n\n## Small things\n- Line 276 typo: wmploy -> employ\n\n## References\n[1] Subramaniam, V., et al.
\"Revealing Vision-Language Integration in the Brain with Multimodal Networks.\" International Conference on Machine Learning (ICML), 2024.\n\n[2] Kewenig, Viktor, et al. \"Evidence of Human-Like Visual-Linguistic Integration in Multimodal Large Language Models During Predictive Language Processing.\" arXiv preprint arXiv:2308.06035 (2023).\n\n[3] Dong, Dota Tianai, and Mariya Toneva. \"Vision-Language Integration in Multimodal Video Transformers (Partially) Aligns with the Brain.\" arXiv preprint arXiv:2311.07766 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "It would help to have additional methodological details in some sections:\n-\tWas cross-subject prediction generated by using all of the predictor subjects’ voxels, to predict the target subject’s voxel-wise responses?\n-\tWhat are the six different modalities in the ImageBind model? I thought it was an audio-visual model (which I would consider two modalities)\n-\tThe layer-wise predictions for models are shown in the appendix, but do the main text figures use all layers or the best layer? If the latter, how is this layer selected?\n-\tAre the whole brain results averaged across all cortical voxels? Or only those with above-threshold cross-subject prediction? Or some other criteria?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "There are many interesting and novel aspects to this paper. First, while there have been extensive encoding model studies on visual or audio stimuli, few have looked at model comparison to multimodal movies. The comparison of audiovisual models to this data is particularly novel. Further, the comparison of different types of multimodal models is interesting (though it is difficult to draw strong conclusions about what their comparison tells you about multimodal processing in the brain, see below), and particularly the attempts to quantify what additional explanatory power a multimodal model has over unimodal models. The encoding analyses were also robust, particularly the cross-movie train/test splits." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper compares different multimodal AI models to human brain responses while participants view audiovisual movies. They compare two different multimodal models, one cross-modal model that learns separate visual/audio embeddings and projects them into a shared representational space, and one jointly pretrained multimodal model, and three unimodal (vision or speech) models. The results show that in most brain regions multimodal training improves encoding model performance of voxel activity, particularly compared to unimodal speech models. The authors do additional residual analysis to understand the unique contribution of multimodal models (over unimodal models) to brain predictivity.\n\nOverall, this is an interesting approach and the paper has many strengths. However, the link between results and overall conclusions was not always clear.
In particular, the small number of highly varied models makes it difficult to draw strong conclusions about parallels between multimodal processing in the models and brains. Finally, there were several clarification/presentation questions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The biggest issues stem from the comparison of the performance of a relatively small number of models that differ along many factors. This limitation makes it difficult to attribute any model differences to multimodality or different cross-modal training schemes, as the model architecture and training sets vary from model to model. \n\nThe fMRI dataset uses a small-n, data-rich design, which is good, but given this, it would help to see the results at the individual subjects’ level (in the appendix). On bar plots, it would be nice to plot each of the six subjects as a point overlaid on the average bar (rather than error bars which can obscure differences across the small number of subjects).\n\nThe residual analyses and results are somewhat confusing. The residual correlation with unimodal features was 0.56, which is still quite high. Given this, it is not clear that unimodal information was removed from multimodal models. Alternatively, the authors could do the residual analysis on the brain instead of the models (i.e., fit a model with both unimodal and cross modal and predict with just cross modal). Relatedly they could calculate the product measure or unique variance explained by multimodal models above unimodal models from this joint model. \n\nOverall, the language and visual region responses look largely the same in most analyses. There are some quantitative/statistical differences, but the pattern is extremely similar. 
The authors should address this.\n\nPerhaps related to the above point, all regions of interest are defined anatomically, but there is a fair amount of subject-wise variability in the anatomy of high-level functional regions, such as language. The authors should address this limitation.\n\nAcronyms and plots were difficult to follow. It would help to spell out vision versus language regions throughout, for example, and clarify the legend in figure 4 (e.g., it was not clear on first read that “-“ indicates “minus”).\n\nFigure 5 is difficult to read and interpret. In terms of clarification, the authors should specify hemispheres/views (it looks like a lateral view on top, medial on bottom, but I’m not certain). The results look extremely noisy and seem to show random patterns, with as much red as blue. Are the blue areas where the unimodal models perform better? How should that be interpreted? The legend says the colorbar indicates “percentage increase/decrease.” Does this mean 0.5% or 50%? If the former, these are very tiny differences, which perhaps explains the above confusion, but I believe it makes it difficult to draw any strong conclusions about these results. \n\nI had questions about two of the conclusions listed in the discussion. I was unsure what the second part of conclusion 2 (“This variance can be partially attributed to the removal of video features alone, rather than auditory features.”) meant.
I am also unconvinced of conclusion #4 given the overall similarity between language and visual brain regions described above.\n\nTypos:\n-\tLine 224: “audo” --> “audio”\n-\tLine 276: “wmploy” --> “employ”" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why did the authors choose to use parametric statistics? To my knowledge non-parametric statistics are more common in NeuroAI to estimate a baseline performance rather than assuming one. \n\nAre the brains on the bottom in Figure 5 the medial view? It is difficult to see why there are four brains in each box. Outside of labeling, showing the sulci on the inflated brains would help to orient the reader to what is being shown." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "For the most part, the authors use standard and well-validated methods to answer their question. I particularly like the approach of evaluating the performance on entire held-out videos and using the residual analysis to investigate multi-modal effects above and beyond unimodal effects."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors aim to address whether there is a difference in brain-alignment based on whether multimodal models had cross-modality pretraining (separate encoders for two modalities) or joint pretraining (shared encoder across modalities). They compare one model of each type to video and speech models and evaluate the prediction in a large-scale open access movie dataset. Using residual analyses, they investigate whether there are multi-modal representations in visual and language regions of the brain." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors only evaluate two multi-modal models and a small number of unimodal models. However, the chosen models differ on many factors (e.g. architecture, training data) in addition to the input modality, and as a result, it is premature to draw conclusions about semantic representations in the brain that may be attributable to any of these factors. To my mind, there are two ways to mitigate this concern: 1) controlled model training and evaluation so that only one factor varies at a time, or 2) testing many different models of a given class such that even across significant model variations there is a robust effect of modality regardless of particular model factors. I think that this is a serious concern because not all prior work has found that multi-modal models are more brain aligned. In controlled comparisons between visual models with the same architecture and training data, there was no performance increase as a result of contrastive image-language training (Conwell et al., 2023). These authors suggest that the higher alignment of CLIP relative to unimodal models in other work may be due to training set size. \n\nConwell, C., Prince, J. S., Kay, K. N., Alvarez, G. A., & Konkle, T. (2023).
What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines? (p. 2022.03.28.485868). bioRxiv. https://doi.org/10.1101/2022.03.28.485868\n\nA minor weakness of the paper is that the authors use custom, non-standard acronyms and names for brain regions (e.g., scene visual area, SV, and object visual processing region, OV). It is confusing as a reader, but more critically, it is difficult to understand what has been found for particular regions across the field, making the status of the literature more tenuous. I would suggest that the authors adopt standard acronyms throughout (e.g., PPA instead of SV). \n\nAlthough the paper overall is fairly clear, Section 6.3 and the corresponding figures (4, 9, and 10) are difficult to follow. I welcome clarification because, outside of a few lines in the discussion, I am having a hard time understanding which regions do show a multi-modal effect. Additionally, I think that the authors should emphasize whether they uncover expected unimodal effects in primary sensory cortices. In particular, we would expect that EVC would be predicted by visual models with no additional multi-modal contribution and similarly in AC but for auditory models. I am having trouble determining whether that is the case from the figures, but if it is not the case, it would lend more weight to my major concern about differences between models beyond modality.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multimodal,\ntitle={Multi-modal brain encoding models for multi-modal stimuli},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0dELcFHig2},\nnote={under review}\n}" }, "abstract": { "value": "Despite participants engaging in unimodal stimuli, such as watching images or silent videos, recent work has demonstrated that multi-modal Transformer models can predict visual brain activity impressively well, even with incongruent modality representations. This raises the question of how accurately these multi-modal models can predict brain activity when participants are engaged in multi-modal stimuli. As these models grow increasingly popular, their use in studying neural activity provides insights into how our brains respond to such multi-modal naturalistic stimuli, i.e., where it separates and integrates information across modalities through a hierarchy of early sensory regions to higher cognition (language regions). We investigate this question by using multiple unimodal and two types of multi-modal models—cross-modal and jointly pretrained—to determine which type of models is more relevant to fMRI brain activity when participants are engaged in watching movies (videos with audio). We observe that both types of multi-modal models show improved alignment in several language and visual regions. This study also helps in identifying which brain regions process unimodal versus multi-modal information. We further investigate the contribution of each modality to multi-modal alignment by carefully removing unimodal features one by one from multi-modal representations, and find that there is additional information beyond the unimodal embeddings that is processed in the visual and language regions. 
Based on this investigation, we find that for cross-modal models, brain alignment is partially attributed to the video modality, while for jointly pretrained models, it is partially attributed to both the video and audio modalities. These findings serve as strong motivation for the neuroscience community to investigate the interpretability of these models for deepening our understanding of multi-modal information processing in the brain." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "brain encoding", "fMRI", "multi-modal models", "multi-modal stimuli", "Transformers", "videos", "speech", "language" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3f4478206dfab03a2644be4c503226053835f10a.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Multi-modal brain encoding models for multi-modal stimuli" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0e26yMOCbd
CHARGE DIRICHLET ENERGY: Geometric Perspectives on Over-smoothing in Deep Graph Neural Networks
main
Active
Graph Neural Network;Over-smoothing;Dirichlet energy
learning on graphs and other geometries & topologies
3;3;3;3;5
5;3;5;4;4
2;2;2;2;3
1;3;1;1;2
2;1;2;2;2
3.4
4.2
2.2
1.6
1.8
-0.133631
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The layer propagation rule shows strong similarity to EGNN's Lower-bounded Residual Connection [1]. The paper needs to better elaborate on the key differences between these approaches.\n\n2. While EGNN appears as a baseline in Table 2, it is missing from the comprehensive comparison in Table 1. This makes it difficult to fully assess CDE-GNN's performance against this closely related method.\n\n3. Figure 1 analyzes the Dirichlet energy and edge space length for GAT, but lacks a corresponding visualization showing how CDE-GNN's Dirichlet energy behaves across different layers. Adding this visualization would help demonstrate the effectiveness of the proposed method in preventing energy decay.\n\n[1] Zhou et al. Dirichlet energy constrained learning for deep graph neural networks. Advances in Neural Information Processing Systems, 34, 2021." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Well-structured presentation progressing from problem motivation to theoretical analysis to practical solution.\n2. Comprehensive empirical validation across multiple datasets." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the problem of over-smoothing in deep GNNs, where node embeddings become indistinguishable as network depth increases, leading to degraded performance. The authors present a novel geometric perspective on this issue and propose a method called Charge Dirichlet Energy (CDE-GNN). The authors validate their method through comprehensive experiments across various datasets and network depths, showing consistent performance improvements over baseline models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited novelty: using Dirichlet energy to overcome over-smoothing has been extensively studied.\n2. Lack of detailed analysis of computational overhead compared to baseline methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How does the model perform on large graph datasets, such as ogbn-products and ogbn-proteins?\n\n2. Can the authors provide validation on whether the over-smoothing problem is indeed addressed in practice on the experimental datasets?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.
The paper is well motivated and studies an important problem from an interesting perspective.\n\n2. The paper is generally well written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the over-smoothing problem of deep graph neural networks and proposes a geometric perspective for addressing over-smoothing. Specifically, the authors analyze the Dirichlet energy minimized by the feed-forward computation process of GCNs and propose a new method based on Dirichlet energy for resolving over-smoothing when the layer number increases. Experiments on small datasets demonstrate the effectiveness of the model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method has limited novelty given existing works that have explored similar ideas and model designs, e.g., [1,2]. Adding self-loops or residual links, or strengthening the information of the center nodes, has been extensively used by existing GNN models, such as the well-known ones [1, 2].\n\n2. The theoretical results are not new and have been derived in the literature, e.g., [3, 4]. The result of Lemma 1 has been proved in [3] and [4]. Besides, the analysis presented in this paper only shows the result that is already well-known, i.e., over-smoothing will happen when the layer number increases. There is no analysis of why and how the proposed model can address over-smoothing.\n\n3. The experimental evaluation is limited to small datasets and comparison with state-of-the-art methods is insufficient. More experiments on large datasets such as ogbn-products and ogbn-proteins are suggested.
More comparison with state-of-the-art GNNs, especially the ones that can overcome over-smoothing, e.g., GCNII, is needed.\n\n[1] Simple and Deep Graph Convolutional Networks, ICML 2020\n\n[2] Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR 2019\n\n[3] A note on over-smoothing for graph neural networks. arXiv 2020\n\n[4] Dirichlet Energy Constrained Learning for Deep Graph Neural Networks, NeurIPS 2021" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How could Eq. (8) help preserve the Dirichlet energy of each layer? By simply multiplying $X^{(l)}$ with a scalar and adding that before applying the nonlinearity, how would it guarantee the Dirichlet energy is not vanishing?\n\n2. How does the approach compare with others like GCNII in terms of preserving Dirichlet energy? Intuitively, adding initial residuals can already effectively preserve Dirichlet energy. Why does the proposed approach add the additional term before the nonlinearity, and why is the embedding of the previous layer used instead of the initial layer?\n\n3. Why is the performance of all models on ogbn-arxiv much lower than officially reported results?\n\n4. Are there any plots of the layer-wise Dirichlet energy of the proposed model as well as some baselines (e.g., GCNII [1]) on these benchmarks? How does Dirichlet energy connect to actual performance?
This would be important to help readers gain more intuition and also help understand the efficacy of the proposed approach.\n\n5. What is the rationale for introducing the edge space (Definition 3.2), and how does it play a role in justifying the proposed method?\n\n\n[1] Chen et al. Simple and Deep Graph Convolutional Networks. ICML'20." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The problem tackled is important and this paper approaches it through the concept of Dirichlet energy.\n\n2. The method seems to offer empirical enhancements on various benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper discusses an approach to alleviate over-smoothing of deep graph neural networks through the lens of Dirichlet energy. The idea lies in adding one additional term in the layer-wise propagation which takes into account the Dirichlet energy of the initial graph. Experiments have been conducted on several node classification benchmarks showing that the model can yield better performance with increasing depth of the graph neural network." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Though much effort has been devoted to discussing Dirichlet energy, how the proposed approach is linked to preserving Dirichlet energy still remains very unclear. Eq. (8) was introduced alone, while more theoretical investigations and experimental observations should be incorporated.\n\n2. The presentation needs improvement. Wordy sentences appear at times, with many of them constantly repeated; e.g., the content in Sec. 4.1 has been discussed multiple times in the previous sections and should be simplified for more informative content.\n\n3.
Some concepts were introduced confusingly and do not exhibit a strong connection to the proposed approach. For instance, how Proposition 3.2 is related to Eq. (8) (e.g., how Eq. (8) helps address the vanishing problem) is unclear. There is also no clear reason for introducing the edge space (Definition 3.2), and Corollary 3.1 conveys limited information.\n\n4. Experiment settings are not convincing. For example, the reported performance of the baselines is remarkably lower than the official leaderboard of ogbn-arxiv (https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv).\n\nMinor:\n\nMisuse of citet vs citep in multiple places hinders readability. The authors are encouraged to correct these presentation issues.\n\nOverall, the paper in its current shape is unsatisfactory in justifying the rationale of the proposed approach, which is simply Eq. (8), and relevant discussions in both theory and experiment are missing, making it less self-consistent." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. In Equation (5), why is the sum over the edges taken twice?\n\n2. In Line 229-230, should Equation 9 actually refer to Equation 7?\n\n3. In Line 080, the authors state that the energy lower bound is learnable. However, as in Section 4.2.1, it is fixed as the initial energy. How is it learnable?\n\n4.
In Line 304, it says the initial energy is multiplied by the initial embeddings, whereas in Equation 8 it is multiplied by the embeddings per layer. Which one is correct?\n\n5. In Table 1, why are there two bolded results for Citeseer and Film? Also, are those results for the semi-supervised or fully-supervised setting? It says in Line 334 and 370 that both settings’ results are in Table 1 but there are no marks for different settings.\n\n6. Several results in Table 2 and 3 don’t match. For example, on the Physics dataset with 32 layers, the optimal accuracy is 94.2 and 94.4, respectively. Why?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper proposed a very intuitive and generally applicable idea to effectively alleviate the over-smoothing problem for GNNs.\n2. The proposed idea is well-motivated by solid theoretical insights and results.\n3. The paper is easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Inspired by previous work on over-smoothing and Dirichlet energy, the authors propose a simple, intuitive, and generally applicable method named CDE-GNN to address the over-smoothing problem in Graph Neural Networks. The proposed approach is validated by theoretical and empirical results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some theoretical terms in Section 3 are not sufficiently introduced and explained. The authors are suggested to provide more detailed background and solid definition of the quantities involved.\n\n2. Several parts are repetitive or even inconsistent. For example, Section 4.1 is redundant and repetitive of the earlier content, as well as the Hyperparameter Analysis with its three following paragraphs in Section 5.2. 
Please refer to the Questions for details.\n\n3. The hyperparameter analysis is not insightful. The study of different activation functions, hidden dimensions, and dropout rates is old-fashioned and not unique to the proposed approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**General Concerns**\n\n- Is $\\alpha$ in Equation(8) a learnable parameter or a manually defined hyper-parameter? The authors did not clarify this.\n- What are the exact values of $\\alpha$ and $E_{init}$ for each dataset in the experiments? These details are not mentioned in either the main text or the supplementary materials.\n\n- If $\\alpha$ is a hyper-parameter, the model's performance under different $\\alpha$ settings should be reported. Additionally, a discussion on how to select an appropriate $\\alpha$ would provide valuable insights for the community.\n- Why not include a comparison with EGNN in Table 1?\n- Why not use SReLU as the activation function in Section 5.2, given that it has been demonstrated in EGNN[1] for preserving Dirichlet energy, especially on large datasets like OGBN-arxiv?\n- It would be helpful if the authors could clarify the source of the baseline model performances in Table 1. 
Specifically, did they conduct all the experiments themselves, or were some of the results sourced from other papers?\n- Given that the benchmark results are based on 10 random splits, would it be possible to provide the standard deviation in addition to the mean? This could offer a more comprehensive understanding of the results.\n- It would be better to include publicly available code to ensure reproducibility;\n\n\\[1\\] K. Zhou et al., “Dirichlet energy constrained learning for deep graph neural networks,” in Advances in Neural Information Processing Systems, M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. W. Vaughan, Eds., Curran Associates, Inc., 2021, pp. 21834–21846.\n\n\\[2\\] J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra, “Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs,” in *Advances in Neural Information Processing Systems*, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds., Curran Associates, Inc., 2020, pp. 7793–7804.\n\n**Ambiguous statement**\n\n- In Equation (8), $E_{init}$ is **multiplied by $X^{(l)}$**, while in Line 304 it states **multiplied by $X^{(0)}$**.\n\n > Line 304\n > The initial Dirichlet energy Einit captures the geometric information of the original graph and, when multiplied by the initial node embeddings X(0), ensures that each layer’s update process retains the topological features of the original graph.\n\n- Data split question. What are the actual splits used for the Cora, Citeseer, and PubMed datasets? If Yang's split was adopted, why not use the same split as Geom-GCN for consistency?\n > Line 326\n >\n > For this study, we use the **Cora, Citeseer, and Pubmed** datasets Sen et al. (2008), **following the standard training/validation/test splits established by Yang et al.
(2016)**\n >\n > Line 367\n >\n > We apply our model to datasets including **Cora, Citeseer, Pubmed**, Chameleon Rozemberczki et al. (2021), Film, Cornell, Texas, and Wisconsin, **following consistent splits of 48%, 32%, and 20%** for training, validation, and testing, respectively.\n\n**Minor comments**\n\n- Tables 1, 2, and 3 are out of the page width;\n- In the introduction section, the authors use the notation `a minimum Dirichlet energy ω`. But in the following text, $\\omega$ is no longer used; instead, $E_{init}$ is used. A consistent notation across the whole paper would be better;\n- There is a mistake in Equation (5): $\\mathcal{E}(f)$ is already a summation over the (i,j) pairs and cannot be summed again." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors revealed that the Dirichlet energy decay in deep GNNs is linearly proportional to the edge space sum with a constant $c$.\n\nThe authors showed that, within the Dirichlet energy analytic framework, it is crucial for the activation function $\\phi$ to satisfy $\\phi(0)=0$ in Proposition 3.2. This may indirectly explain why adding a shift $b$ in SReLU is effective.\n\nThe minimum Dirichlet energy constrained message updating scheme showed good performance.\nExtensive benchmark performance comparisons were conducted." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new understanding of Dirichlet energy in the context of GNNs, revealing the relationship between Dirichlet energy decay and edge space collapse. \n\nThe paper also introduces a new message updating scheme, which prevents over-smoothing by incorporating a residual term weighted by the minimum Dirichlet energy. The authors also conducted extensive comparison experiments demonstrating its effectiveness."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Lack of novelty**\n\nThis paper is a direct follow-up of [1].\n\nConsider Equation (8) in [1]:\n\n$X^{(k)}=\\sigma\\left(\\left[\\left(1-c_{\\min }\\right) \\tilde{P} X^{(k-1)}+\\alpha X^{(k-1)}+\\beta X^{(0)}\\right] W^{(k)}\\right)$,\n\nwhere $\\alpha+\\beta=c_{\\text{min}}$.\n\nSetting $\\beta=0$, we can rephrase the equation as\n\n$X^{(k)}=\\sigma\\left(\\left[ \\tilde{P} X^{(k-1)}+ \\frac{c_{\\text{min}}}{\\left(1-c_{\\min }\\right)} X^{(k-1)} \\right]W^{(k)}\\right)$.\n\nReplacing the symbols $\\frac{1}{1-c_{\\text{min}}} \\to \\alpha$ and $c_{\\text{min}} \\to E_{init}$, we immediately obtain:\n\n$X^{(k+1)}=\\sigma( \\tilde{P} X^{(k)}W^{(k)} + \\alpha E_{init} X^{(k)}W^{(k)})$.\n\nComparing to Equation (8) in this paper:\n\n$X^{(l+1)}=\\sigma\\left(\\tilde{\\mathbf{L}} X^{(l)} \\mathbf{W}^{(l)}+\\alpha E_{\\text {init }} X^{(l)}\\right)$\n\nThese two update functions are remarkably similar, except for a weight matrix.\n\nIt has been demonstrated in [1] that utilizing two distinct forms of residual connections can effectively constrain the lower bound of the Dirichlet energy, and that the constraint's intensity can be modulated by a gating parameter.\n\nThe approach presented in this paper appears to fit gracefully within this previously established framework, representing the specific instance where $\\beta=0$.\n\nThe main contribution is addressing the issue that researchers do not know how to choose an appropriate lower bound $c_{\\text{min}}$ for different datasets.
In this paper, by contrast, the authors suggest that simply using the initial Dirichlet energy $E_{\\text{init}}$ as the lower bound works very well.\n\nThe authors need to clarify how their method distinguishes itself from or enhances the previous approach.\nIt would be more valuable if the authors could elaborate on the significance of omitting the weight matrix in the update function and the rationale behind selecting the initial Dirichlet energy as the lower bound among various initial value choices.\n\n[1] K. Zhou *et al.*, “Dirichlet energy constrained learning for deep graph neural networks,” in *Advances in Neural Information Processing Systems*, M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. W. Vaughan, Eds., Curran Associates, Inc., 2021, pp. 21834–21846.\n\n**Lack of experiment**\n\n- Dirichlet energy visualization. Plotting the Dirichlet energy of each layer with and without the $E_{init}$ term would be more persuasive. As stated in Section 4.2, 'The initial Dirichlet energy serves as a lower bound for the Dirichlet energy,' it is expected that $E_{\\text{Dirichlet}}$ will converge to the lower bound $E_{init}$ as the number of layers increases.\n\n- The claim that $E_{init}$ prevents topological collapse in Section 4.2 should be supported by experimental evidence. Visualizing the node representations in the final layer and comparing them to the initial topology would be helpful. Consider using commonly employed techniques, such as t-SNE visualization\\[2\\]\\[3\\] or a color-propagation test[4].\n\n\\[2\\] D. Shen, C. Qin, Q. Zhang, H. Zhu, and H. Xiong, “Handling over-smoothing and over-squashing in graph convolution with maximization operation,” *IEEE Trans. Neural Netw. Learn. Syst.*, pp. 1–14, 2024, doi: [10.1109/TNNLS.2024.3442270](https://doi.org/10.1109/TNNLS.2024.3442270).\n\n\\[3\\] M. Liu, H. Gao, and S. Ji, “Towards Deeper Graph Neural Networks,” in *KDD*, Aug. 2020, pp. 338–348.
doi: [10.1145/3394486.3403076](https://doi.org/10.1145/3394486.3403076).\n\n\\[4\\]K. Xu, C. Li, Y. Tian, T. Sonobe, K. Kawarabayashi, and S. Jegelka, “Representation learning on graphs with jumping knowledge networks,” in *International conference on machine learning*, PMLR, 2018, pp. 5453–5462." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024charge,\ntitle={{CHARGE} {DIRICHLET} {ENERGY}: Geometric Perspectives on Over-smoothing in Deep Graph Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0e26yMOCbd},\nnote={under review}\n}" }, "abstract": { "value": "Over-smoothing is regarded as a key issue affecting the performance of deep Graph Neural Networks (GNNs). As the number of GNN layers increases, model performance degrades significantly, due to node embeddings converging into indistinguishable vectors. This phenomenon stems from the recursive aggregation of neighbor node representations, which impairs the distinguishability of node embeddings. From an energy perspective, this is associated with the convergence of node embeddings to a fixed point solution during the minimization of Dirichlet energy, hindering the model's ability to learn underlying geometric structures. While Graph Convolutional Networks (GCNs) have achieved success in modeling graph-structured data, there is still insufficient understanding of how the underlying geometry contributes to the trainability of deep GCNs.\nIn this paper, we present a novel geometric perspective to understand the poor performance of deep GCNs during training, a method called Charge Dirichlet Energy (\\model). We argue that maintaining a healthy geometric structure can significantly enhance the trainability of GCNs and enable state-of-the-art performance, even in base GCN architectures. 
Subsequently, we analyze the importance and feasibility of learning geometric shapes, demonstrating the critical role of geometric information in training deep GNNs. Extensive empirical validation on multiple benchmark datasets shows that our method improves the geometric shape of deep base GCNs, significantly enhancing their performance and outperforming many state-of-the-art methods in competitive settings. Our contributions include not only a new approach to mitigating over-smoothing and over-compression but also comprehensive theoretical and empirical verification of the importance of geometric structures for the trainability of deep GNNs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph Neural Network", "Over-smoothing", "Dirichlet energy" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ff9451b67025a24d1df2e8799f44a91ee720c765.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c72f46c6226ed60e3749a3d2f277be122db8d0e4.pdf" }, "title": { "value": "CHARGE DIRICHLET ENERGY: Geometric Perspectives on Over-smoothing in Deep Graph Neural Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0e2pcSxQJS
PN-GAIL: Leveraging Non-optimal Information from Imperfect Demonstrations
main
Active
Generative adversarial imitation learning;imperfect demonstrations;reinforcement learning
reinforcement learning
5;6;6;8
3;4;4;4
3;3;3;4
3;3;3;3
3;3;4;4
6.25
3.75
3.25
3
3.5
0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "I have no ethical concerns on this work." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Q1. In contrast to scenarios where $\\pi_1$ dominates, how would the results in Figure 2 be affected if $\\pi_{opt}$ were dominant, a condition under which existing methods are known to perform well? Results for PN-GAIL in the Pendulum-v1 task with various $\\pi_{opt}:\\pi_1$ ratios are presented in Figure 7 of the supplementary material, but a more systematic comparison between PN-GAIL and baseline methods could strengthen the manuscript. If PN-GAIL demonstrates robust performance and outperforms the baseline methods in such scenarios, it would confirm the method's reliability. This evidence would support the assertion that PN-GAIL performs consistently well across a range of dataset quality distributions, thus setting it apart from existing methods.\n\nQ2. Instead of modifying the $\\pi_{opt}:\\pi_1$ ratio while maintaining a fixed total number of demonstrations, what would occur if the optimal demonstrations was kept invariant while the number of suboptimal demonstrations was varied? This setup would illustrate how PN-GAIL effectively utilizes additional suboptimal demonstrations, isolating the influence of optimal demonstrations on imitation performance. It would offer valuable insights into PN-GAIL’s ability to adapt to and leverage diverse demonstration qualities effectively." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This work is well-motivated and effectively addresses the challenges presented in the previous study using sound methods.\n\nA notable strength of this work is its practice-oriented design of the objective functions, which enhances applicability in real-world scenarios.\nThis study removes the dependence on $\\eta$, the class prior $p(y=0)$ for imperfect demonstration datasets, from the primary objective. \nSince $\\eta$ is generally unknown and challenging to estimate, prior work treated it as a hyperparameter, requiring practitioners to invest substantial effort in tuning it. \nBy eliminating this reliance, the proposed approach reduces the overhead associated with hyperparameter optimization.\n\nAdditionally, the authors introduce near-optimal and practical choices for the parameters $\\alpha$ and $\\beta$ in the Balanced Semi-Confidence (BSC) objective, which can be straightforwardly calculated based on the dataset sizes $n_c$ (confidence-labeled) and $n_u$ (unlabeled). \nThis adjustment simplifies the implementation process and supports the broader applicability of imitation learning with imperfect demonstrations in practical settings.\n\nFurthermore, the manuscript includes theoretical analyses showing that (i) the proposed objective helps avoid the imitation of non-optimal data and (ii) derives a sample complexity bound for the BSC method, providing a rigorous foundation for the proposed improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work addresses imitation learning from imperfect demonstrations, utilizing both confidence-labeled noisy demonstrations and unlabeled noisy demonstrations. 
It aims to overcome two primary limitations of prior work, specifically 2IWIL [A], a representative approach to this problem that employs a two-step learning process: (i) semi-confidence labeler training on the unlabeled dataset, and (ii) confidence-based generative imitation learning.\n\nThe proposed method, PN-GAIL (Positive-Negative GAIL), tackles the limitations of 2IWIL as follows:\n\n1. **Incorporating Negative Risk into the Objective**: 2IWIL overlooks the negative risk associated with imperfect demonstrations, leading the discriminator to disproportionately prioritize the positive risk of frequent samples. PN-GAIL addresses this by incorporating both positive and negative risks into the confidence-based imitation learning objective, ensuring a more reliable evaluation regarding demonstration quality.\n2. **Balanced Semi-Confidence (BSC) Classification**: In 2IWIL, semi-confidence (SC) classification is used to train a confidence labeler for unlabeled demonstrations. However, SC classification tends to overestimate the confidence of labeled data and underestimate the confidence of unlabeled data. To address this, PN-GAIL introduces a balanced semi-confidence (BSC) objective and further suggests near-optimal values for hyperparameters $\\alpha$ and $\\beta$, enhancing practical applicability.\n\n[A] Wu et al., \"Imitation Learning from Imperfect Demonstration,\" ICML 2019." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite the many strengths of this work, the empirical results presented in the manuscript do not stand out as particularly impressive.\n\nSpecifically, Figure 1 shows that the performance difference between PN-GAIL and the most competitive baseline across tasks is not significant.
In my opinion, as discussed in Section 2, since baseline methods typically assume a dominant proportion of $\\pi_{opt}$, exploring scenarios with a more skewed ratio (e.g., $\\pi_{opt}:\\pi_1=1:10$) might provide more notable results where conventional methods fail while PN-GAIL succeeds. \nI think conducting experiments with more extreme demonstration ratios could more clearly demonstrate the scenarios in which PN-GAIL offers a distinct advantage over baseline methods.\n\n**[Minor Comments]**\n1. For Figures 2, 3, and 4, using distinct line colors for different methods would enhance readability.\n2. In the ablation study presented in Figure 3, it would be advantageous to include results from 2IWIL—even though they are already provided in Figure 2. Since 2IWIL represents a variant of PN-GAIL that excludes both the PN objective and the BSC, its direct comparison within the same figure would clarify the incremental benefits of these components." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Theorem 1’s bound relies on knowing the variances. However, the variances can be difficult to estimate in real-world applications.
Given this dependency, how practically useful is the bound in scenarios where variance values are unknown or hard to assess?\n* Theorem 2’s bound may become quite loose if the Rademacher complexity is high, as is typical with deep and wide neural networks. Could this have negative implications for the method's reliability when using complex models?\n* Related to my disagreement, stated in the weaknesses section, with the claim that 'GAIL treats non-optimal demonstrations as if they were optimal', could you please provide a counterargument?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Originality: The use of positive and negative risks to manage imperfect demos tackles some of the real-world problems of non-optimal data in a practical way.\n* Quality: The paper provides theoretical analysis for the positive-negative risk approach and shows experimental results across multiple control tasks.\n* Clarity: Most theoretical ideas are clearly presented with good notation.\n* Significance: The approach is relevant for real-world applications. It might have a real impact on the usability of IL in practical scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an IL method that handles imperfect demos by including both the optimal and non-optimal data in training, assuming that the demos come with a confidence score. The goal is to improve policy learning when demos aren’t perfectly optimal. The method assigns weights to optimal and non-optimal examples through a semi-supervised confidence classifier. Experiments on six control tasks show that PN-GAIL performs better than standard GAIL and other baselines under these imperfect conditions.
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The method’s reliance on confidence scores may limit its applicability when it’s difficult to assign confidence levels directly. Human annotation of preferences over trajectories, for instance, might be more feasible than assigning explicit confidence scores.\n* The fundamental motivation of this paper is based on a claim that 'GAIL treats non-optimal demonstrations as if they were optimal'. I disagree with this claim. GAIL’s objective is to minimize the JS divergence between the expert and agent trajectory distributions, aiming to reproduce the overall distribution of expert demonstrations. When the agent policy is close to the expert policy, the discriminator's output tends to be 0.5 everywhere. If expert demos include a mix of optimal and sub-optimal trajectories, GAIL should naturally capture this mixture without necessarily assuming optimality. Could you provide a counterargument or clarification on why PN-GAIL’s approach is necessary, given this perspective?\n* Lacks comparison with other advanced IL algorithms, such as f-IRL." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors highlight expert performance more prominently in the figures to enhance clarity and interpretability?\n\n2. 
Would varying values of $n_{u}$ and $n_{c}$ significantly impact the performance of the proposed method? Additionally, could the authors provide guidelines or criteria for selecting optimal values for these parameters in practice?\n\n3. Figure 1 offers a valuable comparison of the proposed method against GAIL and 2IWIL. To further support this comparison, it would be better if the authors could also provide more intuition for why GAIL and 2IWIL fail but the proposed method succeeds, e.g., gradient colors for different confidence scores and why 2IWIL fails to predict them. Additionally, why does GAIL predict 5.0 for some of these data points?\n\n4. To further contextualize this work within imitation learning (specifically, driver behavior imitation), it would be beneficial to incorporate additional relevant studies, such as:\n\n[2] Ruan, Kangrui, and Xuan Di. \"Learning human driving behaviors with sequential causal imitation learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 4. 2022.\n\n[3] Hawke, Jeffrey, et al. \"Urban driving with conditional imitation learning.\" 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020.\n\nand so on." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper introduces a new algorithm (based on 2IWIL and IC-GAIL) supported by theoretical analysis and demonstrates its effectiveness across multiple tasks. Experiments are extensive, including different benchmarks, different $\\pi_{OPT} : \\pi_{1}$ ratios, and different standard deviations of Gaussian noise." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose PN-GAIL, an extension of the GAIL framework designed to handle imperfect expert demonstrations.
By predicting confidence scores for unlabeled data, PN-GAIL allows for more accurate imitation learning without relying solely on optimal examples. Theoretical analysis is provided for the output of the optimal discriminator in the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the proposed method is based on [1] (e.g., some techniques used to prove Theorems 4.1 and 4.2 have been explored in [1]), this study still broadens the scope of [1]. One potential weakness is: although $\\alpha$ and $\\beta$ are intended to play distinct roles in Theorem 4.2, they are selected to be identical in Algorithm 1, which may affect the overall applicability of the algorithm.\n\n[1] Wu, Yueh-Hua, et al. \"Imitation learning from imperfect demonstration.\" International Conference on Machine Learning. PMLR, 2019." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. About clarification: PN-GAIL\\BSC: PN-GAIL without balanced semi-conf (BSC) classification--does this mean no classification is used, or that SC is used? Without a probabilistic classifier, how are confidence scores obtained?\n\n2. To verify that BSC outperforms SC in 2IWIL, straightforward comparisons are PN-GAIL(BSC) vs. PN-GAIL(switch to SC) vs. 2IWIL(switch to BSC) vs. 2IWIL(SC).
Why did you choose the comparisons presented in the paper instead?\n\n3. It is better to provide more detailed explanations and discussions regarding those figures. For example, the statement about Figure 3 is quite short. The audience would like to learn what possible reasons result in the varying performance patterns across six environments in Figure 3. We can observe that in some cases three colors (methods) are similar; in some cases blue and green are close; in some cases, blue and orange are close; while in some cases, blue works best. Could you provide a systematic analysis?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The main objective/purpose of this paper is articulated explicitly with the existing methods and their limitations highlighted clearly.\n2. The literature review on IL, GAIL, and IL with imperfect information is comprehensive. The preliminaries provide the essential information of 2IWIL.\n3. The theoretical derivations regarding the discriminator modification and classification refinement are straightforward and concise in the main body, which makes them easy for the audience to follow; meanwhile, the supplemental materials in the appendices provide necessary, detailed explanations.\n4. The experiments, set up with three goals, are tightly related to the main achievements that the paper wants to claim. The experiments are conducted with representative benchmarks across various environments, effectively showing the performances with well-formatted figures and tables in the main text.\n5. Overall, the authors identify a potential challenge of 2IWIL: certain non-optimal data with high frequencies in an unlabeled demonstration set can significantly affect reward and policy generation. 
The topic is likely to be of interest to a large proportion of the community. The proposed PN-GAIL creatively updates the previous methods to successfully remove limitations of prior IL results to a certain extent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Motivated by the limitation of 2IWIL in a type of scenario where certain non-optimal demonstrations have high probabilities of appearing in a set of imperfect (unlabeled) demonstrations, the paper proposes a new method named PN-GAIL, better leveraging optimal and non-optimal information from imperfect demonstrations to learn optimal reward and policy by assessing both positive and negative risks. Moreover, the paper modifies semi-conf classification in 2IWIL to establish balanced semi-conf classification to better handle the cases where certain demonstrations only exist in the confidence (known, labeled) data.\n\nThe authors conduct experiments, comparing their PN-GAIL to four baseline methods across six environments. The results show that PN-GAIL alleviates the impact of the unbalanced frequency in imperfect demonstrations, outperforms other methods, and maintains relatively good performances given the decreasing number of labels. Also, the outcomes demonstrate that the balanced semi-conf classifier improves performances, particularly in three out of six environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Missing definition: in Section 3, what δ represents. Please define δ when it is first introduced in the paper.\n\n2. Needing clarification: PN-GAIL\\BSC: PN-GAIL without balanced semi-conf (BSC) classification--does this mean no classification used or SC used? Without a probabilistic classifier, how to obtain confidence scores? Please explicitly state whether a SC classification is used, and explain how confidence scores are obtained if no classifier is used. \n\n3. 
Lack of analysis for experimental outcomes: it is necessary to provide more detailed explanations and discussions regarding those figures. For example, what possible reasons (e.g., characteristics of each environment? limited number of demonstrations?) result in the varying performance patterns across the six environments in Figure 3. We can observe that in some cases three colors (methods) are similar; in some cases blue and green are close; in some cases, blue and orange are close; while in some cases, blue works best. Please provide a systematic analysis of how different factors might contribute to these patterns." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a Positive-Negative Generative Adversarial Imitation Learning (PN-GAIL) method within the framework of Generative Adversarial Imitation Learning (GAIL) to leverage non-optimal information from imperfect demonstrations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024pngail,\ntitle={{PN}-{GAIL}: Leveraging Non-optimal Information from Imperfect Demonstrations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0e2pcSxQJS},\nnote={under review}\n}" }, "abstract": { "value": "Imitation learning aims at constructing an optimal policy by emulating expert demonstrations. However, the prevailing approaches in this domain typically presume that the demonstrations are optimal, an assumption that seldom holds true in the complexities of real-world applications. The data collected in practical scenarios often contains imperfections, encompassing both optimal and non-optimal examples. In this study, we propose Positive-Negative Generative Adversarial Imitation Learning (PN-GAIL), a novel approach that falls within the framework of Generative Adversarial Imitation Learning (GAIL). 
PN-GAIL innovatively leverages non-optimal information from imperfect demonstrations, allowing the discriminator to comprehensively assess the positive and negative risks associated with these demonstrations. Furthermore, it requires only a small subset of labeled confidence scores. Theoretical analysis indicates that PN-GAIL deviates from the non-optimal data while mimicking imperfect demonstrations. Experimental results demonstrate that PN-GAIL surpasses conventional baseline methods in dealing with imperfect demonstrations, thereby significantly augmenting the practical utility of imitation learning in real-world contexts. Our codes are available at https://anonymous.4open.science/r/PN-GAIL-3828." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Generative adversarial imitation learning", "imperfect demonstrations", "reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/42aad028d937cdabcffb5638874918e590128c5b.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "PN-GAIL: Leveraging Non-optimal Information from Imperfect Demonstrations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0eMsrRMmCw
Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM
main
Active
translation;low-resource;large language model
applications to computer vision, audio, language, and other modalities
5;6;8
3;5;4
3;3;3
2;3;3
2;4;3
6.333333
4
3
2.666667
3
0.327327
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. A more diverse set of low-resource languages in the experimental dataset would be helpful\n2. The impact of various auxiliary languages could be analyzed in more depth\n3. The prompt analysis could be improved" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Interesting research: introduces Mufu, a novel approach leveraging multilingual context and post-editing for low-resource language translation.\n2. Employs automatically generated candidates and instructions to correct translations, enhancing LLM's reasoning capability.\n3. Demonstrates robustness against poor-quality auxiliary translations, outperforming specialized NMT systems in many low-resource pairs.\n4. Proposes a hybrid learning paradigm, combining in-context learning and finetuning for improved translation quality.\n5. Implements knowledge distillation to reduce inference costs while maintaining performance gains in low-resource translations." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces \"Mufu\", which is a method for low-resource language translation using a multilingual fused learning approach, specifically targeting large language models (LLMs).\nThe Mufu method aims to address the challenge that large language models (LLMs) perform well in translating high-resource languages but still struggle with low-resource languages. The Mufu prompting approach turns the translation task into a post-editing task, leveraging the reasoning capabilities of LLMs with auxiliary translation candidates, requiring the model to assess input quality, align semantics cross-lingually, copy from relevant inputs, and override incorrect instances. Experiments show that LLMs fine-tuned with Mufu-style prompts achieve better performance than the NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs on the Flores-200 dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Experiment Method Optimization: Consider incorporating a more diverse set of low-resource languages in the experimental dataset to better generalize the findings and evaluate the model's performance across a wider linguistic spectrum.\n\n2. Experiment Conclusion Enhancement: Suggest conducting ablation studies to isolate the specific contributions of different components of Mufu, such as the impact of various auxiliary languages, to fine-tune the approach and maximize translation accuracy.\n\n3. 5-shot Prompting Improvement: Explore the use of meta-learning strategies in 5-shot prompting to enhance the model's ability to quickly adapt to new translation tasks with limited examples, potentially improving the efficiency of the learning process." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses for the questions." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- experimental results show some effectiveness of the proposed approach\n- the idea of leveraging multilinguality via the prompt sounds technically good" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles low-resource translation quality improvement in LLMs. To maximize data efficiency in the low-resource setting, the authors introduce a new approach called Mufu, including automatic selection of multilingual translation candidates and an instruction to correct inaccurate translations via the prompt. Experimental results on the Flores-200 dataset for English-XX directions show robustness and better performance than the NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some experimental details are unclear: how is the best prompt template for Mufu decided, and what is the impact of the language combination used in the prompt template? For example, have you tried adding high-resource language translation pairs during training to enhance multilingual training with both high- and low-resource language pairs?\n- The results are not convincing enough, perhaps because the low-resource setting yields only limited improvement in chrF. Can you report other metrics such as sacreBLEU scores? Have you tried finetuning the LLM with low-resource monolingual data so that the LLM can more effectively enhance Mufu?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Were there any accidental translations in a different language for Mufu{5,10,20}?\n- What exactly is Win% vs teacher? For instance, for NLLB 1.3B distilled, its chrF is 46.0 whereas that of the teacher is 43.7, yet its win% is 41.3? This implies NLLB 1.3B won against the teacher model less than 50% of the time, yet its chrF score is higher? Another example: Win% vs teacher is 56.2 for NLLB 54B MoE (48.9 chrF) whereas for mufulora20 with PaLM2 S it is 99% with chrF less than NLLB 54B MoE on FLORES 200. 
It would be great if the authors could formalise what Win% vs teacher means.\n- Can the authors explain In theory… model outputs (line 207-211)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is very clearly written and easy to follow.\n- They combine 2 interesting learning paradigms, ICL and parameter tuning, and their core focus is on very-low- and low-resource languages, which I really liked. \n- They perform evaluation on NTREX, which is important for ood evaluation.\n- The experiments performed by the authors are quite extensive. I especially liked mufu5hrl, mufu5tr, distilled, and lora, which corroborate their approach of selecting 5, 10, or 20 related languages from URIEL.\n- Quantitative evidence provided in Figure 3 is quite helpful in knowing how language transfer is taking place. Moreover, the attention analysis further helps in understanding how the attention pattern makes mufu models perform better." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Mufu, which turns translation into a post-editing task by providing auxiliary translations and a target translation from a teacher model. The student model learns in-context to produce the correct target translation and is then fine-tuned against references. Languages for auxiliary translations are chosen from URIEL, and they evaluate using PaLM S family models along with Gemma 2B,7B on FLORES 200 (iid), and NTREX (ood). The paper contains thorough ablation studies as well as cross-lingual attention alignment, which helps in understanding or interpreting how the model learns in-context." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- No model sizes are available for the PaLM2 family of models. 
I’m not sure how to compare them with Gemma or NLLB.\n- If I were to just compare on the basis of chrF score, only PaLM2 XXS-NLT and PaLM2 S are able to beat the NLLB 1.3B distilled model in both FLORES 200 and NTREX (and Gemma 7B on FLORES 200). The rest are all inferior to NLLB 1.3B distilled. One suggestion for the authors in this case would be to add a `Latency` column for all models (higher for mufu and lower for distilled models) to show the trade-off between accuracy and latency, which would help readers understand how competitive the other models are.\n- The authors have mentioned this, but finetuning an LLM (or even NLLB with 1B+ params) with just 787 sentences and in-context learning will definitely lead to overfitting, which is evidenced by the fact that mufu20lora performed better than full finetuning. I wonder if that is the case for other models too? \n- It’s great they used Gemma 2, an open-weight model, but I’m slightly disappointed that the majority of their experiments use PaLM2 models, which are not public like Gemma 2. \n- The two-iteration process (teacher model followed by student model) is quite expensive. The authors have mentioned that distillation helps to alleviate the problem, but it only worked for NTREX with PaLM2 XXS-NLT (not for Gemma 7B); performance on FLORES 200 for both distilled models is lower than NLLB 1.3B. \n- The authors experiment with one learning paradigm, i.e., in-context learning for LLMs, for distillation. Did they try distillation from model outputs (not the one fine-tuned with mufu20)? How much better or worse is in-context learning compared to vanilla distillation?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We finetune LLMs for translation by prompting with multilingual context in order to harness their cross-lingual reasoning, and substantially improve the models' performance in low-resource translation." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024mufu,\ntitle={Mufu: Multilingual Fused Learning for Low-Resource Translation with {LLM}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0eMsrRMmCw},\nnote={under review}\n}" }, "abstract": { "value": "Multilingual large language models (LLMs) are great translators, but this is largely limited to high-resource languages. For many LLMs, translating in and out of low-resource languages remains a challenging task. To maximize data efficiency in this low-resource setting, we introduce Mufu, which includes a selection of automatically generated multilingual candidates and an instruction to correct inaccurate translations in the prompt. Mufu prompts turn a translation task into a postediting one, and seek to harness the LLM’s reasoning capability with auxiliary translation candidates, from which the model is required to assess the input quality, align the semantics cross-lingually, copy from relevant inputs and override instances that are incorrect. Our experiments on En-XX translations over the Flores-200 dataset show LLMs finetuned against Mufu-style prompts are robust to poor quality auxiliary translation candidates, achieving performance superior to NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs. We then distill these models to reduce inference cost, while maintaining on average 3.1 chrF improvement over finetune-only baseline in low-resource translations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "translation", "low-resource", "large language model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/84c5f101f608bcec3450bc726c1edb69e63752a8.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0eRJRbVG95
Unraveling the Shift of Visual Information Flow in MLLMs: From Phased Interaction to Efficient Inference
main
Active
Multimodal Large Language Models;Visual Information Flow;Inference Acceleration
interpretability and explainable AI
3;3;5;5;6
4;3;3;4;3
2;1;3;3;2
2;2;3;2;3
1;3;3;3;3
4.4
3.4
2.2
2.4
2.6
-0.272166
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the 'weaknesses' part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe paper is well-written and easy to understand. Figures 1 and 6 are intuitive for understanding the overall framework.\n2.\tThe saliency technique used for analyzing the information flow among various tokens is interesting and intuitive. The conclusion that visual information injection dominates in shallow layers while intra-visual aggregation dominates in deeper layers makes sense.\n3.\tExperimental results demonstrate the effectiveness of the proposed method to some extent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper dives into how multimodal large language models (MLLMs) process and utilize visual information. Based on the widely used saliency technique for interpretability, information flow among different tokens across different layers is analyzed. The authors find that visual information injection dominates in shallow layers while intra-visual aggregation dominates in deeper layers. Finally, hierarchical image token pruning is proposed to prune at both shallow and deep layers with layer-specific criteria." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe phenomenon analyzed in the paper is not surprising, and previous works [1][2] have pointed out similar findings that the information of vision tokens migrates to the following text tokens within the first few layers of MLLMs. Thus, I think this paper has limited novelty, as it employs a commonly used technique to analyze a phenomenon that has already been identified.\n2.\tI wonder how the parameters K1 and K2 are determined. For different datasets and tasks, the parameters may be different. Directly setting K1 and K2 to a pre-defined value may not be suitable. Could K1 and K2 be dynamically adjusted based on the input samples?\n3.\tThe evaluation datasets used in the paper are quite limited. I suggest the authors evaluate on other commonly used datasets, especially OCR-related or fine-grained datasets, to demonstrate the effectiveness, e.g., textvqa, gqa, docvqa, chartqa, seed-bench. For the efficiency evaluation, I suggest the authors include inference time and GPU memory.\n4.\tTwo different criteria are used in shallow and deep layers. I wonder about the performance if the same criterion is used. If the performance is similar, the analysis of different information flow in shallow and deep layers is not very convincing. \n\n[1] An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Acceleration for VLLM Inference.\n\n[2] BOOSTING MULTIMODAL LARGE LANGUAGE MODELS WITH VISUAL TOKENS WITHDRAWAL FOR RAPID INFERENCE" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I noticed that VTM suggests that visual tokens are not essential in the deeper layers of MLLMs and strategically withdraw them at a certain layer. I'm very curious about the advantages of HiMAP compared to VTM." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written, with clear explanations and helpful visuals.\n\n2. This paper introduces intriguing hypotheses and includes extensive, detailed studies to support them.\n\n3. Extensive experiments confirm HiMAP’s effectiveness, showing reduced computational costs while preserving performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper identifies the minimal role of image tokens in MLLM predictions, uncovers patterns in visual-textual interactions, introduces HiMAP as a pruning technique to reduce inference latency without compromising performance, and demonstrates its effectiveness across diverse vision-language tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of ablation studies. For example, additional experiments could be included to examine the impact of various pruning strategies on the model and to assess the effects of different hyperparameter settings, such as K1, K2 and the ratio.\n\n2. The authors might consider including additional benchmarks, such as MME and AI2D, and presenting fine-grained performance scores. Additionally, it would be helpful to include metrics such as GPU memory and total time in the comparisons to provide a more comprehensive evaluation." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. As mentioned in Weakness #1.\n2. In Section 2.2, the authors derive two main factors based on insights 1) \"As the model depth exceeds...\" and 2) \"Instruction tokens exert the most...\". The first conclusion is undoubtedly correct, but the second remains questionable. Although much work has validated the redundancy of image tokens in MLLMs, the two insights provided in this paper do not directly lead to this conclusion. The \"limited impact of image tokens\" mentioned in the paper only supports the first conclusion, while the argument for the second conclusion would require a computation of saliency for each image token (assuming a length of N), and if the authors conducted this experiment, they would find that only some image tokens have high significance." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. Universality: As a universal image token pruning algorithm, HiMAP can be easily applied to different architectures of MLLMs to achieve accelerated inference.\n2. Usability: The method is straightforward, with low transfer costs.\n3. 
The saliency scores and dynamic pruning approach used in the paper can provide inspiration for the field of accelerated inference." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines the significance of image tokens in different layers of MLLMs, suggesting that image tokens tend to facilitate modality injection in shallow layers and engage in more internal interactions in deeper layers. Based on this analysis, the paper proposes HiMAP, an algorithm for dynamically pruning image tokens to accelerate inference, which has been validated for its effectiveness across various multimodal tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks line numbers, which seems to deviate from ICLR's submission standards and may hinder reviewers in accurately pinpointing issues within the document.\n2. The experiments are not comprehensive enough, with validation only on a limited number of tasks. As a universal solution, it should be tested on common multimodal benchmarks, such as LLava-Bench, MMBench, etc." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Does the type of task potentially influence the conclusion regarding the minimal importance of visual tokens, which account for only 0.03% of the significance of textual tokens? 
For example, the proportion of visual tokens may considerably decrease in multi-turn dialogue tasks. At the same time, their relative significance could increase due to the normalization of sequence length reflected in Equations 2, 3, and 4. \n* The insight presented in line 7 on page 4 seems counterintuitive. If the contribution of tokens from deeper layers to response prediction is low, why not leverage tokens from the shallow layer with the most significant contribution to generating responses? In this reviewer's opinion, only the comparison within each layer in Figure 2 carries practical significance.\n* The reviewer suggests evaluating the performance of HiMAP against the baseline on fine-grained perception tasks, such as document understanding and OCR (e.g., Chartqa [1] and Docvqa [2]). This would provide a more solid demonstration of HiMAP's efficacy in reducing redundant image tokens. \n\n[1] Chartqa: A benchmark for question answering about charts with visual and logical reasoning. \\\n[2] Docvqa: A dataset for vqa on document images." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* From the novel perspective of information flow, the authors have analyzed the fusion patterns of visual features in MLLMs. Building upon this analysis, they propose an adaptive approach for visual redundancy elimination.\n* The structural design of HiMAP is intuitive and demonstrates robust performance across a range of tasks, exemplified by image captioning and VQA.\n* The article is highly readable, featuring a well-defined and clear structure." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes that in MLLMs, image tokens primarily convey visual information to instruction tokens in shallow layers, while deeper layers consolidate the remaining visual data. 
Based on this insight, a plug-and-play visual pruning method, HiMAP, is proposed to reduce the computational costs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* In analyzing the information flow between visual tokens and textual tokens, it is essential to thoroughly examine the flow in both directions. Hypothesis H1 is valid only if it is determined that the primary flow of information occurs from the visual modality to the textual modality, rather than in the opposite direction. This requires conducting an experiment to compare the magnitudes of $S_{vt}$ in Equation 6 with $S_{tv}$.\n\n* Furthermore, merely showing a decline in performance by restricting the interaction between image tokens and instruction tokens in shallow layers does not sufficiently support Hypothesis H1. It is essential to complement this with an experiment that specifically limits the flow of intra-visual information within shallow layers (the current IMG2RND experiment in Figure 4 is not direct). Only when the resulting performance degradation is considerably less pronounced can Hypothesis H1 be adequately substantiated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please refer to weakness." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The paper is well motivated, authors design the HiMAP strategy corresponding to the behavior of how visual information is utilized in MLLM layers.\n\n2) HiMAP manages to reduce the computational costs by approximately 65% without sacrificing performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, authors begin by studying how visual tokens interact in MLLMs and observe that 1) image tokens interact strongly with text instruction tokens to form cross-modal representations in shallow layers; 2) image tokens aggregate remaining visual information in deeper layers. Upon this, they propose a token pruning inference strategy, HiMAP, for MLLM, by selecting the most important image tokens by image-text-attention scores in shallow layers and image-self-attention scores in deeper layers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The ablation studies are insufficient, e.g., different choices of K1, K2 , R1, R2; ablation for using different importance criteria on shallow layers and deep layers. \n\n2) The finding that LLMs may well process visual tokens in the early layers has already been proposed in previous works[1-3]. The stagewise token pruning strategy has also been proposed for efficient MLLM [3]. 
Consequently, the novelty of this paper is somewhat limited.\n\n3) More benchmarks for MLLM performance evaluation should be included to demonstrate the effectiveness of HiMAP, e.g., the widely used GQA, MME, MM-Vet, VQAv2, etc.\nThe paper should be drafted with line numbers on each page, and the writing of the paper should be improved.\n\n\n[1] An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Acceleration for VLLM Inference, in ECCV24.\n\n[2] DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs, in NeurIPS24.\n\n[3] LLaVolta: Efficient Multi-modal Models via Stage-wise Visual Context Compression, in NeurIPS24." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper uncovers a shift in visual information processing in MLLMs and introduces a novel image token pruning method to accelerate inference." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024unraveling,\ntitle={Unraveling the Shift of Visual Information Flow in {MLLM}s: From Phased Interaction to Efficient Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0eRJRbVG95},\nnote={under review}\n}" }, "abstract": { "value": "Multimodal large language models (MLLMs) improve performance on vision-language tasks by integrating visual features from pre-trained vision encoders into large language models (LLMs). However, how MLLMs process and utilize visual information remains unclear. 
In this paper, a shift in the dominant flow of visual information is uncovered: (1) in shallow layers, strong interactions are observed between image tokens and instruction tokens, where most visual information is injected into instruction tokens to form cross-modal semantic representations; (2) in deeper layers, image tokens primarily interact with each other, aggregating the remaining visual information to optimize semantic representations within the visual modality. Based on these insights, we propose Hierarchical Modality-Aware Pruning (HiMAP), a plug-and-play inference acceleration method that dynamically prunes image tokens at specific layers, reducing computational costs by approximately 65% without sacrificing performance. Our findings offer a new understanding of visual information processing in MLLMs and provide a state-of-the-art solution for efficient inference. Code is released at https://anonymous.4open.science/r/HiMAP." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Models", "Visual Information Flow", "Inference Acceleration" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b921e97813be3d37a1eaed29a01d6160817ed83f.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Unraveling the Shift of Visual Information Flow in MLLMs: From Phased Interaction to Efficient Inference" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0er6aOyXUD
Evaluating Robustness of Reward Models for Mathematical Reasoning
main
Active
mathematical reasoning;RLHF;reward models;reward overoptimization;language models;benchmark
datasets and benchmarks
3;5;5;5;6
4;3;4;4;4
2;2;2;2;2
1;2;3;2;2
3;3;2;3;2
4.8
3.8
2
2
2.6
-0.102062
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* RewardMath is based on the dataset MATH500, where does the dataset MATH500 come from? Is MATH500 prior work (and if yes the citation is missing) or is this a contribution of the paper (in this case it should be made clear).\n* Does MATH500 address the incorrect annotation problem found in PRM800K? \n* Can authors also show evaluations and ablations on gsm8k [1] and MATH [2] which are the most common eval tasks for LLM reasoning capabilities?\n* Authors identify that in RewardBench the accepted response often has less steps than rejected ones, which could give a chance for models to reward hack (i.e. reward relies on the number of steps instead of the actual response quality). Did the authors ablate this? I.e. Does the reward-hacking model predict lower reward if we make the accepted response in RewardMath longer? And vice versa, does it predict higher reward if we make the rejected response shorter? \n* Regarding RewardMath giving more than one rejected responses: If one is trying to do preference learning using a llama model as the base model, is it important for the reward model to know the rejected response generated by a non-llama model should be worse than the accepted response? I.e. the distribution could be very different that it never encounters it during preference learning. I.e. 
for Figure 4, if we use a Llama model as the policy, does one-to-many RewardMath still do better than one-to-one RewardMath chosen & Llama rejection?\n* Do reward-model-free alignment methods like DPO also suffer from the reward model overfitting problem? What is the advantage of using reward models over reward-model-free methods for reasoning tasks? \n* Does the benchmark evaluate cases where both the rejected and chosen responses arrive at the same answer, but the rejected answer has the wrong steps? This is common for true-or-false questions. \n\n\n[1] Training Verifiers to Solve Math Word Problems\n\n[2] Measuring mathematical problem solving with the math dataset" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The topic of how to improve LLM reasoning capabilities has recently gained a lot of attention. This paper focuses on having good benchmarks for evaluating these efforts, which could be very impactful if done correctly. \n\n* The authors identify flaws of existing benchmarks and make good efforts to fix them. \n\n* The paper has good results; specifically, Figure 4 is very cool, showing RewardMath has a stronger correlation with downstream tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors aim to design a better benchmark for evaluating reward models in reasoning tasks. They identify problems with the previous benchmark RewardBench and propose RewardMATH:\n* RewardBench is based on PRM800K, which contains many wrong annotations; RewardMATH is instead based on MATH500.\n* RewardBench uses pairs of (human annotated example, LLM annotated incorrect example). 
RewardMath includes more than one incorrect example.\n* RewardBench’s accepted and rejected examples have different numbers of steps, which could be a spurious correlation that leads to reward hacking. RewardMath fixes this.\n* RewardBench’s PRM evals use the product instead of the mean, which biases toward shorter responses; the authors fix this by using the mean instead. \n\nTo demonstrate the improvements of RewardMATH, 1) the authors compare the performance of different reward models on both RewardBench and RewardMATH, and show RewardMATH has a higher correlation with downstream evals; 2) the authors show RewardMATH exhibits the common trend of larger dataset -> better performance, while RewardBench does not." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "See questions I have below" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- As the proxy reward model trained from synthetic data shows far-from-optimal performance in Table 3 (only around 13% for RewardMATH and 69% for RewardBench), can you consider using better proxy RMs? The increase from 12.68 to 13.51 does not convince me that this is a strong trend.\n\n- In Figure 6, the gold reward (or even the oracle reward) does not drop for most cases even with the maximum KL distance considered. 
If a larger N is considered for BoN sampling, will the graph drop down as in Gao et al. [2]? For a larger N, is RewardMATH still successful in detecting more robust reward models regarding the overoptimization problem?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper provides clear and sufficient empirical evidence that their RewardMATH benchmark is more reliable than the math subset of RewardBench [1]. The empirical results are also clear, as a policy using BoN with high-scoring reward models from RewardBench shows little to no correlation with the performance increase on math benchmarks (r-square = 0-0.1), while RewardMATH shows a much stronger correlation (r-square = 0.6-0.8) in Figure 3.\n- The authors have evaluated diverse reward models on RewardMATH, including LLMs (generative reward models), classifier-based reward models, and process reward models.\n- The paper considers the problem of over-optimization using a synthetic setup of gold RMs and proxy RMs. \n\n[1] https://arxiv.org/abs/2403.13787" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes RewardMATH, a reward model evaluation benchmark focused on the math domain. This benchmark adopts a one-to-many comparison to evaluate reward models robustly. Specifically, it provides 10 responses for each prompt, where only 1 response is correct. The evaluated reward model is considered accurate only when the correct response is given the highest score among all 10 responses. \nThe authors also provide sufficient empirical evidence that the RewardMATH benchmark offers a reliable estimate of whether reward models yield policies that, when optimized using BoN, are indeed robust on math benchmarks.
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The work would be more interesting if the authors showed any other domains (such as coding or text summarisation or maybe safety) reward model benchmark can be improved by the framework proposed here (by adopting multiple responses and using diverse LLMs to generate outputs). Any initial or limited experiments would be helpful. \n\n- The lack of PPO (or DPO) usage for policy fine-tuning in experiments seems like a major weakness. The main contribution of this paper is using policy fine-tuning methods to verify if the RewardMATH benchmark scores correlate with the signals it provides during policy fine-tuning. I agree with this approach and am impressed by the number of experiments conducted to verify this using mainly Best-of-N sampling. However, Best-of-N sampling is an inference time method to generate better model outputs using reward models, whereas PPO (or possibly DPO) is the main fine-tuning method researchers use. Although Figure 5 does show a PPO experiment under a synthetic setup, the number of checkpoints or whether the dots follow the findings from Gao et al [2] is not clear to me. Without any solid PPO results, Best of N sampling seems not enough to verify the benchmark's capability of measuring the robustness of reward models. The work will be much more convincing if the authors show more PPO-trained policy evaluations. Or at least, it will be helpful if the author provides more context as to why PPO is hard to train in their non-synthetic setup. Also, I suspect high-scoring reward models on RewardMATH have the ability to find the best response from multiple responses, and Best-of-N adopts a very similar way as it picks the response with the highest reward, resulting in a high correlation of results. Whether this ability will generalize even on PPO setups is not clear to me at this point. 
\n\n- Experimental results in Figure 6 compare diverse RMs on both the RewardBench and RewardMATH benchmarks with gold or oracle rewards. It would be nice if the authors provided not only the numbers but also a statistical analysis (such as Kendall's tau) that measures the agreement between RewardMATH (or RewardBench) and oracle (or gold) reward scores in Figure 6. \n\n[2] https://arxiv.org/abs/2210.10760" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. [Clarification] Are the prompts used to evaluate the LLM judge on REWARDMATH the same as the prompts used to evaluate the LLM judge on RewardBench? Different prompting strategies (e.g., different system prompts) raise concerns regarding a fair comparison between the two benchmarks.\n2. [Clarification] What is MATH500? The authors did not mention the details behind this dataset, which they used for their benchmark. Were any steps taken to ensure the dataset is not contaminated with the models being evaluated? If the dataset was used during training of any of the evaluated RMs, the benchmark’s reliability would be undermined. \n3. [Clarification] What was the motivation behind the different parts of the Synthetic Data experiment? What was the reasoning behind using the MetaMATH dataset? Why was only 80K out of the 155K data points augmented from MATH used for training? \n4. 
The authors did not mention how the incorrect solutions are ensured to be actually incorrect. Were there steps taken to validate the said incorrect solutions are indeed incorrect?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Thoroughness**: The paper presents detailed implementations, including training hyperparameters and experimental protocols. This ensures that other researchers can accurately reproduce the experiments and validate the findings. \n2. **Relevance**: This work addresses a critical gap in the field by focusing on reward model evaluation, a crucial area of research that has significant implications for the development of more reliable AI systems.\n3. **Motivation**: The paper presents a compelling critique of the existing Reward Bench evaluation metric, establishing a strong foundation for their work. The authors make a persuasive case for developing benchmarks that minimize over-optimization risks, backing their arguments with experimental evidence. This dual focus on improving metrics while addressing practical concerns demonstrates clear motivation for the research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new benchmark, REWARDMATH, to improve the robustness evaluation of reward models in mathematical reasoning tasks. It highlights limitations in the existing RewardBench benchmark, which relies on single comparisons between chosen and rejected solutions, potentially leading to reward hacking and misjudgments of model robustness. REWARDMATH addresses this by using one-to-many comparisons with multiple incorrect solutions to better capture robustness and reduce the risk of reward over-optimization. 
Experimental results indicate that scores on REWARDMATH strongly correlate with policy optimization outcomes and provide a more reliable measure of reward model robustness compared to RewardBench. This benchmark aims to enhance the development and reliability of RLHF systems, with the findings underscoring the potential of REWARDMATH to serve as a trustworthy evaluation tool in this domain​." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Clarity**: The paper is generally well written, however, it has some clarity issues, especially in section 5, which is hard to follow. Clarification questions are asked in the question section, marked with [Clarification]. The authors should address those questions. \n2. **Benchmark Biases**: The paper has several biases, raising concerns on the claimed robustness and reliability. Examples and comments below:\n\n> Line 206: Hence, we first convert the human-annotated solutions from MATH500 into step-by-step machine-generated solutions. We prompt GPT-4, using 4 carefully crafted exemplars for each math subject as part of the prompt.\n\nAll correct solutions in the benchmark are generated via GPT-4, raising concerns regarding biases towards GPT-series models. Even though the authors manually inspect the solutions, the solutions were still mainly generated using GPT-4. Notably, the authors observe LLM judges from the GPT-4 Series to perform significantly higher than other models (Line 286), which is likely due to this oversight (since it is known LLM judges tend to bias their own response, eg. GPT-4 family judge favors responses from GPT-4 family). 
The authors should use a diverse set of LLMs to curate the correct solutions to avoid potential biases.\n\n> Line 805: Secondly, we instruct GPT-4-0125-preview to select a specific step from the correct solution, transform it into an erroneous step, and then prompt again to continue generating the solutions from the erroneous step.\n\nSimilar to the previous point, employing GPT-4-0125-preview as editor to insert errors into other LLMs’ answers may introduce biases. Additional validation is needed to ensure the benchmark does not exhibit any bias towards GPT family models. \n\n> Line 402: We assume Internlm2-7B-reward, which performs well on both RewardBench and REWARDMATH, as the gold RM.\n\nThe use of Internlm2-7b-reward as the gold standard lacks sufficient justification and raises several concerns about experimental validity. The author relies primarily on performance metrics from RewardBench and REWARDMATH, but this approach is problematic for multiple reasons. First, the authors themselves criticized RewardBench for containing incorrect ground truth data and failing to adequately assess reward models. Second, using REWARDMATH as a benchmark is circular since it's the very dataset being studied. High scores on these benchmarks alone don't necessarily indicate that a reward model can reliably approximate human preferences. To establish Internlm2-7b-reward as a legitimate gold standard, the author should conduct additional validation studies specifically demonstrating its ability to align with human judgment on mathematical tasks.\n\n> Line 426: We find that proxy reward models trained on smaller datasets reach peak rewards at lower KL divergences, indicating faster over-optimization.\n\nThe author assumes KL divergence adequately captures optimization degree without proper justification. KL may not account for other important aspects of policy change. Further study will strengthen the experimental results. \n\n3. 
**Comprehensiveness**: This paper's scope is notably narrow, focusing solely on evaluating reward models' performance on mathematical problems. While it attempts to address limitations in a small subset of the Reward Bench dataset, its improvements remain constrained. The study primarily concentrates on reward over-optimization, overlooking other potential vulnerabilities in reward model benchmarking. Additionally, the benchmark's methodology of comparing one correct solution against multiple incorrect ones limits its thoroughness. Furthermore, the author's assumption that MATH500 adequately represents mathematical reasoning tasks may be oversimplified. These limitations collectively suggest a need for a more comprehensive approach to reward model evaluation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why do the authors choose 10 generations with 9 incorrect and 1 correct answer in the k-way comparisons? How does the choice of k and numbers of correct and incorrect answers affect the resulting correlations and reward overoptimization? The relationship between reward overoptimization and the proposed benchmark needs more rigorous analysis." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper provides a good correlation study between the proposed benchmark and best of N downstream performance in datasets like MATH and Gaokao etc. The proposed one-to-many comparison seems to be on the right direction for better correlation with downstream performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces REWARDMATH, a benchmark for evaluating reward models in mathematical reasoning tasks, arguing that it provides a more reliable evaluation than the existing RewardBench by using one-to-many comparisons instead of pairwise comparisons. The authors validate their approach by showing correlation between benchmark performance and best-of-n sampling results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have the following major concerns:\n\n1. Technical Contribution & Novelty: The primary contribution seems to be replacing pairwise comparisons in reward bench with best of N comparisons, which is an incremental modification rather than a substantial methodological advance. I do not think the change only is sufficient for a publication in top machine learning conference. The correlation between N-way comparison performance and best-of-N sampling results is somewhat expected and doesn't provide deep insights into reward model behavior. \n\n\n2. Unclear Definition of Robustness: The paper uses \"robustness\" throughout but never provides a precise definition. The authors seem to equate robustness with correlation to best-of-n sampling results, but this is circular reasoning since the benchmark itself uses similar methodology. 
There's no clear framework for what properties a \"robust\" reward model should have beyond empirical correlation with certain metrics. \n\n\n3. Limited Experimental Validation: The paper relies heavily on correlation with best-of-n sampling as validation, but doesn't explore other important aspects of reward model behavior. To make the paper deeper and broader, it would be great if the authors could compare the correlations of different real downstream fine-tuning techniques like BoN SFT, DPO, PPO, etc., and see how RewardBench and RewardMath correlate with downstream performance there. It would also be interesting to see if such observations extend to other domains like coding, and perhaps even open-ended generations without ground truth." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q. How much of the relatively poor results for RewardBench is due to the noisy annotations inherited from PRM800K, as mentioned in Sec. 3.1? In other words, could simply fixing these annotations significantly change the comparison?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. 
The paper identifies a particular limitation of RewardBench, a popular benchmark for evaluating reward models, in assessing mathematical reasoning and introduces a new benchmark that addresses this issue by including one-to-many comparison data.\n2. The authors provide extensive experiments and analyses across different reward model types, including both proprietary and open models, and assess various performance metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses a specific limitation of RewardBench, a widely used benchmark for reward model evaluation, in its assessment of mathematical reasoning capabilities. To this end, the authors introduce RewardMATH, a new benchmark that employs one-to-many comparisons of chosen and rejected responses to mathematical questions to enhance evaluation robustness. Experiments show that RewardMATH correlates well with policy performance and is more effective at identifying potential reward overoptimization and the reliability of reward signals." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The benchmark comparison primarily involves RewardBench, which is designed to evaluate reward models more holistically across various domains. However, is the comparison in terms of mathematical reasoning appropriate, given that RewardMATH is specifically designed for this purpose? If RewardBench is indeed the most comprehensive eval set even for mathematical reasoning tasks prior to RewardMATH, it would be helpful to clarify this point.\n2. The benchmark appears to lack cases where reward models must distinguish between correct solutions of varying quality, such as those missing reasoning steps. It is also unclear whether 500 samples is sufficient to cover diverse mathematical reasoning tasks.\n3. Tables 1 and 2 report performance comparisons of various LLMs on RewardBench and RewardMATH. 
The results seem to merely suggest that the two benchmarks differ significantly. Can we conclude from these results that \"high scores on RewardBench do not guarantee **robustness** in reward models\"?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a design for a reliable benchmark for reward models and validate our design using the results of optimized policies and through the lens of reward overoptimization." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024evaluating,\ntitle={Evaluating Robustness of Reward Models for Mathematical Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0er6aOyXUD},\nnote={under review}\n}" }, "abstract": { "value": "Reward models are key in reinforcement learning from human feedback (RLHF) systems, aligning model behavior with human preferences.\nParticularly in the math domain, there have been many studies using reward models to align policies for improving reasoning capabilities.\nRecently, as the importance of reward models has been emphasized, RewardBench was proposed to understand their behavior.\nHowever, we find that the math subset of RewardBench has different representations between chosen and rejected completions, and relies on a single comparison, which may lead to unreliable results as it only sees an isolated case.\nTherefore, it fails to accurately present the robustness of reward models, leading to a misunderstanding of their performance and potentially resulting in reward hacking.\nIn this work, we introduce a new design for reliable evaluation of reward models, and to validate this, we construct RewardMATH, a benchmark that effectively represents the robustness of reward models in mathematical reasoning tasks.\nWe demonstrate that scores on RewardMATH strongly correlate with the results of the optimized policy and effectively estimate reward 
overoptimization, whereas the existing benchmark shows almost no correlation.\nThe results underscore the potential of our design to enhance the reliability of evaluation and to represent the robustness of reward models.\nWe make our code and data publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "mathematical reasoning", "RLHF", "reward models", "reward overoptimization", "language models", "benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1d4b478bfde9d6708c1e37c9fa5ce32c5f4d1a17.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f881b604bd9898f154a5ed11cc10b7f855da77d6.zip" }, "title": { "value": "Evaluating Robustness of Reward Models for Mathematical Reasoning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0eu837jdBD
Autoencoder-Based Hybrid Replay for Class-Incremental Learning
main
Active
Catastrophic Forgetting;Class-Incremental Learning;Continual Learning;Task Confusion.
transfer learning, meta learning, and lifelong learning
3;5;5;5
4;4;1;4
2;3;2;3
2;3;3;2
1;2;1;3
4.5
3.25
2.5
2.5
1.75
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What are the specific challenges encountered when integrating CPSEM and RFA into existing architectures?\n- Have you considered applying AHR to non-vision data, such as text or audio?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ The combination of generative and exemplar replay in a single system that minimizes memory while maintaining high performance is novel.\n+ The proposed method achieves a significant reduction in memory requirements (O(0.1t)), which is crucial for scalability in CIL.\n+ Comprehensive experiments across multiple benchmarks and comparisons with state-of-the-art (SOTA) methods demonstrate the robustness of AHR." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel approach to CIL called Autoencoder-based Hybrid Replay (AHR). This method combines exemplar and generative replay techniques to address key challenges in CIL, such as task confusion and catastrophic forgetting (CF). The hybrid autoencoder (HAE) serves as both a discriminative and generative model, storing data in a compressed latent space with minimal memory (O(0.1t)) compared to traditional exemplar methods (O(t)). 
The use of charged particle system energy minimization (CPSEM) and a repulsive force algorithm (RFA) aids in optimal placement of class centroids in the latent space. The experimental results indicate that AHR consistently outperforms existing baselines across five benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-The paper lacks exploration of real-world applications or more complex, dynamic scenarios beyond standard benchmarks.\n-Performance could be impacted if the autoencoder's compression and reconstruction capabilities are not well-optimized.\n- While memory reduction is emphasized, the impact of this method on significantly larger-scale datasets or more diverse data distributions is not detailed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "I have read this paper carefully. Unfortunately, this paper is totally out of my research area. Therefore, I cannot capture the brilliance of this paper." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is well organized.\n\n2. I appreciate the extensive experiments.\n\n3. 
The idea of modeling the energy dynamics within the system akin to charged particles is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an autoencoder-based hybrid replay (AHR) strategy that leverages a new hybrid autoencoder (HAE) to function as\na compressor to alleviate the requirement for large memory, achieving O(0.1t) in the worst case with a computing complexity of O(t) while accomplishing state-of-the-art performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing of Section 2 (\"OUR STRATEGY: AUTOENCODER-BASED HYBRID REPLAY (AHR)\") is confusing. Please outline the motivation behind each step and explain why it makes sense.\n\n2. The technical contribution is unclear. I would like to know what techniques are used in this paper, and what technical challenges or novel ideas the proposed method involves." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "please refer to weakness section" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1) Easy to follow and addresses the important topic of computational burden for continual learning algorithms\n2) The motivation behind the Hybrid replay is well written by contextualizing the current works of literature along with their gaps" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Hybrid Autoencoder (HAE) and Autoencoder-Based Hybrid Replay (AHR) strategies to reduce the memory burden for CIL, especially for replay-based approaches. HAE combines both discriminative and generative modeling to handle classification and replay tasks. It employs Charged Particle System Energy Minimization (CPSEM) equations and the Repulsive Force Algorithm (RFA) to manage class separation within its latent space, enabling class identification using Euclidean distance. AHR integrates exemplar and generative replay strategies by storing samples in the latent space, which significantly reduces memory usage. Its decoder is designed to memorize training data, allowing for effective replay without the typical issues of hazy pseudo-data found in other generative approaches. Simulations in various benchmark datasets also validate the hypothesis." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Inconsistent notation\n: In the explanation of AHR (184) you are referring are T_l but in the algorithm, it is T_i which is the same for P\n : what does the * refer to in algorithm 1?\n : What is J_l?\n : ^ in explanation and ' is used in Figure 1 are interchangeably used for generated output\n : what is \\mathcal{T} in Figure 1?\n 2) Is there any explanation for how the memory is reduced to 0.1t\n\n 3) It is claimed that the complexity reduces to 10%, but no empirical evidence is provided to validate that hypothesis\n4) Evaluation metric: It is unclear to readers, (line 373) does the accuracy represents the accuracy on the only last task after training on all tasks or is average on all previous tasks.\n5) How does the number of exemplars decrease over time? You are representing the exemplars in latent space? Does it mean reducing the number of classes?\n(Table 3) Why the different numbers of epochs for AHR and others? Please be clear on the size of latent and raw ? is 150 (latent) better than 20 (raw)\n6) It would be clearer to the readers if there was some explanation of how CPSEM and RFA create incremental embeddings\n7) The main objective of this paper is to reduce the size of exemplars in memory. In the related work section, the authors focus on mainly describing the current replay mechanism without mentioning how the current strategies fall short in reducing the size and its relation to AHR\n\n\n8) There is no comparative analysis of the work with state-of-the-art replay methods such as:\n I) Rolnick, David, et al. \"Experience replay for continual learning.\" Advances in neural information processing systems 32 (2019).\n\n II) Buzzega, Pietro, et al. 
\"Dark experience for general continual learning: a strong, simple baseline.\" Advances in neural information processing systems 33 (2020): 15920-15930.\n\nwhich makes it challenging to assess the significance of the work in the literature.\n\n### Comments on evaluation: \nI am mainly concerned about the accuracies for CIFAR100 and mimiImageNet. There are various works [FeTrIL [1] by Petit, Grégoire, et al., FeCAM [2] by Goswami et al] utilizing ResNet-18/32 achieving higher accuracy of more than 65% even in exemplar-free settings. I wonder how with an exemplar, the model is not able to maintain that accuracy.\n\n[1] Petit, Grégoire, et al. \"Fetril: Feature translation for exemplar-free class-incremental learning.\" Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2023.\n\n[2] Goswami, Dipam, et al. \"Fecam: Exploiting the heterogeneity of class distributions in exemplar-free continual learning.\" Advances in Neural Information Processing Systems 36 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Overall, I believe the paper has significant issues with presentation (as outlined in points [a-e] of the weakness section), which necessitate a major reformatting. 
Additionally, regarding other concerns, the novelty appears limited to performing classification at the encoder level and the introduction of CPSEM for initializing class centroids. While the latter is a novel contribution, it is neither well-explained nor well-motivated (as noted in points [f-g] of the weakness section). The experimental section, particularly the method comparison, needs refinement by considering recent related work on the proposed approach, utilizing higher-resolution benchmarks, and employing architectures with a larger latent space (as indicated in points [h-i] of the weakness section). Moreover, the details about the training resources should be clarified (point [l] in the weakness section).\n\nI believe the paper has the potential for significant improvement for a future submission. To enhance its novelty, the authors should focus more on the description and proposal of the class centroid initialization, which could provide both theoretical and empirical insights. However, as previously mentioned, these insights are lacking in the current version. Additionally, improving the experimental section would further strengthen the submission.\n\nConsidering all the above, I recommend rejecting the current submission. The paper is not yet ready for publication and requires significant revisions. I suggest making these improvements and submitting it to a different venue." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The Charged Particle System Energy Minimization (CPSEM) method for initializing class centroids for the encoder training is novel and interesting.\n- Several comparisons are carried out in the experimental section, and the method shows good performance on the benchmarks and methodologies used for comparison.\n- Detailed analysis on resource consumption is provided.\n- The ablation study comparing the proposed AHR with AHR using original images (AHR-lossless) highlights that the quality of the images generated by the encoder is sufficiently good for replay, as AHR with original images achieves similar performance. I appreciated this analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the problem of Class-Incremental Learning by introducing a Hybrid Autoencoder (HAE). The proposed model is designed for both exemplar replay during incremental training and classification at inference. The autoencoder consists of two components: an encoder, which is trained to minimize the Euclidean distance between the latent representation and the corresponding class centroid, and a decoder, which is trained to minimize the reconstruction error between the input and output images. Both the encoder and decoder are trained with a distillation loss on the previous task to mitigate activation drift.\n\nAt the encoder level, the class centroids serve as anchors in the latent space, guiding the latent representations towards their respective classes. These centroids are initialized before training using the Charged Particle System Energy Minimization (CPSEM) method, which ensures that the centroids are well-separated. 
After training, a nearest-mean classification rule is applied to classify test images based on the proximity of their latent representations to these centroids.\n\nIn the post-training phase, the encoder's latent space output is used to populate a replay buffer with latent representations of the current task samples, following a herding strategy. These representations are replayed in the next task by feeding them into the decoder trained on the previous task, which generates the corresponding images. These generated images are then used for training on the new task. Instead of storing images, as is typically done in incremental learning, the proposed method reduces memory requirements by storing only the latent space representations.\n\nThe authors compare their approach to several incremental learning methods on benchmarks such as MNIST, Balanced SVHN, CIFAR-10, and miniImageNet. They also analyze the method's resource consumption, evaluate different decoder sizes, and provide an ablation study by comparing its performance when real images are used during the replay phase instead of the generated ones." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, I believe the presentation of the paper requires significant improvement. Below are my major concerns regarding the presentation:\n\n- (a) The complexity analysis in the introduction needs to be clarified and expanded. The notation $O(0.1t)$ is incorrect according to the definition of Big-O notation. What does this represent? If the authors intend to convey that memory is saved by storing the latent space representation, I suggest incorporating both the latent space dimension and image size into the complexity analysis. Additionally, the term $e$ in $O(cte)$ is not defined and requires clarification.\n\n - (b) The notation throughout the paper is difficult to follow, with multiple indices used unnecessarily (e.g., in Equation 1). 
This excessive use of notation makes the paper hard to read. I suggest simplifying the notation wherever possible.\n\n- (c) Equation 1 seems to imply that all examples from previous tasks are needed to minimize the reconstruction error. However, I understand that this is not the case—some examples are real images, while others are generated. The replay buffer should be explicitly highlighted to clarify this in the equation.\n\n- (d) The three pseudocode blocks on page 4 make the methodology difficult to follow. Including all three pseudocode blocks on a single page compresses the accompanying methodology description into less than half a page. As a result, some LaTeX formulas in the main text break across two lines, further increasing the difficulty of reading.\n\n - (e) The organization of the paper should be reconsidered. Given the limited space for submission, dedicating more than two pages to the literature review while allocating just over one page to the methodology does not allow for a proper description of the proposed approach. I suggest moving the extended literature review to the appendix and presenting a more concise version in the main paper.\n\nRegarding the methodology and experimental section, my major concerns are as follows:\n\n- (f) The introduction of the Charged Particle System Energy Minimization (CPSEM) for initializing class centroids is interesting but requires additional explanation. It is unclear why this type of initialization benefits the autoencoder and how it relates to the Coulomb interaction energy . While I do not expect a full background on the calculus of variations for minimizing energy, more mathematical details—even in the appendix—would be helpful. An analysis of how centroids are distributed in the latent space is required to underline why the proposed strategy is effective. Furthermore, the CCE (class centroid embedding) placement is explained only through pseudocode, with no accompanying description. 
At a minimum, the operations performed in Algorithm 2 should be explained in words to provide intuition, especially for readers unfamiliar with the physics-based intuition behind this algorithm.\n\n- (g) Regarding CCE, why is this initialization considered effective? If the goal is to initialize centroids such that the class centroids are distant from each other, why not simply use the K-means algorithm? Alternatively, why not select class centroids as the latent vectors for each class that are the most distant from each other in terms of Euclidean distance, in a similar way as performed with hard negative sampling?\n\n- (h) The usage of the latent space for memory reduction and the decoder for latent replay is not novel. For example, Ayub et al. (ICLR 2021) [1] employ the encoder for storing latent representations and replay these latent representations in subsequent incremental learning steps. When the memory budget is reached, the latent representations are compressed into centroids and covariances. A comparison with their approach is necessary. The storage of latent representations for incremental learning and efficient memory replay with autoencoder is also explored in [2].\n\n- (i) Comparison in Table 2. 1) Some comparisons are unnecessary since the methods the authors compare to perform different tasks. For instance, Prediction Error Based Classification (PEC) [3] is designed for online continual learning (single-pass data), while the authors address the problem of offline incremental learning. It is clear that the harder setting for which PEC is designed results in lower performance compared to the authors' method. The authors should compare their approach only with offline class incremental learning methods under the same conditions. 2) Since the authors' method operates in an offline incremental learning setting, they should compare it with recent exemplar-based class incremental approaches, such as X-DER [4] and MEMO [5]. 
Additionally, the comparison should consider using a larger and more realistic backbone, such as ResNet-18, which is now commonly evaluated with more parameters and a larger latent space [4][5]. The paper should also evaluate how the method performs with higher-resolution images (e.g., 224x224) on a dataset like ImageNet100 [5].\n\n- (l) In Table 3, the authors report the wall-clock time for training their methods. They state that training takes about 8 hours on CIFAR-100 with ResNet-32, which seems excessive. In FACIL [6], joint training on a not particularly novel GPU requires less time, as only about 400k parameters need to be optimized. What are the timings for joint training with the same epoch budget? An incremental learning method should be more efficient than joint training. Additionally, the authors should specify the device used for the experiments when reporting training time.\n\n[1] A. Ayub and A. Wagner, “{EEC}: Learning to encode and regenerate images for continual learning,” in International\nConference on Learning Representations, 2021.\n\n[2] Caccia, L., Belilovsky, E., Caccia, M. &amp; Pineau, J.. (2020). Online Learned Continual Compression with Adaptive Quantization Modules. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research.\n\n[3] Michał Zaj ˛ac, Tinne Tuytelaars, and Gido M van de Ven. Prediction error-based classification for\nclass-incremental learning, in ICLR 2024\n\n[4]Zhou, Da-Wei and Wang, Qi-Wei and Ye, Han-Jia and Zhan, De-Chuan, A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning, in ICLR 2023 \n\n[5] Matteo Boschini, Lorenzo Bonicelli, Pietro Buzzega, Angelo Porrello, Simone Calderara. Class-Incremental Continual Learning into the eXtended DER-verse, in TPAMI 2022 \n\n\n[6] Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. 
Bagdanov, Joost van de Weijer Class-incremental learning: survey and performance evaluation, TPAMI 2022" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A memory-efficient architecture for incremental learning based on model-based and exemplar-based approach." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024autoencoderbased,\ntitle={Autoencoder-Based Hybrid Replay for Class-Incremental Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0eu837jdBD},\nnote={under review}\n}" }, "abstract": { "value": "In class-incremental learning (CIL), effective incremental learning strategies are essential to mitigate task confusion and catastrophic forgetting, especially as the number of tasks $t$ increases. Current exemplar replay strategies impose $\\mathcal{O}(t)$ memory/compute complexities. We propose an autoencoder-based hybrid replay (AHR) strategy that leverages our new hybrid autoencoder (HAE) to function as a compressor to alleviate the requirement for large memory, achieving $\\mathcal{O}(0.1 t)$ at the worst case with the computing complexity of $\\mathcal{O}(t)$ while accomplishing state-of-the-art performance. The decoder later recovers the exemplar data stored in the latent space, rather than in raw format. Additionally, HAE is designed for both discriminative and generative modeling, enabling classification and replay capabilities, respectively. HAE adopts the charged particle system energy minimization equations and repulsive force algorithm for the incremental embedding and distribution of new class centroids in its latent space. Our results demonstrate that AHR consistently outperforms recent baselines across multiple benchmarks while operating with the same memory/compute budgets." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Catastrophic Forgetting", "Class-Incremental Learning", "Continual Learning", "Task Confusion." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d5d18926236cda4e00b58d03c40e762b89ecc4c5.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/1e7fc3634266b38a80c28780735cf0abb3833d13.zip" }, "title": { "value": "Autoencoder-Based Hybrid Replay for Class-Incremental Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0fD3iIBhlV
Emergence of a High-Dimensional Abstraction Phase in Language Transformers
main
Active
interpretability;intrinsic dimension;large language models
interpretability and explainable AI
5;5;6;6;8
3;3;4;3;5
2;3;2;3;3
3;3;2;2;3
3;3;3;3;3
6
3.6
2.6
2.6
3
0.912871
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "While the analyses show some interesting trends, it is difficult to tell how meaningful or significant the numerical differences are. Methods for analyzing LLM layers other than through ID could have been discussed in a prior work section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper conducts a broad analysis across 5 different LLMs and considers a range of questions and ablation studies (e.g., estimating ID on shuffled data, comparing layers across different models); altogether an impressively broad set of experiments. The paper is clearly written and presents a few new insights (e.g., correlation between peak onset and performance). Code and data would be made available, which would be valuable for the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper uses the technique of intrinsic dimension estimation as a tool for analyzing properties of different transformer LLM layers. 5 different LLMs are analyzed on textual inputs from 3 different public-domain corpora. 
In addition to computing the intrinsic dimensionality (ID) (using the generalized ratios intrinsic dimension estimator) for different layers, the ID is correlated with performance of different layers' representations on syntactic and semantic probing tasks. Furthermore, the difference in representational power between different layers is measured using an Information Imbalance criterion. The authors find that middle layers in LLMs have the highest ID; ID peaks seem to be an indicator of linguistic structure being learnt; early onset of peaks in ID across layers is correlated with better next token prediction performance; and high ID peak layers are representationally equivalent across different LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The use of ID as an analysis tool for LLM layers is not an entirely new idea (e.g., https://arxiv.org/pdf/2402.18048). \nMost of the results (e.g., the peaking of ID at middle layers, emergence of linguistically informative representations in those layers) have been shown before by means of other methods (e.g., mutual information or canonical correlation analysis). These should have been discussed in more detail under prior work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1) Please check the questions listed under Weaknesses."
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1)\tAlthough inspired by the earlier work of (Valeriani et al., 2023), this work greatly extends the models investigated to 5 distinct mainstay transformer-decoder-only LLMs and added more extensive probing and downstream tasks on defined datasets to analyze ID profiles across layers. Hence, the conclusions drawn in this work are verified across various models, datasets, and tasks, making the findings more convincing.\n\n(2)\tThe comparisons to related works, esp. (Valeriani et al., 2023) which inspires this work, are clearly presented, hence the contributions of this work are clear and solid.\nThe verification of the emergence of a central high-dimensionality phase, and analysis of language processing behavior and performance during the high-dimensionality phase are quite thorough.\n\n(3)\tThe analysis in Conclusion demonstrates that many findings in this work align with prior works and concurrent works. The paper clearly summarizes insights of guidance for future research. The Appendix provides detailed experimental setup and additional results. And finally, the analysis of potential applications of the findings is valuable to the research community.\n\n(4)\tOverall, the paper is clearly written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work takes a high-level geometric approach to analyze intrinsic dimension (ID) of the representational manifold at each layer of a decoder-only Transformer LLM to understand how layer geometry relates to layer function. 
Although inspired by the earlier work of (Valeriani et al., 2023), this work greatly extends the models investigated to include five mainstay decoder-only LLMs, and adds more extensive probing and downstream tasks on defined datasets to analyze ID profiles across layers. The resulting observations are different from those of (Valeriani et al., 2023). This work makes quite a few interesting findings by detecting broad qualitative patterns, and provides useful guidance for future research towards interpretability, analysis of model behavior and quality, and model pruning and layer-specific fine-tuning, etc." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1)\tAlthough the paper is overall clearly written, please make sure that every symbol used is clearly defined when it first appears, e.g., d in Section 3.4.\n\n(2)\tPlease provide rationale for critical algorithmic designs, for example, please clarify why GRIDE is selected, and why the three alternative measures for comparing layers’ representation spaces are chosen. \n\n(3)\tCurrently, k is still selected based on visual inspection. It would be useful to propose methods that can automatically select k.\n\n(4)\tIt is interesting that OLMo seems a bit of an outlier compared to the other 4 LMs, although it also exhibits the ID peak and other related properties. It would be useful to provide insights on why OLMo behaves differently from the other models, and shed light on patterns of any potential “outlier” LM." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. There seems to be a second ID peak in the later layers across LLMs. Do you think this second ID peak might reveal additional insights?\n2. In your analysis (Figure 4), you observed that Pythia and OPT exhibit very similar representations. Could this similarity be attributed to pre-training on similar datasets? If so, how might this influence your findings, and have you considered controlling for dataset overlap to isolate structural factors more effectively?\n3. The work focuses on classification tasks to analyze representation spaces in language models. Could you explain why generative tasks were not included? Do you expect the observed ID peaks and representation patterns to differ in generative contexts?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This work conducts experiments on various LMs (e.g., OPT-6.7B, Llama-3-8B, Pythia-6.9B, OLMo-7B, and Mistral-7B) using multiple datasets, providing a comprehensive analysis. It also observes how representational intrinsic dimensionality (ID) varies across layers and proposes insightful hypotheses. Furthermore, this work inspires the research community to explore the utilization of ID information in transformer-based LM applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores how transformer-based language models evolve their internal representations across layers, revealing a distinct high-dimensional abstraction phase.
The authors observe these findings across multiple LMs and datasets, and they provide a foundation for better understanding and optimizing language model architectures. This work bridges the gap between geometric representation and linguistic function in transformer-based LMs. Also, it highlights the potential of intrinsic dimensionality as a tool for analyzing and evaluating LMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper combines two methods, GRIDE (Denti et al.) and Information Imbalance (Glielmo et al., 2020), to analyze four large language models (LLMs), this combination may fall short in terms of novelty. In Section 4.1, the choice of pre-training datasets for evaluation is also a limitation. Since these datasets have likely been encountered by the models during training, the results may not provide a fully accurate picture of the models’ generalization capabilities. Testing on unseen datasets would be crucial to evaluate the robustness and generalizability of the observed patterns, especially in real-world applications where unseen data is the norm. The study is limited to a narrow range of LLMs in terms of scale. Evaluating models of varying sizes (e.g., smaller models alongside large ones) would offer a more comprehensive understanding of how model size impacts intrinsic dimensionality and representation overlap across layers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I strongly advise improving the submission w.r.t. the mentioned weaknesses. That helps both quality and reach.\n\nThe first paragraph of Asset Section C.1 (lines 809-836, in particular 828-829) mentions sensitivity of ID estimation w.r.t. noise, small scales, density variations and curvature. That analysis suggests some sort of frequency decomposition integrated with the ID estimation." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The findings in bold letters at lines 406-407 and lines 425-426 may be useful to some researchers that need to train and/or select models. The fact that ID seems to change gradually over layers is interesting, but may have a simple explanation in the extreme averaging scale of these models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The submission is analytic work trying to correlate intrinsic dimension of intermediate NN/LLM representations with linguistic targets. Comparison of the method applied to different models allows insights into some of their learned structural differences." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Except for the few strengths mentioned above, the submission does not explain what else the gained insights can be used for, or whether they are more useful than that at all.\n\nThe analysis focuses only on fully trained models and does not provide insights into how ID changes over time. A correlation analysis to other work would add more value. My first thought was a correlation to the IB method (e.g.
Tishby et al. 2000 and Shwartz-Ziv & Tishby 2017), but this may not be the only or best choice.\n\nThe submission wrongly mentions PCA being linear (line 061, applies only to its original form) which leads to the quick conclusion to discard it. This is puzzling as the research on non-linear PCA is quite diverse based on very different techniques and there's even early work using neural networks dating back to 1991 (Mark Kramer: \"Nonlinear PCA Using Autoassociative NNs\")." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In the data section, it is not very clear to me what it means to \"extract 5 non-overlapping partitions of 10k 20-token sequences\" and how the shuffled version is generated, can the authors explain more about this?\n\nIn section: The ID peak marks a transition in layer function, I think the relation between ID peak and \\delta(l_i \\to l_first) is not very clear. It has a very similar shape in OPT and somewhat in Pythia, but Llama has a completely different curve. It is maximizing towards the end of the layers instead of the center of the layers.\n\nIn section 4.2, the authors also claim a relation between the ID peak and a few tasks. However, Figures (a) and (b) do not show a very clear correlated trend between ID peaks and task performance. In particular, task performance in Figure 5(b) seems to be monotonically increasing instead of peaking in the middle.
Can the authors elaborate on this?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I like that the authors combine evidence from a few different perspectives to demonstrate the relation between the intrinsic dimension peak and the transition to abstract processing. They also conduct experiments on a few corpora and a few models as well, which makes the claims more general and robust" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work analyzes properties of representations of several LLMs through a few approaches: downstream probing, intrinsic dimensionality and information imbalance. The analysis is mainly developed around intrinsic dimension and they show LLMs typically have a few intrinsic dimension peaks across layers. Additionally, they suggest that those peaks indicate a transition to abstract linguistic processing through a variety of analyses" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The method section is weak and the explanation of how the intrinsic dimension is computed is insufficient given its importance in this work. I was not able to identify which variable corresponds to the intrinsic dimension without going through the cited paper. It seems to be the variable $d$, which the authors never define.\n\nAdditionally, the authors make an incorrect claim in lines 177-178 that \\mu has a generalised Pareto distribution. I cannot find any resources claiming this specific distribution is a (generalized) Pareto distribution, including the original cited paper."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024emergence,\ntitle={Emergence of a High-Dimensional Abstraction Phase in Language Transformers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0fD3iIBhlV},\nnote={under review}\n}" }, "abstract": { "value": "A language model (LM) is a mapping from a linguistic context to an output token. However, much remains to be known about this mapping, including how its geometric properties relate to its function. We take a high-level geometric approach to its analysis, observing, across five pre-trained transformer-based LMs and three input datasets, a distinct phase characterized by high intrinsic dimensionality. During this phase, representations (1) correspond to the first full linguistic abstraction of the input; (2) are the first to viably transfer to downstream tasks; (3) predict each other across different LMs. Moreover, we find that an earlier onset of the phase strongly predicts better language modelling performance. In short, our results suggest that a central high-dimensionality phase underlies core linguistic processing in many common LM architectures." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "interpretability", "intrinsic dimension", "large language models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/44bbf2fa310ab7e9b22953196bd6ec5075ee7736.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ed12e7e0818a442228e9843cb02e29791857cba6.zip" }, "title": { "value": "Emergence of a High-Dimensional Abstraction Phase in Language Transformers" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0fJfVOSUra
ThunderKittens: Simple, Fast, and $\textit{Adorable}$ Kernels
main
Active
Systems;Kernels;Efficiency;Efficient Models;IO Awareness;GPUs
infrastructure, software libraries, hardware, systems, etc.
5;5;6;8
5;4;4;4
3;3;3;3
2;3;3;3
4;2;3;3
6
4.25
3
2.75
3
-0.471405
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Thanks for submitting the excellent paper to ICLR. While in general I enjoyed reading the paper, I have a few thoughts on the extension of the paper. Specifically, this paper proposes a new CUDA abstraction that allows users to write new kernels. However, it seems that it is built on top of the fact that all the dimensions should be a multiple of 16. This could be problematic in the context of dynamic shapes where the dimension is not divisible by 16. Could you please elaborate on how the proposed technique could be extended to such cases?\n\nBesides, the paper uses auto-tuning to further adjust the hyperparameters for better performance. Could you elaborate on how large the tuning overhead is?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper proposes methods at different levels that simplify CUDA kernel implementations\n* The paper can achieve a similar performance compared to the state-of-the-art implementation" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new programming library for implementing efficient CUDA kernels.
The paper contains three ideas at three different levels of CUDA kernel implementation: (1) At the warp level, the authors propose to organize tiles as multiples of 16; (2) at the thread-block level, the authors devise template libraries to overlap between different asynchronous warps; (3) at the grid level, the authors propose methods for managing kernel launch overheads. As a result, the proposed library can achieve a performance on par with the existing state-of-the-art implementations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper has not discussed the tuning overhead with the proposed techniques." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1)\tIs your framework limited to the Hopper series? Can it be applied to A100s, or other GPUs such as the A40/L40?\n2)\tYou focus on the 16x16 register block level, but how can your framework be extended to smaller blocks, such as with GEMV, sparse operations, and masked operations (e.g.
non-power-of-two dimensions and strided masking, such as in Natten).\n3)\tThroughout the paper, you focus on BF16 precision (with the exception of softmax); have you considered other data types, such as integer types or floating-point formats like FP8?\n4)\tHow could your framework be extended to handle multi-GPU operations, such as Fully Sharded Data Parallel (FSDP) for split operations? This seems like a natural extension of the producer-consumer model.\n5)\tYou compare yourself against Triton, which also supports AMD GPUs. Can you address this as a potential tradeoff in the paper? Alternatively, if your framework can be trivially extended to ROCm, this should be included in the paper with a demonstration, otherwise it represents a tradeoff between efficiency and portability.\n6)\tYour cost model in Section 2.2 is effectively a Roofline model; could you contextualize this in the existing literature? The results in Table 3 are expected, as reordering increases the arithmetic intensity (FLOPs/Byte) of the inner loops.\n7)\tThroughout the paper, the emphasis on industry versus academic adoption (including the use by undergraduates) feels extraneous and detracts from the main narrative. The paper’s contributions should stand on their own without reliance on external endorsements or applications.\n8)\tFigures 2 and 5 present a simplified sketch for softmax, whereas the true implementation is significantly more complex, potentially leading to a misleading comparison with PyTorch. Furthermore, Figure 2 led me to question why you are using C at all for the API, when the listing could easily have been captured by a python trace (e.g. Triton). This design choice is only clarified upon reviewing the implementation details provided in the appendix and supplementary material.\n\nTo build on these questions, the feedback below addresses specific technical details and aims to enhance overall clarity. 
While this paper presents a strong contribution toward improving kernel efficiency, addressing these points will better showcase the authors’ contributions.\n\nMinor Technical Errors:\n\n-\t044: The H100 datasheet shows a 7.4x ratio between TCs and ALUs, not 16x. Additionally, my understanding is that the TCs necessarily require bubbles as the Register path cannot keep up with the TC I/O for full throughput. \n-\t136: This should be \"can load\" or \"may load\" instead of \"loads.\" In general, a kernel does not necessarily need to load data from memory. Kernels can rely solely on arguments (loaded into registers at startup) to generate new data. For example, a kernel might generate a pseudo-random noise tensor without accessing memory.\n-\t148: The 32 threads must be within the same quadrant, where “consecutive” or “adjacent” would be more appropriate than “nearby”.\n-\t150: In Ampere, a warp cannot simultaneously occupy different functional units, though separate warps can. For accuracy, please verify this claim against the Hopper documentation or micro-benchmarking paper, otherwise consider omitting if verification is unavailable.\n-\t167: Excess registers spill over into Global Memory, not L1. They can appear in L1 due to the memory hierarchy, but this is at the discretion of the hardware cache manager.\n-\t171: Multiple thread blocks can only schedule on the same SM if there is sufficient space (e.g. SMem), otherwise they would clobber each other.\n-\t173: This statement should be more precise to mention “all thread blocks” and that the L2 is hardware managed, making it distinct from the software managed SMem.\n-\t179: The tail-effect cost mentioned only applies to singular kernels. 
Ideally the GPU should have multiple kernels in flight, which can run concurrently.\n-\tIt would also be relevant to mention that kernels which contain too many instructions can cause slowdown as they will incur ICache misses.\n\nPresentation Issues:\n\n-\tThe abstract should be revised for clarity, with suggested improvements like “creates a”, “suggest that”, and “resembling PyTorch.”\n-\tThe paper could benefit from clarity revisions in several sections, where phrasing and word choice could make technical details easier to follow. Lines: 073, 170, 178, 205, 278, 299, 301, 328, 370, 397\n-\t325: You should not use \"[1]\" and \"[2]\" to enumerate concepts as they are easily confused with reference indicators. \n-\tTable 2 and Table 3 should probably be Figures like Figure 6. It is also unclear why these stop at 4 stages, K=1024, and what K is. (MxN)x(NxK)? \n-\tFigure 7 and 8 should use subfig captions rather than plot titles. If parameters are common among subfigures, then they should be stated in the figure caption, otherwise in the subfig caption. The fontsize for the axis and labels is too small. Finally, the batch size does not match with the titles and caption.\n-\tThe table in Section 4.2 is missing a caption and column (TK is listed twice).\n-\tThe reference links are broken in Appendix B." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors demonstrate significant improvements to computational efficiency within a clearly defined framework that appears relatively straightforward to adapt. Their framework also provides functionality for more complex resource management, which is often challenging to manage directly in CUDA. Additionally, the authors demonstrate the impact of varying hyperparameters for several key kernel operations, most of which match or exceed standard baselines. 
Lastly, the results show a surprising contrast with Triton implementations, positioning their approach within the CUDA domain while achieving a similar level of complexity to Triton." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework to facilitate easy writing of efficient CUDA kernels. The authors leverage the asynchronous compute capabilities of the Hopper series GPUs by following a producer-consumer paradigm, to efficiently overlap different kernel operations. Additionally, the authors investigate the impact of various memory ordering strategies, demonstrating that relatively simple strided patterns offer the best tradeoffs. Lastly, the authors demonstrate performance that is comparable to or exceeds existing methods, including Triton.\n\nOverall, the work provides a significant contribution to improving computational efficiency for common operations, though the application appears limited in scope. Additionally, minor technical and structural errors impact readability. These issues could be addressed in a revision, at which point I would be inclined to raise my score." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tThe application appears limited in scope, which should be explicitly addressed. For example, is the framework limited to Hopper GPUs and above? And the focus on 16x16 register blocks may limit extensibility to other common cases such as GEMV and sparse computations.\n-\tThe paper contains many issues with presentation, including caption errors, grammatical and awkward wording, and typos, all of which impair readability. \n-\tThe paper overlooks relevant computer architecture literature regarding performance modeling, specifically in the context of balancing compute and memory (e.g. roofline analysis). Many of the findings presented in the paper are expected from the existing literature." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. What are the fundamental challenges preventing CUTLASS from avoiding bank conflicts? Could it be that the FlashAttention3 kernel simply did not select the optimal layout?\n2. CUTLASS has implemented both ping-pong and cooperative kernel variants for GEMM, with varying performance across different scenarios. How does TK support ping-pong and cooperative kernels, and could you include a comparison with CUTLASS in Figure 7’s GEMM kernel results?\n3. TK appears designed specifically for the Hopper architecture with asynchronous features. Is it also compatible with Ampere or other GPU generations? How does TK’s performance on an A100 compare to Triton?\n4. Following Q3, if Blackwell GPUs were released, would TK’s abstractions remain applicable? How do you plan to ensure extensibility across GPU generations?\n5. What's the usage of the cost model in Section 2.2? This formula is highly simplified and does not guide any optimization or automatic search later.\n6. Section 3.1 discusses various layouts — do users need to manually manage data organization and specify layouts in TK?\n7. Figure 5 is just some wrappers of mbarriers. Any insights here?\n8. Can TK effectively handle quantized kernels, where data layout is crucial for efficient transfers from TMA and WGMMA computation? 
How does it perform on FP8 GEMM and FlashAttention kernels?\n9. What is TK's performance on causal attention kernels?\n10. Please provide detailed experimental configurations in the Appendix. For example, which versions of PyTorch and Triton were used? Was `torch.compile` employed to optimize those network layers? For cuBLAS, was the latest [cuBLASLt](https://developer.nvidia.com/blog/introducing-grouped-gemm-apis-in-cublas-and-more-performance-updates/) autotuning enabled? Since PyTorch also uses Triton as a backend, what distinguishes the two baselines in Figure 8?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The TK library provides a useful abstraction for writing high-performance asynchronous kernels on GPU.\n2. The presentation is clear and accessible, especially the introductory sections on GPU architecture, which provide a helpful overview for ML researchers who may lack in-depth experience with GPU programming.\n3. The experimental results are compelling, showing performance on par or better than highly optimized kernels, such as FlashAttention3. The paper also demonstrates significant speedups across different kernel types compared to state-of-the-art frameworks like Triton and PyTorch." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents ThunderKittens (TK), a C++ embedded library for writing high-performance CUDA kernels for NVIDIA GPUs. It introduces warp-, thread-block-, and grid-level abstractions to facilitate mapping of kernels to the GPU hierarchy. Experimental results indicate that TK can outperform strong industrial baselines, achieving superior performance for GEMM and attention kernels." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The TK library is still too low-level with too many details, which requires users to manage synchronization carefully and does not simplify the programming burden.\n2. The novelty and advantages of TK over CUTLASS are unclear. Many functionalities seem achievable with CUTLASS as well. The authors mention that TK addresses bank conflicts, but the evidence presented is minimal. There appear to be no inherent limitations in CUTLASS that would prevent it from avoiding bank conflicts.\n3. Similarly, the benefits of TK over Triton are not well established. Triton, embedded in Python with a PyTorch-like API, may offer a more accessible interface. By contrast, TK, embedded in C++, still requires explicit handling of communication with mbarrier operations like expect and arrive. No user study or lines of code comparisons are provided to demonstrate that TK improves programmer productivity.\n4. Experimental results are good, but still missing comparisons in some important cases like quantized kernels and causal attention.\n5. The work reads more like a system paper, with limited ML-focused insights, raising questions about its fit for ICLR.\n\nMinor:\n- P4: \"Since the frameworks are not C++ embedded, it can be challenging to use specialized hardware instructions\" This statement is inaccurate; TVM provides mechanisms to incorporate low-level TensorCore instructions, and Triton also has [inline](https://triton-lang.org/main/python-api/triton.language.html#inline-assembly) operation to include PTX code.\n- Section 2 does not discuss the Tensor Memory Accelerator (TMA) on Hopper, which is essential for asynchronous optimizations mentioned in Contribution 2.\n- Appendix B labels appear broken (??)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could the authors elaborate on the potential for cross-platform compatibility? Given the focus on NVIDIA’s H100 GPUs, it would be helpful to understand whether TK’s abstractions could be adapted to other GPU architectures, like AMD or Apple, and what challenges might arise.\n\nThe paper demonstrates TK’s strong performance on medium-sized data blocks, but could the authors provide more insights into how well TK scales with very large datasets? Are there specific limitations to consider for applications requiring high parallelization or extensive data handling?\n\nCould the authors expand on their design choice to limit TK to a few key abstractions? Are there specific reasons why additional templates or adaptive features were not incorporated, and would doing so have risked undermining the framework’s simplicity?\n\nIn scenarios with high memory demands, how does TK manage the balance between memory overhead and computational efficiency? Further detail on this balance could clarify TK’s suitability for applications with varied memory and compute requirements.\n\nLastly, could the authors clarify TK’s debugging process, especially for users who may not be familiar with GPU optimization? 
Since GPU kernel errors can be challenging to diagnose, any insights into how TK might support error handling and debugging would be valuable for potential adopters." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper offers a fresh and practical approach to GPU kernel programming, using only a handful of essential abstractions to make high-performance kernel writing accessible to a wider range of developers. This simplicity-oriented approach can reduce the complexity typically associated with GPU development, which could be particularly valuable for those without extensive CUDA experience. In terms of performance, THUNDERKITTENS shows impressive results, even surpassing established libraries like CuBLAS and FlashAttention-3 in several tasks, especially in backward pass operations for attention mechanisms and linear attention. The results strongly suggest that TK’s design strikes a good balance between simplicity and performance optimization. Furthermore, by aligning its design with PyTorch and NumPy, TK makes it easier for non-specialists to adopt, potentially expanding the accessibility of efficient GPU programming." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces THUNDERKITTENS (TK), a framework that simplifies writing AI kernels for GPUs while still allowing for high performance. Using a few key abstractions, TK provides tools for developers to create efficient kernels without deep expertise in GPU programming. Through benchmarking, the authors show that TK performs on par with or better than other leading frameworks like CuBLAS and FlashAttention-3 for various AI tasks. TK’s accessible design, inspired by PyTorch and NumPy, aims to make high-performance kernel development more straightforward and accessible to a wider audience." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1- While the minimalistic design is a key strength, it may also limit TK’s flexibility for more specialized AI tasks that require tailored optimization strategies. As demands grow for handling complex and emerging AI workloads, the current set of abstractions could potentially fall short.\n\n2- The focus on NVIDIA’s H100 GPUs raises questions about how well TK can transfer to other platforms, such as AMD or Apple GPUs. Expanding on cross-platform compatibility would provide more clarity about TK’s broader usability.\n\n3- Though the paper demonstrates strong performance on medium-sized data, it is less clear how TK handles scalability with very large datasets or highly parallelized scenarios. Addressing its limitations in these settings could further support TK’s value in real-world applications." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "ThunderKittens (TK) is a framework that simplifies the creation of high-performance AI kernels through key abstractions, enabling efficient implementation of ML architectures on GPU hardware and surpassing previous approaches in hardware utilization." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024thunderkittens,\ntitle={ThunderKittens: Simple, Fast, and \\${\\textbackslash}textit\\{Adorable\\}\\$ Kernels},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0fJfVOSUra},\nnote={under review}\n}" }, "abstract": { "value": "The challenge of mapping AI architectures to GPU hardware is creating a critical bottleneck in AI progress. 
Despite substantial efforts, hand-written custom kernels fail to meet their theoretical performance thresholds, even on well-established operations like linear attention.\nThe diverse hardware capabilities of GPUs might suggest that we need a wide variety of techniques to achieve high performance. However, our work explores whether a small number of key abstractions can drastically simplify the process. We present ThunderKittens (TK), a framework for writing performant AI kernels while remaining easy to use and maintain. Our abstractions map to the three levels of the GPU hierarchy: (1) at the warp-level, we provide 16x16 matrix tiles as basic data structures and PyTorch-like parallel compute operations over tiles, (2) at the thread-block level, we provide a template for overlapping asynchronous operations across parallel warps, and (3) at the grid-level, TK can help hide the block launch and tear-down, and memory costs. We show the value of TK by providing kernels that match or outperform prior kernels for a range of AI operations. We match CuBLAS and FlashAttention-3 on GEMM and attention inference, and outperforms the strongest baselines by $10-40\\%$ on attention backwards, $9\\times$ on state space models, and $14\\times$ on linear attention." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Systems", "Kernels", "Efficiency", "Efficient Models", "IO Awareness", "GPUs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c01e94f1108ac427db7e8cd8e7d916e3feede76a.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/2b822264ca600550882aee85b36c3fdd96aa88e0.zip" }, "title": { "value": "ThunderKittens: Simple, Fast, and $\\textit{Adorable}$ Kernels" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0fXcrl35V0
Second-order finite-time and fixed-time systems for sparse recovery and dynamic sparse recovery
main
Withdraw
frame of defining penalty functions;noise;accelerated distributed generalized reweighted noise filtering consensus algorithm;accelerated distributed robust generalized reweighted denoise consensus algorithm;$l_p$-norm minimization;multi-target tracking
optimization
Yihua Huang
~Yihua_Huang3
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": { "value": "@misc{\nhuang2024secondorder,\ntitle={Second-order finite-time and fixed-time systems for sparse recovery and dynamic sparse recovery},\nauthor={Yihua Huang},\nyear={2024},\nurl={https://openreview.net/forum?id=0fXcrl35V0}\n}" }, "abstract": { "value": "In the rapidly advancing field of healthcare, efficient processing of sparse data is essential for applications such as medical imaging and personalized medicine.\nThis paper introduces innovative second-order finite-time and fixed-time systems\ntailored for sparse recovery in healthcare data, incorporating control laws into\nthe second-order derivative. We validate the stability and convergence of these\nsystems within finite and fixed times using the Lyapunov method. Furthermore,\nwe examine the tracking performance and assess both practical finite-time and\nfixed-time convergence. 
The effectiveness of our systems is highlighted through\ncomparative analyses with existing methods, with numerical experiments demonstrating superior accuracy and dynamic tracking capabilities of sparse biomedical\nsignals." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Yihua_Huang3" ] }, "authors": { "value": [ "Yihua Huang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "frame of defining penalty functions", "noise", "accelerated distributed generalized reweighted noise filtering consensus algorithm", "accelerated distributed robust generalized reweighted denoise consensus algorithm", "$l_p$-norm minimization", "multi-target tracking" ] }, "large_language_models": { "value": [ "No, not at all." ] }, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "huang|secondorder_finitetime_and_fixedtime_systems_for_sparse_recovery_and_dynamic_sparse_recovery" }, "pdf": { "value": "/pdf/2dd49ca54fc6effbed7812f7d412000db391464a.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": { "value": "No" }, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": { "value": "Yes" }, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Second-order finite-time and fixed-time systems for sparse recovery and dynamic sparse recovery" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0fhzSFsGUT
PETRA: Parallel End-to-end Training with Reversible Architectures
main
Active
Model parallelism;Delayed gradient;Reversible architectures
optimization
5;6;8;8
4;4;5;4
2;3;3;3
2;3;3;4
3;4;4;3
6.75
4.25
2.75
3
3.5
0.555556
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- How is the approximated gradient influenced by the depth of the model? I would expect the error to increase as the model gets deeper.\n\nI find the paper very interesting and am ready to increase my grade should my remarks be addressed by the authors." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well written, clear, and has helpful illustrations.\n- The algorithm seems simple, natural and intuitive.\n- While the algorithm relies on reversible layers, it can still be mixed with standard non-reversible layers, for which a standard backpropagation is performed.\n- The authors validate their algorithm with thorough experiments and analyses." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a new algorithm for training reversible models. Compared to backpropagation, it can be run on each layer in parallel and with a reduced memory cost. They show empirically the advantages of their algorithm on RevNet models for image classification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Invertible networks are currently not very used. 
This limits the direct applications of the algorithm. However I am aware that PETRA could motivate the use of such architectures.\n2. The experiments are only performed on RevNet models for image classification. As mentioned in the conclusion, it would be very nice to see experiments on more tasks and models. Indeed, as PETRA is applicable to only a subset of models (reversible models), it is frustrating to only see experiments on a single architecture.\n3. Lines 509-510: I think you meant RevNet instead of ResNet." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How does PETRA perform on large models and more complex tasks, such as pretraining language models? The experiments in the paper are weak. The scalability of PETRA cannot be verified by the current empirical results. Experiments on distributed pretraining for LLMs are necessary to validate the efficiency of PETRA, for example: experiments on the Pile dataset with varying model sizes.\n\nIs the reversible architecture necessary for PETRA? For models that integrate both reversible and non-reversible layers, how does PETRA manage memory savings and efficiency, and could these hybrid architectures affect its scalability benefits?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The PETRA paper presents a new alternative for large-scale neural network training, offering efficient parallelization by decoupling forward and backward passes, which enables stages to compute independently across devices. Utilizing reversible architectures, PETRA removes the need for activation and parameter storage, achieving up to 54.3% memory savings, making it especially valuable for training large models. It demonstrates accuracy comparable with backpropagation on datasets like CIFAR-10 and ImageNet." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "PETRA is a model-parallel training method for reversible neural networks that decouples forward and backward passes, eliminating the need for activation or parameter buffers. This enables efficient parallel computation across devices with reduced memory overhead. PETRA matches backpropagation in accuracy on datasets like CIFAR-10 and ImageNet, while also achieving notable speed and memory savings, making it a potential alternative for large-model training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Dependency on Reversible Architectures: The approach is designed specifically for reversible architectures, which may limit its application to models that can be easily adapted to this structure. Non-reversible architectures, such as standard ResNets or some types of transformers, may not benefit as fully from PETRA’s memory and efficiency gains.\nIncreased Communication Overhead: While PETRA reduces memory usage, its reversible stages require additional communication overhead during the backward pass, which could affect scalability on very large, distributed systems. 
And the PETRA propose dividing a model into some\nScalability Constraints with Non-Reversible Layers: Although PETRA performs well on reversible architectures, any non-reversible stages still require stored activations, potentially increasing memory use and complicating scalability for models that include such layers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper demonstrated that activation reconstruction can work well with out-of-sync backward weights, and the reconstructed activations can be used to update weights.\n- The paper has shown real computation and memory savings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to perform model parallel training using reversible architectures. Compared to delayed gradient, the proposed method is more memory efficient since it does not need to stash weights. It is shown that on shallower architecture the performance is slightly better than regular backprop and on deeper architecture such as ResNet-50, there is a slight drop but not significant. Overall, the work is likely to have a big impact as a way to scale up model parallel training." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It would be nice to see at what scale the method starts to break down (say when there is more and more delay in reconstruction). And show a plot on reconstruction error and final performance as a function of the number of delay steps. The model depth can be another variable to explore, aside from the few standard model architectures, perhaps sweeping a wider range of depths.\n- Algorithm 1 is a little hard to process.\n- The method relies on gradient accumulation to fully match with the It is unclear to me how gradient accumulation would have any impact when a large batch / data parallel is employed. This may not be a concern for LLMs, but for ImageNet and SSL training, many use very large batch sizes." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "How did you partition the architectures for your experiments? How many layers/blocks in each stage? Were they all the same size? And if so, would that not bring them out of sync during training such that top layers/stage were idle a lot of the time? The size of the feature maps is decreasing in the layer index, no? 
Thus, the lower layers/stages would consume more memory and compute than the top ones?\n\nPerhaps you could add some information about this in the appendix :-)" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and easy to follow. The idea of utilizing reversibility for parallelization is a nice, simple, and novel idea! Consequently, I find myself sufficiently convinced that the method works --- albeit, that the empirical evaluation is somewhat limited. The novelty and applicability of the method mostly outweighs my concerns about the evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to utilize the concept of reversible architectures to improve parallelization in DNN training. A model is split into multiple stages that are trained asynchronously; i.e. in a model parallel fashion. Leveraging reversibility, the training of the different stages is effectively decoupled. This scheme offers a linear speedup in the number of stages relative to end-to-end backprop, while reducing the memory footprint. The method is evaluated using ResNets/RevNets with three different image classification benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My only objection to this work is the limited number of experiments. They are limited to ResNet/Revnet 18/34/50 and CIFAR10, ImageNet-32, and ImageNet. It would definitely improve the paper to have at least a few more architectures included." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We show how combining reversible architectures with delayed gradient approaches for model parallelism allows to achieve computational speedups with drastic memory reduction." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024petra,\ntitle={{PETRA}: Parallel End-to-end Training with Reversible Architectures},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0fhzSFsGUT},\nnote={under review}\n}" }, "abstract": { "value": "Reversible architectures have been shown to be capable of performing on par with their non-reversible architectures, being applied in deep learning for memory savings and generative modeling. In this work, we show how reversible architectures can solve challenges in parallelizing deep model training. We introduce PETRA, a novel alternative to backpropagation for parallelizing gradient computations. PETRA facilitates effective model parallelism by enabling stages (i.e., a set of layers) to compute independently on different devices, while only needing to communicate activations and gradients between each other. By decoupling the forward and backward passes and keeping a single updated version of the parameters, the need for weight stashing is also removed. We develop a custom autograd-like training framework for PETRA, and we demonstrate its effectiveness on standard computer vision benchmarks, achieving competitive accuracies comparable to backpropagation using ResNet-18, ResNet-34, and ResNet-50 models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Model parallelism", "Delayed gradient", "Reversible architectures" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3d00e6616b7a6732f6741738818efb89f355c8e8.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f014b7dfedb72aeef80dcbbd894dd8047ec65620.zip" }, "title": { "value": "PETRA: Parallel End-to-end Training with Reversible Architectures" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0fwJMANq9P
Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models
main
Active
Heuristic Generation;Large Language Models;Combinatorial Optimization Problem
generative models
3;5;5;6
4;3;3;4
2;2;2;3
1;2;3;3
2;3;2;3
4.75
3.5
2.25
2.25
2.5
-0.229416
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I would appreciate the authors' response and clarification on the points listed under \"weaknesses\"" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Strengths:\n- The topic is interesting and of recent interest.\n- The approach (CAP and PPP) seems novel.\n- The experiments show significant gains over the baselines in deriving penalty heuristics for guided local search, as well as more moderate gains on constructive heuristics for TSP, heuristic measures for ant colony optimization, and reshaping of attention scores in neural combinatorial optimization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Hercules, an LLM-based algorithm for generating heuristics for combinatorial problems. The paper seems to extend the framework in Ye et al., 2024 with a more advanced direction generation (based on identifying core components in heuristics) as well as an LLM-based fitness calculation. The experiments show gains over the baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses:\n- I found the claim about information gain to be quite confusing.
\n\t- First, a lot of information is missing: why does the number of core components correspond to the number of heuristics (can we not have multiple core components per heuristic or the same core component in multiple heuristics)? why do we assume that the set of all possible directions can be partitioned into mutually exclusive subsets that correspond to components (can we not have the same direction for multiple core components)?\n\t- Second, it is really not clear why the information gain means we get better heuristics (as indicated in lines 284-285). If the generated components are of low quality, the directions may be of lower quality as well.\n\n- Experimental evaluation:\n\t- It is not clear what is being reported under gain: the definition is based on \"the performance of ...\" but it is not clear how performance is measured.\n\n- Writing: the writing could be improved, as a lot of information is not clearly presented. For example, there are no clear definitions for a range of terms like parent heuristics, elite heuristics, etc.\n\n- The paper does not provide significant insight into the impact of the proposed techniques (CAP and PPP) beyond the experimental results. For example, it would be interesting to show an analysis of the correlation between predicted fitness values and quality of heuristics." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could you provide details on the outer meta-heuristics (GLS, ACO etc.)? How much of the results are due to the LLM integration with CAP, PPP etc. vs. the meta-heuristics leading the search into good solutions? \nIt would be interesting to know the comparison between default GLS, ACO or even other baselines for TSP, BinPacking, MKP to position the results in this paper. As is, it is hard to evaluate the significance." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Integrating LLMs with heuristic solving is an exciting combination. \nThe paper implements an end-to-end pipeline that starts with a seed query that then mimics evolutionary computing via LLMs, yielding heuristics that can be embedded in the Local Search Meta-Heuristics for different combinatorial problems. \nThe connection with Information Gain is an excellent addition.\nFrom a practical perspective, the paper takes several details into account, such as reducing costs via LLM predictors." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the generation of heuristics for combinatorial optimization problems using LLMs. \n\nThe work continues similar work in this space that tries to mimic evolutionary computation (crossover, mutation) via LLMs. The result is an LLM-infused metaheuristic algorithm. \n\nUnlike the previous work, the paper claims 1) to address introducing more problem specificity into the prompts and 2) to speed up the process by using LLMs to predict the performance of generated heuristics to skip over evaluating them fully.
\n\nOverall, I enjoyed reading this paper, and I appreciated the work that went into building an end-to-end pipeline with several components." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As rightly noted in the paper, the idea of mimicking evolutionary computation via LLMs is not new. In fact, most (all?) crossover and mutation operators are from Ye et al. 2024. On the one hand, the experiments and the ablation study show that the proposed modifications might offer some benefit in the results, and on the other hand, they can be regarded as incremental, and it is not clear what's the main takeaway. \n\nRegarding the presentation, I found it difficult/confusing that many moving parts are introduced as large components with several acronyms (Hercules, CAP, PPP, EXEMPLAR, Cons) -- but after all, the provided pseudocode shows the overall algorithm, so I am not sure what these abstractions add to the presentation. Also, the paper claims our \"propriety\" CAP algorithm -- what does that mean? \n\nThe idea of adding more specificity to the prompts seems reasonable at a high level, but the paper overindexes on the example in Figure 1. The information gain analysis is interesting (and is borrowed from Hu et al. 2024) but at the end what happens is we select top-k core components. And that's also not uniform, we do that only for some number of iterations (denoted by \\lambda in the paper), all of which remain as more hyper-parameters to deal with.\n\nThe experiments cover TSP, CVRP, BinPacking, Multi-Knapsacks. Importantly, the starting seed function seems critical to the approach. The method generates heuristics, but the overall approaches to solving these problems are meta-heuristics (please correct me if this understanding is wrong). For TSP, we use guided local search. For BinPacking and Knapsacks we use Ant-Colony Optimization.
One might argue that the settings of the outer meta-heuristics and their performance are crucial to the overall results and not just the heuristics (generated by LLMs here). The experiments do not discuss or study any of this. \n\nAdditionally, all comparisons are with other LLM-based heuristic generation methods. Note that this is quite a costly approach (hence some effort with performance predictors to save time etc.). According to the tables in the appendix, we are consuming many minutes, up to 5 hours. Then, it is not clear to me how to fairly evaluate these results. How do the same GLS and ACO without the advanced heuristics found by the LLM but with standard heuristics perform given the same amount of time? (Btw, does this time include LLM queries or only running the heuristics after the LLM generates them against the instances?) \n\nIt might not be surprising that the choice of LLM quite affects the results (Table 1; Llama vs GPT-4o). But then it makes one wonder how much of the value comes from the many moving components proposed here vs. plain and simple, the underlying LLM." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The differences are small in some cases and it would be great if the authors could\nprovide error bounds or confidence intervals for the empirical results.\n\nWhy is KGLS a reasonable base heuristic?"
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed framework is interesting and seems to work well in practice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a framework to use LLMs to generate heuristics for solving\noptimization problems. The authors describe their framework and evaluate it\nempirically, comparing to other approaches in the literature." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The choice of KGLS as seed heuristics should be justified as it was not designed\nfor the general TSP. Why not LKH? This should also be considered in the\nempirical evaluation; in particular to answer the question of whether KGLS is a\nreasonable heuristic to start with in this case (improving over a weak heuristic\nis easier than improving over a strong heuristic).\n\nFigure 5 has no axis labels." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "This submission does not have ethics concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please reply to my comments in \"Weaknesses\"." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is laudable for its well-structured and logical presentation, providing a comprehensive understanding of the research topic.\n\n2. The article is praiseworthy for its extensive experimental data and significant findings. The authors have selected a number of baselines for comparative experiments on different benchmarks.\n\n3. The supplement provided in this article is adequate. It explains in detail for the reader what is not expanded in detail in the paper, including specific experimental data, hyperparameter settings, Critical Difference Analysis, etc." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the application of LLM in autonomously generating heuristics to solve COPs and proposes a novel algorithm named Hercules to address the two main challenges of existing approaches.\n\nHercules utilizes the Core Abstraction Prompting (CAP) method to abstract core components from elite heuristics and incorporate them as prior knowledge in prompts, thereby reducing the specificity of search directions. This paper further introduces Hercules-P, an efficient variant of Hercules that integrates CAP with the novel Performance Prediction Prompting (PPP) method. PPP leverages LLMs to predict the fitness values of newly derived heuristics based on their semantic similarity to previously evaluated ones, significantly reducing the required computing resources." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In the literature of TSP and CVRP, it is known that those conventional heuristic algorithms, such as LKH [1] and EAX [2], exhibit robust performance. 
It appears, however, that this submission does not address LKH and EAX, nor does it provide a comparative analysis of the proposed algorithm against these established methods.\n\n2. In lines 85-100 of the Introduction section, the authors describe two challenges for LLM-based HG methods, mentioning in the second challenge that these methods introduce numerous linear operations and conditional branches that make the GPU less efficient for these algorithms. In lines 113-128, the authors claim to have proposed Hercules-P in order to better address the second challenge, but I don't seem to have read in the manuscript how Hercules-P reduces linear operations and conditional branches, making GPUs more efficient in processing these algorithms. May I ask if the authors have solved this challenge? If not, these statements are inappropriate.\n\n3. Does the appearance of CVRP in Section 4.4 stand for Capacitated Vehicle Routing Problem? The authors do not explain what CVRP stands for in the body of the manuscript, and the only explanation appears in the code comments in the Appendix section (line 1337). This is not clear to the reader and hinders understanding of the manuscript.\n\n4. In Section 2.3, the authors mention two challenges for NCO solvers: improving generalisation capabilities and large-scale COPs performance. In Table 5, for LEHD, the performance improvement of either Hercules or Hercules-P gradually decreases as the problem size of TSP or CVRP increases. Does this mean that Hercules also fails to address the challenges faced by NCO solvers? Can Hercules still provide performance gains when the problem size is larger? Further discussion is requested from the authors.\n\n\n## References\n[1] Keld Helsgaun. General k-opt submoves for the Lin-Kernighan TSP heuristic. Mathematical Programming Computation 1(2-3): 119-163 (2009)\n\n[2] Yuichi Nagata, Shigenobu Kobayashi.
A Powerful Genetic Algorithm Using Edge Assembly Crossover for the Traveling Salesman Problem. INFORMS Journal on Computing 25(2): 346-363 (2013)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024efficient,\ntitle={Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0fwJMANq9P},\nnote={under review}\n}" }, "abstract": { "value": "Recent studies exploited Large Language Models (LLMs) to autonomously generate heuristics for solving Combinatorial Optimization Problems (COPs), by prompting LLMs to first provide search directions and then derive heuristics accordingly. However, the absence of task-specific knowledge in prompts often leads LLMs to provide unspecific search directions, obstructing the derivation of well-performing heuristics. Moreover, evaluating the derived heuristics remains resource-intensive, especially for those semantically equivalent ones, often requiring unnecessary resource expenditure. To enable LLMs to provide specific search directions, we propose the Hercules algorithm, which leverages our designed Core Abstraction Prompting (CAP) method to abstract the core components from elite heuristics and incorporate them as prior knowledge in prompts. We theoretically prove the effectiveness of CAP in reducing unspecificity and provide empirical results in this work. To reduce the required computing resources for evaluating the derived heuristics, we propose few-shot Performance Prediction Prompting (PPP), a first-of-its-kind method for the Heuristic Generation (HG) task. PPP leverages LLMs to predict the fitness values of newly derived heuristics by analyzing their semantic similarity to previously evaluated ones. 
We further develop two tailored mechanisms for PPP to enhance predictive accuracy and determine unreliable predictions, respectively. The use of PPP makes Hercules more resource-efficient and we name this variant Hercules-P. Extensive experiments across various HG tasks, COPs, and LLMs demonstrate that Hercules outperforms the state-of-the-art LLM-based HG algorithms, while Hercules-P excels at minimizing computing resources. In addition, we illustrate the effectiveness of CAP, PPP, and the other proposed mechanisms by conducting relevant ablation studies." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Heuristic Generation", "Large Language Models", "Combinatorial Optimization Problem" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ac3fa83ee38011cab5c926ee7d3021d8e5560d77.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0gGPVbRqOE
Splitted Wavelet Differential Inclusion for neural signal processing
main
Active
Wavelet smoothing;differential inclusion;weak signal;signal reconstruction;Parkinson's disease;burst activity
applications to neuroscience & cognitive science
3;3;5;6
3;4;2;2
2;2;2;4
2;2;2;3
1;2;3;2
4.25
2.75
2.5
2.25
2
-0.870388
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How does the SWDI method compare to deep learning-based approaches or other adaptive wavelet techniques in terms of accuracy and computational efficiency?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Originality\nThe proposed SWDI method creatively combines wavelet analysis with differential inclusion to address limitations in current shrinkage methods by focusing on both strong and weak signals, thus enhancing the detection of signal features important for clinical applications.\n\nQuality\nThe paper is well-founded with rigorous theoretical analysis that supports the authors' claims.\n\nClarity\nOverall, the paper is clearly written, although some improvements can be made (see 'Weaknesses' section)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel method, the Splitted Wavelet Differential Inclusion (SWDI), for enhancing neural signal analysis, particularly for applications related to Parkinson’s disease. SWDI introduces a dual-parameter approach that estimates both the strong and whole signals simultaneously, addressing limitations of previous wavelet shrinkage techniques. 
The authors demonstrate that their closed-form solution path improves estimation accuracy for both signal components. This work contributes to the field of neural signal processing by offering a robust framework for analyzing complex neural data, which could have significant clinical applications in neurodegenerative disease research." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The clarity of presentation of this paper can be improved. There are some English mistakes, subject-verb disagreement, missing conjunction 'and', etc. These should be carefully addressed prior to the publication of this paper.\n\nExamples:\n\nline 052: Add 'and' before 'non-parametric shrinkage'.\nline 053: Change 'contains in the signal' to 'contained in the signal'.\nline 056: Change 'composed by' to 'composed of'.\nline 073: Change 'On the other' to 'On the other hand'.\nline 107: Add 'and' before 'non-parametric shrinkage'.\nline 114: Change 'have' to 'has'.\nline 123: Add 'and' before 'the non-burst component'.\nline 373: Change 'includes' to 'include'.\nline 377: Change 'the same ... with' to 'the same ... as'.\nline 382: Change 'as a contrast' to 'in contrast'.\nline 386: Change 'compare to' to 'compared to'.\nline 402: Add 'and' before 'then increases'.\nline 500: Insert 'be' between 'may' and 'due to'.\nline 512: Change 'Fig. 3,. 4' to 'Figs. 3 and 4'." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Medical data from Parkinson patients is used." 
}, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses and reply to those." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* It is a nice idea to go beyond considering only large wavelet coefficients, i.e., the strong part of the signal.\n* The author substantiated her/his novel approach with a theoretical foundation (see, e.g., Theorem 4.6)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper improves on the well-known wavelet shrinkage approach, introducing a novel method coined \"Splitted Wavelet Differential Inclusion (SWDI)\". As opposed to wavelet shrinkage, it also takes weak components of the signal to be analyzed into consideration. The effectiveness of SWDI is shown by numerical experiments on data from Parkinson patients." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* At present, we have powerful methods based on learning. It is not clear, and not even discussed, why those are not taken into consideration. There might be good reasons, but this requires a careful discussion.\n* The numerical experiments only compare to other model-based methods, mainly wavelet-based approaches. Again, in particular, learning-based methods (DNNs, etc.) need to be used for comparison."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Clarification is needed regarding the problem setting, statistical assumptions, and the choice of baseline methods (see weaknesses above)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The considered problem is central and important.\n- The application to neural signals, specifically in the context of Parkinson’s disease, is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers the longstanding problem of recovering a temporal univariate signal from its noisy observations, which is a fundamental problem in signal processing. The authors approach this challenge using wavelet analysis, where they propose to partition the signal into strong and weak components based on the magnitudes of the wavelet coefficients. The paper presents a new method, termed Splitted Wavelet Differential Inclusion (SWDI), which is designed to recover the strong component by employing a differential inclusion framework. It is shown theoretically and empirically in a simulation that the proposed method recovers the strong signal more accurately than other methods based on wavelet shrinkage.
Additionally, the method is demonstrated in application to neural signals, where the goal is to identify medication effects on Parkinson’s disease." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The **presentation** of the entire paper, and particularly of the technical aspects, is challenging to follow, which hinders comprehension of the core ideas and derivations.\n- Due to the presentation style, it is difficult to clearly appreciate the novelty of the paper. The mix of formal and informal statements complicates rigorous validation.\n- The **problem setting** lacks clarity, particularly its statistical model. While the strong signal is defined based on the noise standard deviation $\\sigma$, the method’s dependence on the signal-to-noise ratio (SNR) or $\\sigma$ is unclear. Additionally, it is not specified whether the derivations and results assume Gaussian noise or if the true signal $f$ is deterministic.\n- **Numerical results** are limited. Figures are of low resolution with small fonts, making them hard to interpret, especially Figure 1. Expanding the numerical experiments to encompass a broader range of cases and a more comprehensive comparison with alternative shrinkage methods would enhance the paper. The justification for the selected baselines is unclear, considering the prevalence of other methods addressing the same problem." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "It would be beneficial for the authors to discuss the tradeoffs between wavelet techniques and more recent methods, such as deep learning-based approaches, in this specific application. A comparison or discussion of how their method performs relative to recent non-wavelet techniques would provide valuable context for evaluating the method's effectiveness. The distinction between weak signals and noise could be clarified further. I suggest that the authors provide more detailed criteria for how they distinguish between the two, and discuss whether there are alternative approaches. The \"differential\" aspect of the proposed method requires clearer explanation. A step-by-step description of how the differential inclusion is applied, ideally with a simple example, would greatly improve the clarity of this concept and make it more accessible to readers." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper introduces the Splitted Wavelet Differential Inclusion (SWDI) method, which improves the estimation of both strong and weak neural signals by utilizing an ℓ2 splitting mechanism. It demonstrates better accuracy than traditional wavelet shrinkage, particularly in Parkinson's disease signal analysis, capturing non-burst activity alongside stronger signal components." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the Splitted Wavelet Differential Inclusion (SWDI) method for neural signal processing, achieving better strong and weak signal estimation in Parkinson's disease analysis."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "To better demonstrate the proposed method's practicality, I suggest including comparisons with non-wavelet methods, particularly those that have seen recent success in this field. This could provide a clearer perspective on how the method performs in a wider range of real-world applications. Specific examples of non-wavelet techniques, such as deep learning-based methods, would strengthen the evaluation. While wavelet techniques have been widely used in the past, it would be helpful if the authors could justify their choice of wavelets in this context and explain how their approach advances the state-of-the-art. Additionally, comparisons with more recent works in the field would help to clarify the method's relevance and novelty. The paper's content seems more suitable for signal processing journals or conferences, such as TSP, INDIN, or ICASSP." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a new Wavelet smoothing method to enhance the signal reconstruction in neuroscience applications." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024splitted,\ntitle={Splitted Wavelet Differential Inclusion for neural signal processing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0gGPVbRqOE},\nnote={under review}\n}" }, "abstract": { "value": "Wavelet shrinkage is a powerful tool in neural signal processing. It has been applied to various types of neural signals, such as non-invasive signals and extracellular recordings. For example, in Parkinson's disease (PD), $\\beta$ burst activities in local field potentials (LFP) signals indicated pathological information, which corresponds to \\emph{strong signal} with higher wavelet coefficients. 
However, it has been found that there also exists \\emph{weak signal} that should not be ignored. This weak signal refers to the set of small coefficients, which corresponds to the non-burst/tonic activity in PD. While it lacks the interpretability of the strong signal, neglecting it may result in the omission of movement-related information during signal reconstruction. However, most existing methods mainly focused on strong signals, while ignoring weak signals. In this paper, we propose \\emph{Splitted Wavelet Differential Inclusion}, which is provable to achieve better estimation of both the strong signal and the whole signal. Equipped with an $\\ell_2$ splitting mechanism, we derive the solution path of a couple of parameters in a newly proposed differential inclusion, of which the sparse one can remove bias in estimating the strong signal and the dense parameter can additionally capture the weak signal with the $\\ell_2$ shrinkage. The utility of our method is demonstrated by the improved accuracy in a numerical experiment and additional findings of tonic activity in PD." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Wavelet smoothing", "differential inclusion", "weak signal", "signal reconstruction", "Parkinson's disease", "burst activity" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/04306f7c92f670b242d9177a8b5c6bfa7f07858b.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Splitted Wavelet Differential Inclusion for neural signal processing" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0gOQeSHNX1
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects
main
Active
Abstraction and Reasoning Corpus;Abstract Visual Reasoning;Transformers;Vision Transformers
generative models
3;5;5;8
4;4;5;4
3;2;3;3
2;3;2;3
2;3;3;3
5.25
4.25
2.75
2.5
2.75
-0.080845
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Figure 7, ViTARC-VT has very large variance in terms of performance. Any reason for it? Also, what is the key technique leading to the significant performance improvement from ViT-Vanilla to ViTARC-VT? BorderTokens do not look to be important from this figure. Is this because of the 2D positional encodings in ViTARC-VT, instead of 1D positional encodings in ViT-Vanilla?\n2. Equation (12) is not quite clear. I did not find how to calculate the values of $r_{left}$ and $r_{right}$.\n3. It is unclear how to tune the hyper-parameters $\\alpha$ and $\\beta$ in Equation (10), and no ablation studies are provided." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The Abstract Visual Reasoning (AVR) task is interesting and important to study because it requires strong reasoning capability from ViTs. The paper also has very interesting findings, highlighting the importance of positional encodings in solving pure vision-based visual reasoning tasks.\n2. This work provides very detailed model improvements they have tried to improve the performance, from ViT-Vanilla to ViTARC-VT and ViTARC. This is very useful for the community to reproduce the experiments and improve further.\n3.
The final model ViTARC achieves very strong performance in most of the tasks, which is also a significant improvement over ViT-Vanilla." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies vision transformers on abstract visual reasoning ARC tasks, which do not include any text or background knowledge and focus purely on visual abstraction and pattern recognition. However, directly training a ViT with one million examples per task fails on most ARC tasks. Then, several techniques are proposed to improve model performance, including improved 2D positional encodings and object-based positional encoding. This work highlights the importance of positional information for utilizing Vision Transformers in visual reasoning tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The training/evaluation protocol is not clearly defined. The paper does not clearly show the generalization ability on unseen tasks. All the tasks they use for evaluation have some training examples during training. It would be very interesting to use some tasks purely for evaluation which are not seen during training.\n2. Some of the key techniques used in this work are not new, like 2D (Relative) Positional Encoding, which has been discussed in the original ViT/Swin Transformer papers and plays a key role for performance improvement in this work. Though some new techniques are introduced in this work, like Positional Encoding Mixer (PEmixer) and Object-based Positional Encoding (OPE), the overall contribution and novelty are marginal."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1 - Did you try using RoPE embeddings instead of RPE, in the second point of Section 5? I am curious about the differences.\n\n2 - About RPE: the paper mentions there is an encoding for left and above, and another one for right and below. How are \"above and to the right\" patches encoded? Why not using an explicit \"above left\", \"above right\", \"bottom left\", \"bottom right\"? It seems like the approach is re-using the 1D sequential approach without taking into account that we are modeling 2D inputs.\n\n3 - How do your architectural changes influence other vision tasks? E.g. classification, detection, VQA, etc. The SOTA ViT models are nowadays used for many tasks (at least as part of many tasks as vision encoders). It would be great if the changes proposed in the paper did not hurt performance in other tasks.\n\n4 - Related to previous question, did you try starting from a pre-trained ViT?\n\n5 - Did you try training jointly for all tasks? Adding a task ID as context for example.\n\n6 - Did you notice any overfitting or underfitting on the final models? Any scaling properties with the data? On the final models, is 1M samples necessary, of 100k would be enough? Would the models perform better with even more samples?\n\n7 - When generating pixels, during evaluation does the pixel value have to be exact? 
Is the prediction done as a classification in the RGB [256 x 256 x 256] space? Or is it a regression in the [0-1](x3) range? If it is a regression, how is the correctness evaluated?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1 - The paper addresses an interesting question, which is: if the tasks in ARC are visual tasks, how can se use the current vision tools to deal with them? \n\n2 - The reasoning behind every contribution in the paper is well explained. The paper is easy to follow. \n\n3 - Related to the previous point, I particularly like the analysis in Figure 6. It helps understand what the model is (not) paying attention to.\n\n4 - The results in the paper show a clear improvement with respect to the original ViT vanilla baseline, meaning the proposed contributions, overall, are helpful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the ARC benchmark from a vision perspective, and answers the question of what changes are required in current SOTA vision architectures to deal with the tasks in ARC. \n\nThe paper suggests that the bad results of vanilla ViTs on these tasks are due to their poor encoding of spatial and positional information, and proposes approaches to deal with this limitation.\n\nFinally, the paper shows results that indicate that the proposed changes were useful for the performance on the ARC benchmark." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has some weaknesses that I believe can be addressed, but also should be addressed.\n\n**1 - Unclear that this is vision modeling**.\n\nThe main argument of the paper is that vision transformers should be improved to work on ARC. 
But the proposed changes make the model not be a vision model anymore. While the paper mentions that \"Transformer-based LLM approaches convert the images into strings, which does not fully capture all relevant structural information”, this is not too different from what this paper does. \n - By construction, vanilla Transformers model inputs as sequences, but there are some aspects that make them more \"vision\" specific. Some of them are not included in the original ViT (e.g. local attention), and the others (e.g. patching) are removed in this paper (see next points). A non-Transformer-based architecture (e.g. a U-Net in a diffusion model) would make the connection to vision more clear. (I'm not suggesting such a model should be used, just exemplifying the point).\n - The pixels are encoded in a 1x1 grid, effectively making it a sequence.\n - There is an end-of-sequence token, just as there would be if the task was modeled as a sequence. \n - Even object positional encodings are added, abstracting away the low-level vision from the tasks. \n\nAn emphasis on vision is given in the paper more than once, e.g. \"However, in vision tasks, especially those requiring detailed visual reasoning, spatial relationships often carry as much importance as, if not more than, the content of the tokens\". I believe, however, that a lot of the learned lessons are not about vision, but about structured predictions. In general vision tasks, for example, the content of tokens *is* more important than the spatial relationships.\n\n\n**2 - Unclear significance of contributions**.\n\nOverall, the paper reads as a sequence of contributions the authors tried, one after another, and building on the previous one, without any global understanding of the problems. I believe the process to get to a solution should not be part of the paper. The paper should just present the final results and justify them. 
This makes the presentation confusing, for example when showing results in section 4, before going back to explaining some more technical contributions in section 5. \n\nBut the main problem is not about presentation. I believe there is some disconnect between the different contributions. Some of the latter ones should make the former ones unnecessary, and these are not ablated properly. The following are some examples:\n\n - First, learnable embeddings are removed in favor of positional embeddings. But then a PEmixer is necessary to learn position-specific embeddings. And also, RPE is required because APE encodes spatial changes very slowly. All of this is confusing. I believe the paper should directly start by explaining the final encoding and its reasoning, not present two different encoding techniques before the final one and show how they are not ideal.\n - The paper presents quantitative results showing that PEmixer and OPE, on top of ViTARC-VT actually *decrease* the performance of the model. \n - The padding is added at the beginning, but then it results in problems that need to be corrected. I address this in more detail in the next weakness.\n\n**3 - It is unclear that all the padding contributions are necessary**.\n\nWhy are padding tokens necessary? The original formulation (sequential padding at the end), where the padding tokens are ignored by the attention should be the correct one (computationally it also makes more sense). The per-row padding (without being attended to) is effectively the same as the per-sequence padding, so I am not sure why it is being mentioned. \n\nI understand that there is a problem about the output not understanding boundaries corrected. But the per-row EOS tokens should be enough to address this issue. For the model, it should be equally hard to predict an \"end of row\" token than it is to predict the current _<arc endxgrid>_ token. Would it be possible to ablate this? This is another example of weakness #2. 
The final solution should not include previous iterations of the solution, if these are not necessary. No (attended-to) padding would imply: \n - More efficient forward and backward passes.\n - No need for three different kinds of end-of-grid tokens. Only a single one would be enough.\n\n**4 - No baselines or comparisons**.\n\nThere are no baselines or comparisons to other approaches, only to their own vanilla ViT. Also, I could not find the paper's performance on the private set of the ARC benchmark, so it is hard to compare to SOTA approaches. Especially interesting comparisons would be to approaches that model ARC as a sequence using an LLM, as I believe the approaches are not very different (see weakness #1).\n\n---\n\nOther minor weaknesses:\n\n- Figure 8 in Appendix A seems to motivate the first main technical contributions of the paper. It should not be in the appendix.\n\n- It is unclear why the PEMixer is helping. If every example in the task has different coordinates that are important, I don't understand why it would learn a global (across all samples) weighting. What would be the benefit of giving one position more weight than another one, if every example has different important positions?\n\n- The paper could contain more clarifications and ablations. See \"Questions\" section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.
Can you show that the model works on other complex reasoning tasks outside of ARC as mentioned in point 1 above?\n2. Does the model work without strong priors, e.g., on input and output grid shape?\n3. Can you add in more baselines trained on the same amount of data?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I appreciate that the authors explore the limits of a data-driven approach to ARC, as well as propose potential inductive biases to encode into a reasoning model for ARC. General priors for reasoning tasks are indeed important. The quantitative results compared to a naive ViT are promising for ARC." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors focus on a data-driven model to solve ARC. They first establish that vanilla ViTs fail on ARC despite being trained on a million examples. They then show that a 2D visual representation with ViT improves performance, and that object-based positional information further yields significant performance gains on ARC." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Many of the architectural designs in the proposed model are made for solving ARC specifically. I believe ARC is a great intermediate proxy task for complex reasoning, but should not be an end goal in and of itself. With enough inductive biases, I believe that solving ARC with a million examples is reasonable, but is not particularly enlightening for the community. For example, 2D padding with <arc_pad> tokens and border tokens <arc endxgrid> that define grid conditions, etc, are very much defined for ARC itself. Would like to see if this model generalizes to other reasoning tasks, for example Raven's progressive matrices, odd one out challenges, etc. \n2. 
In addition, I'm not convinced that these are indeed the best inductive biases. For example, I believe that by using a \"2D template\" indicated by padding, border, and new-line tokens, the method is endowed with a priori knowledge of what the final grid shape should look like (and also, what the initial grid shape looks like). One core challenge of ARC is precisely that it needs to infer what the output dimensions are (how the shape transforms). Giving the model knowledge that one token should represent one block in ARC is a strong prior to be injecting. \n3. The object-based positional encodings based on OpenCV's contour detections will struggle on more complex or different shapes. This “objectness” should be captured by the visual tokens implicitly. \n4. No other baselines than ViT are explored, though there have been many works proposed for ARC & related reasoning tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What do the authors think about adding a caveat in the paper about W2 above? (i.e. the fact that this makes zero progress on the ARC challenge, and that the benefits of the proposed methods still need to be demonstrated on a meaningful task)\n\nI can't solve Task A (6e02f1e3) in Fig. 2. I guess other readers would benefit from an explanation? 
(I'm afraid this sort of makes a Vanilla ViT superhuman on this task!)" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper studies a popular architecture (ViTs), so it should be of broad interest.\n\n- The paper shows a domain where ViTs fail dramatically. The task itself is not interesting (ARC in a new, supervised, large-data setting), but the finding is interesting because it points at intrinsic deficiencies of ViTs.\n\n- Several modifications are described that improve the performance. It's kind of the opposite of an ablation study, but it does the job of demonstrating that novel methods imbue ViTs with novel capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the ARC benchmark, assessing the potential of ViTs for this task when trained with supervision on a single task at a time with plenty of (procedurally-generated) data (which is different from the original few-shot setting).\n\nThe paper first shows that standard ViTs fail. It then shows several modifications to the architecture that make them perform much better." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I don't see significant flaws in this paper. Potential weaknesses:\n\n- (W1) The proposed modifications (in particular the visual tokens) are quite straightforward. This can be seen as a good thing. I am actually surprised that the absolute PE and padding (to handle images of different sizes or aspect ratios) have not been used before. Are the authors certain this hasn't been described in the existing literature?\n\n- (W2) There is a valid argument that this paper solves a task that is extremely uninteresting in itself.
It has no analogue in the real world and it completely defeats the purpose of the original ARC challenge (because of the supervised, large-data setting, focusing on a single task at a time). This paper makes absolutely no progress towards the goal of the ARC challenge. I still think that the findings are interesting for the reasons mentioned in my \"Summary\" above, i.e. that the proposed improvements give ViTs new abilities. The main issue now is that the contributions of this paper will only have value if/when the abilities are demonstrated to be useful for another task/setting." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Getting vision transformers to \"reason\" on the Abstraction and Reasoning Corpus (ARC)" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024tackling,\ntitle={Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0gOQeSHNX1},\nnote={under review}\n}" }, "abstract": { "value": "The Abstraction and Reasoning Corpus (ARC) is a popular benchmark focused on *visual reasoning* in the evaluation of Artificial Intelligence systems. In its original framing, an ARC task requires solving a program synthesis problem over small 2D images using a few input-output training pairs. In this work, we adopt the recently popular *data-driven* approach to the ARC and ask whether a Vision Transformer (ViT) can learn the implicit mapping, from input image to output image, that underlies the task. We show that a ViT—otherwise a state-of-the-art model for images—fails dramatically on most ARC tasks even when trained on one million examples per task. 
This points to an inherent representational deficiency of the ViT architecture that makes it incapable of uncovering the simple structured mappings underlying the ARC tasks. Building on these insights, we propose ViTARC, a ViT-style architecture that unlocks some of the visual reasoning capabilities required by the ARC. Specifically, we use a pixel-level input representation, design a spatially-aware tokenization scheme, and introduce a novel object-based positional encoding that leverages automatic segmentation, among other enhancements. Our task-specific ViTARC models achieve a test solve rate close to 100% on more than half of the 400 public ARC tasks strictly through supervised learning from input-output grids. This calls attention to the importance of imbuing the powerful (Vision) Transformer with the correct inductive biases for abstract visual reasoning that are critical even when the training data is plentiful and the mapping is noise-free. Hence, ViTARC provides a strong foundation for future research in visual reasoning using transformer-based architectures." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Abstraction and Reasoning Corpus", "Abstract Visual Reasoning", "Transformers", "Vision Transformers" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/01c0fe21bad2c80ddf91ed7cd127e3d5d8f0deb7.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/392b7455561e15c58c5053d1ab3362104851a621.zip" }, "title": { "value": "Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0gVatTOgEv
Glider: Global and Local Instruction-Driven Expert Router
main
Active
Parameter Efficient Fine-Tuning;LoRA;Cross-Task Generalization
transfer learning, meta learning, and lifelong learning
3;3;5;5
3;4;3;3
2;2;3;2
2;1;2;2
3;2;3;1
4
3.25
2.25
1.75
2.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Innovation and contribution:\n1. The core of this paper combines the scores from the Global Router and the Local Router, then performs a top-k operation based on the combined scores. a. Line 85: The descriptions of the algorithm in the preceding and following contexts clearly conflict, making the algorithm description contradictory. b. Relying on GPT-4-turbo to gather information from the full text means the process heavily depends on GPT-4's capabilities. Furthermore, as seen in line 346 and the subsequent ablation experiments, the value of $\\alpha$ is significant, indicating a high weight assigned to the Global score.\n2. The paper extensively references the work of Muqeeth but fails to explain the rationale behind these references, raising concerns about the originality of the article.\n3. Relying on GPT-4-turbo for global semantic information raises some questions about the overall workload and originality of the research. From the formula in line 346 and the subsequent ablation experiments, it appears that the global score heavily influences the final affinity score, making the expert selection largely dependent on GPT-4-turbo. Additionally, the potential latency issues introduced by using GPT-4-turbo during inference are not addressed, which is a significant concern.
Overall, placing so much emphasis on GPT in the paper could weaken its persuasive impact.\n\nFormat and Typo Issues:\n1. 60) Formatting issue: The left parenthesis is missing.\n2. 67-68) The sentences are semantically repetitive.\n3. 140) What does the arrow ($\\rightarrow$) signify here?\n4. 216) This figure is hard to interpret; it’s unclear what it is trying to convey.\n5. 279) The explanation of $\\Psi$ is missing.\n6. 289-290) The notation used here is confusing.\n7. 301) There is no explanation for the newly introduced $\\sigma$.\n8. 319) Suddenly reintroducing the variable $t$ from line 289 is confusing.\n9. 323) Why isn’t the normalization process presented in a formula? Section 4.2 is poorly written, relying solely on descriptive language and lacking organization.\n10. 354) The T0 held-in dataset is omitted." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The insights of the paper are to be praised.\n2. Very interesting topic and focus on the router optimization.\n3. Uses a big model and a small model together to solve the problem.\n4. Experimental results are good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on addressing the trade-off between performance improvement and generalization ability in expert modules. It assumes that this issue arises from the lack of global semantic context in token-level routing strategies; it then seeks to resolve this problem by combining global semantics with token-level information through the use of both global and local expert routers during routing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Writing and Presentation: The paper could benefit from some polishing.
There are a number of typos and semantic issues, and the overall formatting could be improved for better readability. Additionally, some figures are a bit challenging to interpret. For instance, Figure 1 is only referenced in Appendix B but appears as the first figure in the Related Works section, which can disrupt the flow and clarity for the reader.\n\n2. Clarity of Background and Concepts: The background and explanation of key concepts in the paper could be clearer. While there are many references to ideas and works by Yadaav, the connections and explanations aren’t sufficiently detailed, which may leave readers a bit confused. In my initial reading, I found myself questioning whether the discussion pertained to a Mixture of Experts (MoE) scenario or Model Merging.\n\n3. Logical Flow and Mathematical Details: The paper seems to lack some logical coherence, especially regarding mathematical descriptions and derivations. There are no thorough mathematical proofs provided, and the modeling of the scenario feels a bit scattered across different sections. Some variable explanations are incomplete, which can be frustrating. Moreover, discussions around problems and solutions could be more precise. For instance, when mentioning that routing strategy issues stem from a lack of global semantics, it would be helpful to have more rigorous mathematical reasoning or experimental evidence to support this claim.\n\n4. Inconsistencies: There are some notable inconsistencies in the paper. For example, in line 85, it mentions that the Global Router selects the top-2 experts based on global semantics. However, the description of the Global Router algorithm starting from line 329 doesn’t reference any top-2 (or top-k) selection process. The top-k expert selection is only brought up later around line 347, based on the final score calculated from the weighted sum of global and local affinity scores. Clarifying these points would enhance the overall coherence of the paper.\n\n5. 
The experimental design could use some improvement. The main experiments lack detailed explanations, and Figure 3 is somewhat unclear. Many of the experimental configurations seem to mirror those from Muqeeth's work without providing enough context, which might raise questions about the originality of this study. Additionally, the ablation experiments focus on relatively trivial variables, while more significant factors—such as the differences between excluding and including the global semantics generated by GPT-4-turbo—are overlooked. Addressing these points could enhance the depth and rigor of the research." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Could you explain why T5 was chosen as the primary architecture for evaluation? Have you conducted any preliminary experiments with decoder-only models (e.g. LLama3-8B etc.)?\n* Could you provide more details about the evaluation metrics used in Table 1? What exactly do these numbers represent?\n* Since this design integrates routing into the LoRA inference procedure, could you provide a detailed analysis of the additional computational overhead? How much will the inference latency be affected by such a design?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper leverages LLMs to generate semantic task descriptions, providing global context for routing decisions, which is a unique approach not explored in previous routing methods\n* The paper addresses well the limitations of current approaches (focusing on either held-in or held-out tasks) and provides a novel solution integrating both." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents GLIDER, a method that combines global semantic and local token-level routing for LLMs. The key innovation is using an LLM to generate task instructions that guide expert model selection globally, while using token-level routing locally. Tested on T5-based models with T0 and FLAN benchmarks, GLIDER improves performance on held-in tasks by 6.6% while maintaining strong generalization on held-out tasks. This shows that incorporating LLM-driven semantic understanding helps achieve better routing compared to pure token-level approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The experiments focus solely on T5, an older encoder-decoder architecture. The effectiveness of GLIDER on modern decoder-only models (like GPT family, LLaMA, etc.) remains unproven, which is crucial given these are now the mainstream architectures for LLMs.\n* Table 1 lacks clarity on evaluation metrics and methodological details. Without clear metric definitions and evaluation protocols, it's difficult to fully assess and compare the reported improvements.\n* The routing design will bring extra computational overhead; how will GLIDER's inference latency change compared with normal LoRA decoding methods (vLLM's LoRA inference module)?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "# incremental addition of new experts\n\nOne question that is not addressed is the incremental addition of new expert/dataset pairs, and what are the consequences (on normalization, etc.). It seems it should be trivial to do this incrementally, but a discussion in the appendix (and an example use-case when, say, a previously held-out task becomes held-in so that the router now uses global instead of local weights) would certainly improve the quality of the paper.\n\n# limit of global router\n\nthe global router is constructed using a sample of 3 questions, which may be ok for very simple and low-diversity datasets, but not for more complex and diverse tasks. A more in-depth study of the global router across a wider range of datasets with quantitative assessment of global router policies seems necessary" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "this work introduces a routing mechanism to trade off between local and global experts, to increase performance on held in tasks, without compromising capability to handle held out tasks.\n\nthe goal is clear and the approach is simple (as it is heuristic in nature)."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "the paper studies ensembles of LLM models (model MoErging), proposing a technique for selecting experts to route tokens to at global (for selection of experts for in-profile tasks) and local levels (to have more flexibility to handle out of distribution tasks)\n\nthe paper is incremental in nature (with small differences with respect to Phatgoose, from architectural design choice, to experimental settings and way too many details, to the point it feels it should be named Phatgoose++ instead of Glider)\n\nalthough appealing, the approach is heuristic in nature: given this, one would have expected a significantly larger experimental part, including a wider range of tasks (and possible comparison points beyond those adopted in Phatgoose), and a statistically relevant comparison of improvements" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# evaluation\nas this work has no theoretical basis, one would have expected a significantly larger experimental part to convince the reader of the generality of the approach on \n- a significantly wider range of tasks (and possible comparison points beyond those adopted in Phatgoose),\n- further exhibiting a statistically relevant comparison of improvements\n\nthis is not the case, so the paper execution is far from being convincing.\n\nAdditionally, while the main advantage of this work is to increase performance of held-in tasks, authors additionally point out advantages that are too thin to be worth noting; and they do so in a disturbingly biased manner. For instance, authors claim 0.9% over held out tasks over Phatgoose (in bold), but the 0.9% (actually 0.88%) is the maximum observed across 5 held out datasets in Tab 3 (for the other 4 it is 0.39%, 0.16%, -0.53% and 0.3%) \n\n# empirical evidence of global router\n\nthe work is motivated by finding *semantic* resemblance across tasks.
however, the approach on held out tasks seems to leverage *syntactic* resemblance. Fig 1 shows held out tasks to systematically select two experts (one of which seems to be further common to a couple of tasks). Yet appendix B just shows the tasks to be syntactically similar: i.e., the Q&A pair has a cloze format, which is rather typical of simple benchmarks. As such, I am little reassured that the performance generalization will be maintained on more complex tasks, and this work is far from fully elucidating the robustness of the proposed method.\n\nQuantitative assessment of global experts over a wider range of diverse tasks (say a few tens of datasets per type of answer) would have allowed one to get true insights about the nature of global experts (e.g., whether the identified expert triggers \"cloze\" type answers, irrespective of the semantics of the question)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My first question relates to the core motivation of improving held-in performance and the necessity of doing so given that for any held-in task, we always have access to the expert specialized to that task. Could the authors explain scenarios in which using GLIDER is preferable to simply selecting the specialized expert for a known held-in task?
\n\nMy second question is to what extent GLIDER is more of a novel component for local routing schemes that aims to encode global context, as opposed to an entirely new model. If GLIDER is more of a novel component than an entire model, then I think the authors should include ablation studies on the local router choice, in particular using Arrow." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) \nThe core idea -- that incorporating global information of the specialization of finetuned expert models into local routing schemes can improve expert aggregation algorithms -- is intuitive and persuasive.\n\n\n2) \nThe use of an LLM to encode global semantic information of the overall expert specialization is a creative method for effectively integrating the required global context" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose GLIDER, which is a token-level and module-level hierarchical routing strategy for combining pools of parameter-efficient finetuned expert models, incorporating both local and global information over the specialization of the expert models. The local component learns a gating between LoRA modules which selects the best layer module from the pool of experts for each token. The global component uses an LLM to encode the overall task expertise of each expert, which is then incorporated into the routing scheme to enhance the routing such that it is sensitive both to local module-wise expertise and overall global expertise of the aggregated models. This scheme hopes to maintain strong generalizability to unseen tasks without sacrificing performance on held-in tasks."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) \nMy first concern relates to the overall problem setting of the paper and its core motivation of improving performance on held-in tasks. The authors claim that existing MoErging methods often prioritize generalization to unseen tasks at the expense of performance on held-in tasks, and indeed in Table 1 the authors report as one of their main results that GLIDER significantly outperforms baselines on held-in tasks. However, performance on held-in tasks is deemed unimportant precisely because we already have access to the specialized expert trained on that exact held-in task, and so we can always retain performance on any given held-in task by simply using the expert specialized to that task. Indeed, this is the reason why Phatgoose does not report results for held-in tasks. For this reason, the 'Oracle Expert' recorded in Table 1 for held-in tasks is not an Oracle but an attainable result to which we always have access - it is just the expert specialized to that given task. \n\nSo in this sense, given that for held-in tasks GLIDER still underperforms the expert specialized to that given task, I'm not yet convinced of one of the proposed main benefits of GLIDER, since for any given held-in task we could easily get better performance by just selecting the corresponding expert specialized to that task. \n\nFundamentally, I'm not yet persuaded that the performance gains on held-in tasks justify the claim that GLIDER is an overall superior model, since by the problem setting these are not tasks we need to optimize for.
If the authors can provide justification for why performance on held-in tasks is indeed important and why selecting GLIDER over the corresponding specialized expert for a given held-in task would be preferable, then I would be happy to change my score, but as it stands I'm concerned that a large proportion of GLIDER's performance gains are on tasks that we need not optimize for.\n\n2) \nA second concern is that GLIDER's architecture, by the authors' own acknowledgement, is basically identical to Phatgoose for the local component of the hierarchical router. This being the case, the contribution of the paper is more so appending a global context component to Phatgoose, rather than an entirely new model. It would therefore be informative to consider alternative backbones for the local component of the router, for example Arrow. This could help isolate the contribution of the global routing component and help to demonstrate the robustness of potential improvements brought about by the proposed inclusion of global information.\n\n3) \nSome grammar / spelling related issues:\n\nLines 53-54: 'However, MoE methods that train experts from scratch while MoErging utilizes a decentralized...' -> delete 'that'\n\nLine 72: 'retrieve the correct expert for all token at every module' -> tokens should be plural\n\nLine 264: 'our goal is to build a MoErging method that dynamically routing queries' -> should be 'routes queries'\n\nLine 286: 'this work specifically focuses in LoRA' -> focuses 'on' LoRA\n\nLines 288-289 'Given the $t^{th}$ input token activation $u_i$ -> should be $u_t$ I'm assuming?\n\nLine 332: 'added before the model to independently process the fully query' -> process the 'full' query\n\nLine 348: 'the output of the module for token activation $u_i$ is computed as $Wu_i + \\sum_{k \\in \\xi_{top}} w_k * B_kA_ku_i$ -> It looks like you've forgotten to actually define $w_k$, I'm assuming it's the softmaxed affinity score, but you've left it undefined." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024glider,\ntitle={Glider: Global and Local Instruction-Driven Expert Router},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0gVatTOgEv},\nnote={under review}\n}" }, "abstract": { "value": "The availability of performant pre-trained models has led to a proliferation of fine-tuned expert models that are specialized to a particular domain or task. This has enabled the creation of powerful and adaptive routing-based “Model MoErging\" methods with the goal of using expert modules to create an aggregate system with improved performance or generalization. However, existing MoErging methods often prioritize generalization to unseen tasks at the expense of performance on held-in tasks. This limitation adversely impacts practical applicability, as real-world deployments require robust performance across both known and novel tasks. We observe that current token-level routing mechanisms neglect the global semantic context of the input task. This token-wise independence hinders effective expert selection, particularly for held-in tasks, as routing decisions fail to incorporate the holistic semantic properties of the task. To address this, we propose a novel method, Global and Local Instruction Driven Expert Router (GLIDER) that integrates a multi-scale routing mechanism, encompassing a semantic global router and a learned local router. As recent LLMs demonstrate advanced reasoning capabilities for semantic-related contexts, the global router leverages this ability to enhance expert selection. By utilizing the input query and an LLM, the router generates semantic task instructions that guide the retrieval of the most relevant experts across all layers. 
This global guidance is complemented by a local router that facilitates token-level routing decisions within each module, enabling finer control and enhanced performance on unseen and challenging tasks. Our experiments using T5-based expert models for T0 and FLAN tasks demonstrate that GLIDER achieves substantially improved held-in performance while maintaining strong generalization on held-out tasks. Additionally, we perform ablation experiments to dive deeper into the components of GLIDER and plot routing distributions to show that GLIDER can effectively retrieve the correct expert for held-in tasks while also demonstrating compositional capabilities for held-out tasks. Our experiments highlight the importance of our multi-scale routing that leverages LLM-driven semantic reasoning for MoErging methods. Our code is attached as supplementary material." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Parameter Efficient Fine-Tuning", "LoRA", "Cross-Task Generalization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f4f192ceabeb35c360c8f4ee9054561613663198.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/cd603e8628610244550f0422dfc0bc414ac3019e.zip" }, "title": { "value": "Glider: Global and Local Instruction-Driven Expert Router" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0gqCIaBRQ9
Regularized DeepIV with Model Selection
main
Active
Nonparametric estimator;instrumental variables;model selection;causal inference.
learning theory
5;5;5;6
4;4;3;3
2;3;3;3
2;3;3;3
2;3;2;4
5.25
3.5
2.75
2.75
2.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* is it really fair to say that your algorithm is more computationally tractable when it is based on MLE?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* With the (unfortunate) exception of the introduction, I found the paper mostly well-written and clear.\n\n* The paper studies an interesting problem, proposes a natural solution, and proceeds to analyze said solution. While I am not familiar with the immediately preceding related work (in IV), this seems clean to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies a two-stage procedure for regression in the scenario where the errors are not conditionally independent. They first learn a conditional density to make use of instrumental variables and consequently solve a square-loss ERM problem weighted by the learned conditional density. They show that this procedure attains mostly standard nonparametric rates." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The organization of the paper is hard to follow and the introduction is way too terse.
As someone well-versed in nonparametric statistics but not necessarily IV methods, I had to skip ahead to section 4 to really understand what was going on. Stating that you are trying to solve some fixed point equation in the introduction is not conducive to most people's understanding of the problem you are solving. \n\n* My overall feeling is that the result is somewhat incremental. To my understanding, the main difficulty lies in making standard guarantees for MLE in Hellinger^2 compatible with the square loss. I could not entirely follow why this is so challenging and would encourage the authors to further explain why this is the case (for instance, in the very last paragraph of section 1, you mention this difficulty but do not really expand on it, nor do you reference the lemmata which might be useful for understanding this difficulty)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "please see above" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "RDIV offers several key advantages over existing methods.
It addresses three significant limitations of prior literature: it eliminates the need for unique IV regression identification, avoids reliance on the often unstable minimax computation oracle, and supports model selection procedures." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of nonparametric instrumental variable (IV) regression, a framework with wide applications across fields such as causal inference, handling missing data, and reinforcement learning. The objective in IV regression is to solve the conditional moment equation, $E[Y - h(X) \\mid Z] = 0$, where $Z$ serves as the instrument. The authors introduce RDIV, a regularized variant of the DeepIV method, marking the first work to provide rigorous theoretical guarantees for DeepIV. RDIV enhances generalization by incorporating Tikhonov regularization. Methodologically, RDIV follows a two-stage approach. The first stage involves learning the conditional distribution of covariates, while the second stage refines the estimator by minimizing a Tikhonov-regularized loss function." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It is unclear how the method compares, for example, to recently developed methods (see arxiv:2405.19463; to appear at NeurIPS 2024) that completely avoid minimax formulations, as well as avoiding the need for two-stage procedures." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "How is Tikhonov regularization related to a function space parametrized by the neural network? It seems not straightforward to relate it to weight decay.\n\nIs there a computational gain when minimax optimization is no longer needed?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written, and the results are motivated well. I didn't go through the proofs, but the explanations after each result are insightful, and ease the reading. The theoretical contribution is solid." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the nonparametric instrumental variable regression with Tikhonov regularization (RDIV), and proves that RDIV allows model selection procedures and matches the SOTA convergence rate. I agree with the author's claim that this work is the first attempt to provide rigorous guarantees for DeepIV. With Tikhonov regularization, the model selection procedure achieves the oracle rate and iterative RDIV matches the SOTA rate." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The numerical experiments are only based on simulated data. It would be better to have some results from real data to demonstrate the strength of the proposal." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* To convert the MLE guarantee into an $L_2$ guarantee, the authors assumed a minimum density on the conditional density. What are the benefits/drawbacks compared with the conditional mean embedding based methods (although they also require some assumptions, such as HS operators)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The authors discussed many aspects of the non-parametric setting clearly." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript is basically a technical paper, discussing two-stage non-parametric IV and model selection in the second stage when equipped with an additional $L_2$ regularization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I feel the problem studied in this paper has limited novelty. The transformation between one-stage and two-stage algorithms and analysis is in general only a technical problem and has been discussed in different places like DualIV, and the $L_2$ regularization itself makes the model selection easier (given the strong convexity)."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024regularized,\ntitle={Regularized Deep{IV} with Model Selection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0gqCIaBRQ9},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we study nonparametric estimation of instrumental variable (IV) regressions. While recent advancements in machine learning have introduced flexible methods for IV estimation, they often encounter one or more of the following limitations: (1) restricting the IV regression to be uniquely identified; (2) requiring minimax computation oracle, which is highly unstable in practice; (3) absence of model selection procedure. In this paper, we analyze a Tikhonov-regularized variant of the seminal DeepIV method, called Regularized DeepIV (RDIV) regression, that can converge to the least-norm IV solution, and overcome all three limitations. RDIV consists of two stages: first, we learn the conditional distribution of covariates, and by utilizing the learned distribution, we learn the estimator by minimizing a Tikhonov-regularized loss function. We further show that RDIV allows model selection procedures that can achieve the oracle rates in the misspecified regime. When extended to an iterative estimator, we prove that RDIV matches the current state-of-the-art convergence rate. Furthermore, we conducted numerical experiments to justify the efficiency of RDIV empirically. Our results provide the first rigorous guarantees for the empirically well-established DeepIV method, showcasing the importance of regularization which was absent from the original work." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Nonparametric estimator", "instrumental variables", "model selection", "causal inference." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/36421fa6453920abc8f29f0ef6d69cd1560c852a.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Regularized DeepIV with Model Selection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0h6v4SpLCY
Universal generalization guarantees for Wasserstein distributionally robust models
main
Active
generalization guarantees;optimal transport;distributionally robust optimization;nonsmooth analysis
optimization
6;6;8
3;3;4
3;3;3
3;3;3
3;3;4
6.666667
3.333333
3
3
3.333333
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "-" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I am happy to increase my score and support this paper with a high confidence if the authors can provide an extensive discussion during the rebuttal on the assumptions in Azizian et al. (2023a) . In particular, my two major questions are: can the authors be more precise in which cases their assumptions are weaker than the ones in Azizian et al. (2023a). In particular, can you give an example for a class of distributions that are covered by this paper but not by Azizian et al. (2023a)? Moreover, can the authors explain why the proof in Azizian et al. (2023a) breaks for your assumptions and why it is not trivial to extend the proof?\n\n\n\nSmaller question:\nIsn't assumption 3.1 (1) always true satisfied by w<=1. Is it possible that this is a typo?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper addresses an important problem is generalization bounds/theoretic ML. In particular:\n\n- The paper is well written and the results are nicely presented. \n- The proof sketch in Section 4 is excellent. It is very easy to follow and often neglected in these types of papers\n- The proof idea is smart, non-trivial and interesting." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents novel bounds on for the DRO loss using the Wasserstein distance. In particular, they address the question of finding the minimal $\\rho$ used by the empirical robust loss such that the loss is an upper bound for the actual population loss. The main challenge is to overcome the dependency on the distance between W(P_n, P) \\sim n^{1/d}. While this problem has been studied in the literature, and dimension free bounds exist, this paper presents a proof requiring weaker assumptions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Given that this is a more traditional field, I would expect a clearer comparison with the existing works. While the authors do a very good job in presenting the proof idea, it is not so clear how the proof fundamentally differs from existing works." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What are the practical implications of the generalization guarantees compared to Azizian et al. (2023a)? Can you provide some numerical results analogous to Appendix H of Azizian et al. (2023a)?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The generalization guarantees of this work do not rely on restrictive assumptions like smoothness compared to the previous work (Azizian et al. 2023a). \n- This paper is well-structured, and the theoretical results and proof sketches are clearly presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides exact generalization guarantees for Wasserstein Distributionally Robust Optimization (WDRO) for a wide variety of models with compactness and finite Dudley's entropy assumptions. The results apply to radius $\\rho$ scaling as $O(1/\\sqrt{n})$, which does not suffer from the curse of dimensionality. The results also cover the double regularization case." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In Section 3.2, the authors discussed how their results on generalization guarantees apply to linear regression and logistic regression. However, more complicated models such as neural networks with ReLU or other smooth activation functions (e.g. GELU) are not discussed. \n- The theoretical results require a lower bound on $n$, while Theorem 3.4 of Azizian et al. (2023a) applies to all $n \\ge 1$. The implications of this requirement should be discussed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Why not submit to JMLR? The paper is very rigorous and rather long and technical for an ML conference? You also examine the problem in good detail.\n\n- Could you provide a simple example in Theorem 3.2, where the optimal coupling is known under (say) Gaussianity assumptions?\n\n- I'm a bit confused. What does $\\operatorname{argmax}_{\\Xi}\\,f$ mean in (5) a sup norm or something? \n\n- Why is $\\min\\{ c(\\xi,\\zeta): ... \\}$ measurable? In particular, (independent of the meaning of the argmax, above question), why is there a measurable selection $\\xi\\mapsto \\zeta$? Without this, its not clear that $\\rho_{\\operatorname{crit}}$ is well defined. I'm guessing this is Berge's theorem (which is in Aliprantis & Border) somehow, but please spell it out for us :)\n\n- Each result assumes that the (difficult to interpret) $\\rho_{\\operatorname{crit}}$ is \"large enough\". Can you please provide a general set of conditions ensuring that $\\rho_{\\operatorname{crit}}$ can be bounded away from $0$. \n\n- Is it fair to compare, verbally, our results to those of Fournier et al. (and similar bounds, say, found in [1])? Since you are considering a small ball around the empirical measure while their results guarantee a minimal radius such that the empirical measure contains the true measure whp. Furthermore, those rates are only tight (afaik) when the measure is very spread out; more precisely, it is Alhors $d$-regular, see e.g. [3] for a nice clean proof. \n\n- In theorem 3.2, why is $\\pi^{P,Q}\\ll \\pi_0$? To be this isn't directly evident... I.e.\\ why is the RHS not trivially $-\\infty$ in general?\n\n\n\n[1] Graf, Siegfried, and Harald Luschgy. Foundations of quantization for probability distributions. 
Springer Science & Business Media, 2000.\n[2] Otto, Felix, and Cédric Villani. \"Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality.\" Journal of Functional Analysis 173.2 (2000): 361-400.\n[3] Kloeckner, Benoit. \"Approximation by finitely supported measures.\" ESAIM: Control, Optimisation and Calculus of Variations 18.2 (2012): 343-359." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written, interesting, and theoretical, and provides very nice lower bounds on the robust empirical risk. The results are nice, and so is the use of set-valued analysis to derive them. Several relevant examples are considered, making a large portion of how these results can be used nearly transparent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The proposed paper provides lower bounds on the robust empirical risk under unorthodox but interesting scaling limits on the radius of the Wasserstein ball around the empirical measure. The paper uses some cool techniques which are not often seen in machine learning. \n\nThe paper is relatively clearly written. However, I think there are a few little things here and there which are either difficult to justify (in the current form) or perhaps not well-defined (see below). Also, the introduction is excessively general while the setting rapidly collapses to a much more specific setting shortly after." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Nevertheless, I think some of the assumptions are a bit opaque (see below), and I'm not certain some quantities are well-defined.\n\n*Minor*\n- Citing Cuturi and Peyré's book is odd when mentioning the Wasserstein distance.
Perhaps the original source, or a book on optimal transport such as Villani's book, would be more natural, IMO.\n\n- The definition of Wasserstein distance circa (1) is incorrect, $\\Xi$ must be Polish, and c *must* be a power $p\\in [1,\\infty)$ of a *metric* topologizing $\\Xi$; what you write is just some transport problem. E.g. if c is not symmetric, then $W_c$ is not a metric in general, or if $c(x,y)=0$ for all $x,y$ then $W_c$ cannot separate points.\n\n- Perhaps \"suitable\" distributions is more appropriate before (1), since the distance explodes if these have no finite moment. \n\n- Line 65: bad grammar: \"it does not introduce approximate term\" also imprecise.\n\n- Line 66: Are Wainwright's book and Boucheron the best references? Perhaps older papers, e.g. on VC dimension, Bartlett's old papers on Rademacher complexity, or old papers on chaining are more natural references?\n\n- Line 69: \"This theoretical feature is specific to WDRO and highlights its potential to give more resilient models.\" This can be **much** less hand-wavy. Please explain more precisely/mathematically.\n\n- Line 149: In a metric space $(X,..)$ not \"In (X,..) a metric space\".\n\n- Assumption 1 vs. Line 145: You say that $\\Xi$ is just a measurable space, then later you say it's a compact metric space. Why not be forthright and say it's a metric space on line 145? Similarly, why is $\\mathcal{F}$ an arbitrary family of functions, when straightaway after it is actually a compact set of continuous functions?\n\n- Line 176: Why jointly Lipschitz? If $\\Theta$ is compact, then since you already assumed $\\Xi$ is compact, it is enough for $\\Theta\\times\\Xi\\ni (\\theta,\\xi)\\mapsto f(\\theta,\\xi)\\in \\mathbb{R}$ to be continuous; to deduce the compactness of $\\{f(\\theta,\\cdot):\\,\\theta \\in \\Theta\\}$ by the currying Lemma.
\n\n- Line 176: Not sure why you say \"if $\\Xi$ is compact\", since this was assumed a few lines earlier on the same page.\n\n- Maybe more natural examples come from Arzela-Ascoli...\n\n- Should the definition of the Dudley entropy integral really be in a footnote, while more basic ideas are in the main text?\n\n- Line 221: The words \"the metric\" are missing.\n\n- Line 223: There are many more references for the use of this type of metric, especially in exponential convergence rate results for Markov chains (wrt $W_1$ over countable metric spaces with this distance).\n\n- Line 245: \"sample randomness\" (I know what you mean...but the word independent is misleading as this \n\n- Assumption 1: Why call (2) jointly continuous? It is just standard continuity (actually uniform continuity by compactness)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Exact generalization guarantees for Wasserstein distributionally robust models with dimension-free sample rates." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024universal,\ntitle={Universal generalization guarantees for Wasserstein distributionally robust models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0h6v4SpLCY},\nnote={under review}\n}" }, "abstract": { "value": "Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have proved that robust models based on the Wasserstein distance have generalization guarantees that do not suffer from the curse of dimensionality. However, these results are either approximate, obtained in specific cases, or based on assumptions difficult to verify in practice.
In contrast, we establish exact generalization guarantees that cover a wide range of cases, with arbitrary transport costs and parametric loss functions, including deep learning objectives with nonsmooth activations. We complete our analysis with an excess bound on the robust objective and an extension to Wasserstein robust models with entropic regularizations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generalization guarantees", "optimal transport", "distributionally robust optimization", "nonsmooth analysis" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/54769a76e1c4dcf908ecc8ba3447d3eadf3d62fc.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Universal generalization guarantees for Wasserstein distributionally robust models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0hc7iQLhCt
HessianGrad: Optimizing AI Systems with Hessian-Aware Textual Gradients
main
Active
LLM;Prompt Optimization;Gradient Descent
foundation or frontier models, including LLMs
3;3;5
5;4;4
3;2;3
3;2;3
3;3;3
3.666667
4.333333
2.666667
2.666667
3
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you provide justifications for the difference between your optimizer prompt and OPRO’s meta-prompt [1] for prompt optimization? \n\n[1] Yang, C., Wang, X., Lu, Y., Liu, H., Le, Q. V., Zhou, D., & Chen, X. (2023). Large Language Models as Optimizers (No. arXiv:2309.03409). arXiv. http://arxiv.org/abs/2309.03409\n\n[2] Pryzant, R., Iter, D., Li, J., Lee, Y. T., Zhu, C., & Zeng, M. (2023). Automatic Prompt Optimization with “Gradient Descent” and Beam Search (No. arXiv:2305.03495). arXiv. http://arxiv.org/abs/2305.03495" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The core idea addresses a critical limitation of iterative/reflective methods that only focus on immediate feedback. \n\n2. This work covers a wide array of recent literature, and the presentation is clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Existing automatic optimization methods only focus on immediate feedback, which can be easily trapped by the local optima. HessianGrad is analogous to second-order derivative methods by taking into account the feedback over multiple iterations. 
Experimental results show consistent gains of HessianGrad in prompt optimization, solution refinement, and code optimization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Besides the analogy provided in Eq. 2 and Eq. 3, the core implementation of HessianGrad is to include a meta-prompt that encourages the LLM to reflect over multiple turns as shown in Pg. 12. However, this is only loosely analogous to the real Hessian matrix that the work is trying to deliver. Moreover, whether the LLM is able to capture the second-order phenomenon is also questionable and lacks justification in this work. \n\n2. The actual technical contribution of this work is to provide a more refined meta-prompt than the original TextGrad’s meta-prompt. First, the contribution of the refined meta-prompt appears to be limited. Second, the OPRO work [1] has also included similar meta-prompts to reflect over multiple turns by feeding the iterative optimization trajectory into the context of the LLM. Therefore, the novelty of this work is limited and appears more as an incremental improvement on TextGrad.\n\n3. The selected baselines in the main experiments are also questionable. Most baselines are TextGrad and M-TextGrad. However, for each task, competitive baselines, e.g. ProTeGi [2] in prompt optimization, are not compared." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In general computational frameworks, HessianGrad is computed by adding a small quantity in the iteration direction and recalculating the gradient once, followed by taking the finite difference of the gradients. From a practical standpoint, can the direct finite difference version of similarity $\\mathcal{S}(r(p_t), r(p_{t-1})) = \\frac{|| \\mathcal{L}(r(p_t)) - \\mathcal{L}(r(p_{t-1}))||}{||p_t - p_{t-1}||}$ save computational costs (since $\\mathcal{L}(r(p_{t-1}))$ and $p_{t-1}$ are both values computed from the previous iteration) and achieve similar effects?\n2. There is a typo in Equation (3) in Section 3. The second-order partial derivative should be denoted as $\\frac{\\overset{\\sim}{\\partial}^2\\mathcal{L}}{\\overset{\\sim}{\\partial}p_t^2}$ rather than $\\frac{\\overset{\\sim}{\\partial}^2\\mathcal{L}}{\\overset{\\sim}{\\partial}^2p_t}$." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper is well written with clear presentation of the new algorithm, and the experiments are rigorous with repeatability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new method for optimizing system prompts through gradient descent. The authors outline the limitations of the classical method \"TextGrad\", and present solutions to mitigate these issues. The second-order derivative (i.e., HessianGrad) is introduced into \"TextGrad\", thereby reducing the likelihood of the system prompt getting trapped in local minima. 
The authors conduct empirical experiments on three tasks: prompt optimization, solution optimization, and code optimization. The conclusion is that the proposed new method outperforms the naive TextGrad method and the Momentum-Enhanced TextGrad method across four mainstream models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, this paper may have an algorithmic contribution, but supplemental details are required on both theoretical and experimental aspects.\n\n1. Theoretical aspect:\n- When defining the similarity $\\mathcal{S}$ of the prompt before and after each iteration, you do not multiply it by any hyperparameters and directly add it to the original TextGrad to obtain the HessianGrad. Consider adding two hyperparameters, $\\beta_1$ and $\\beta_2$, to the two terms of HessianGrad, similar to AdamW. Could you conduct an ablation study on the effect of adding these hyperparameters, or provide justification for why they were not included in the current formulation?\n- You use the $L_1$ norm to define $\\mathcal{S}$; however, when measuring semantic similarity, cosine similarity is more commonly used as it signifies that the loss $\\mathcal{L}$ before and after each iteration is closer in direction, while $L_1$ primarily signifies that $\\mathcal{L}$ is closer in numerical value. It is recommended to provide a rationale for choosing the $L_1$ norm as a similarity metric. (Simple reasons are acceptable, such as \"$L_1$ norm is easier to compute\"). Could you compare the performance of your method using L1 norm versus cosine similarity, or provide empirical justification for why L1 norm was chosen over other common similarity metrics?\n\n2.
Experimental aspect:\n- Please provide the loss curves for HessianGrad, M-TextGrad, TextGrad, and CoT over their iterative processes for a representative example from each of the three tasks (prompt optimization, solution optimization, and code optimization) to demonstrate the effect of \"HessianGrad Escaping Local Optima\" as shown in Figure 1.\n- Calculating HessianGrad typically requires more computational resources. Could you provide a detailed comparison of computational resources (GPU memory, runtime) for HessianGrad versus the baseline methods across all three tasks?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please respond to the concerns in the \"Weaknesses\" part.\n\nQ1. The motivation (line 223~226) of introducing the similarity function does not match well with the second-order optimization theory. How can the similarity of the responses provide second-order information?\n\nQ2. The concrete definition of the similarity function on line 244 is meant to connect with the formulation of second-order optimization. However, this definition is contrary to the motivation in line 226 (\"more gradual and thoughtful evolution of the response over multiple iterations\"), since larger similarity means larger fluctuation between successive steps, according to this definition.
Moreover, this definition is actually focusing on changes in feedback ($L(r(p_t))$) instead of responses ($r(p_t)$), and this is a point that contradicts the motivation of this paper. Please clarify how the definition aligns with the stated motivation." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper writing is clear and easy to understand.\n2. The proposed optimization method achieves considerable improvements on a variety of tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new LLM-based textual optimization method that takes the evolution of LLM system responses across iterations into account. Improvements are achieved on multiple textual optimization tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the authors have written about the difference between momentum-based methods and HessianGrad, the novelty of transferring the focus from feedback similarity to response similarity is somewhat weak. The authors should include more convincing ablation experiments to verify that tracking the dynamics of responses is more effective.\n\n2. The second-order Hessian formulation cannot provide sufficient theoretical support for the optimization framework. The relationship between tracking feedback and tracking responses is not comparable to that between first-order and second-order optimization."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024hessiangrad,\ntitle={HessianGrad: Optimizing {AI} Systems with Hessian-Aware Textual Gradients},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0hc7iQLhCt},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in large language models (LLMs) have significantly enhanced the ability of LLM-based systems to perform complex tasks through natural language processing and tool interaction. However, optimizing these LLM-based systems for specific tasks remains challenging, often requiring manual interventions like prompt engineering and hyperparameter tuning. Existing automatic optimization methods, such as textual feedback-based techniques (e.g., TextGrad), tend to focus on immediate feedback, analogous to using first-order derivatives in traditional numerical gradient descent. However, relying solely on first-order derivatives can be limited when the gradient is either very small or fluctuates irregularly, which may slow down or stall optimization. To address these limitations, better adaptation in regions with small or fluctuating gradients is necessary. Second-order gradient methods, which incorporate the Hessian matrix, offer a promising solution by enabling more precise adjustments. Inspired by this, in this paper, we introduce HessianGrad, a novel optimization method that leverages textual feedback and tracks the iterative evolution of LLM systems responses across iterations, leading to more dynamic and adaptive optimization. We evaluate the effectiveness of HessianGrad on three tasks: prompt optimization, solution optimization, and code optimization. 
Experimental results demonstrate that HessianGrad consistently improves performance across all three tasks, achieving a **7.8%** improvement in prompt optimization, a **20.72%** gain in solution refinement, and a **29.17%** increase in code optimization compared to baselines, highlighting its adaptability and effectiveness in optimizing LLM-based systems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM", "Prompt Optimization", "Gradient Descent" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f05fc8da8a3b631d67a8b72c55683865f055e7e6.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "HessianGrad: Optimizing AI Systems with Hessian-Aware Textual Gradients" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0hyShAPeBj
IT$^3$: Idempotent Test-Time Training
main
Active
idempotence;generalization
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;3;5
5;3;3;4
2;2;2;2
2;2;2;3
3;2;4;3
3.5
3.75
2
2.25
3
0.174078
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Does the test time objective result in networks that are idempotent on OOD samples? Presumably once the function drifts from the fixed anchor function that test time loss no longer reflects idempotence? It would be good to see measures of idempotence on training and test samples.\n\n- How does the intuition about idempotence being a generalization of orthogonal projection hold up? The proposed method considers idempotence only in the y-variable, not the entire function." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Idempotent training is an interesting and novel approach for addressing the out-of-distribution generalization problem.\n\n- The paper shows successful application of the considered approach on a large number of experimental settings, including in online-learning settings." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "- This paper proposes a test-time-training based approach to address the distribution shift or OOD generalization problem by learning models that are idempotent.\n\n- In particular, this paper proposes a method where models $f_{\\theta}: \\mathcal{X} \\times \\mathcal{Y} \\to \\mathcal{Y}$ are trained by minimizing both the difference between $f_{\\theta}(x, y)$ and $y$ as well as the difference between $f_{\\theta}(x, 0)$ and $y$, resulting in a model that is idempotent on the training set. Then at test time, models are optimized on the test data before running inference to make them idempotent on test inputs, by minimizing the difference between $f_{\\theta}(x, 0)$ and $f_{\\theta}(x, f_{\\theta}(x, 0))$ for an OOD input $x$.\n\n- The paper shows empirically for several different settings that idempotent test-time-training improves classification accuracy on out-of-distribution samples." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Test-time training requires undesirable extra compute for test-time optimization of the whole model. How much additional compute is needed compared to running inference on the base model? How does this method scale with model size?\n\n- The paper lacks analysis that explains how or why idempotent training is expected to improve out-of-distribution analysis. Further investigation and ablations that provide intuition for how this method works would be valuable.\n\n- The proposed method has not been evaluated on standard real-world out-of-distribution generalization benchmarks such as DomainBed [Gulrajani and Lopez-Paz 2020] and WILDS [Koh et al 2020]. The presented experiments are on smaller models/datasets.\n\nReferences:\n\nGulrajani, Ishaan, and David Lopez-Paz. \"In search of lost domain generalization.\" arXiv preprint arXiv:2007.01434 (2020).\n\nKoh, Pang Wei, et al. 
\"Wilds: A benchmark of in-the-wild distribution shifts.\" International conference on machine learning. PMLR, 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Do you use a special encoding or placeholder value instead of 0 for regression tasks?\n2. If the purpose of the neural \"don't know\" zero is purely for contrast and its specific value is not important, will it be more computationally efficient to use a representative value such as the median of $y_i$ where $i \\in \\text{training set}$?\n3. In EMA (Morales-Brotons et al., 2024), when $\\alpha = 1$, the online $IT^3$ aligns with the offline version, and when $\\alpha = 0$, it encounters the collapse issue described in Section 3.2. Could you provide guidance on selecting the value of $\\alpha$ or share experimental results demonstrating performance across different $\\alpha$ values?\n4. The right panel of Figure 2 and Figure 16 both appear to represent the car data. Could there be a potential mistake or duplication here?\n\nMorales-Brotons, D., Vogels, T., & Hendrikx, H. (2024). Exponential moving average of weights in deep learning: Dynamics and benefits. Transactions on Machine Learning Research." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors extend the concept of TTT (Sun et al., 2020) by incorporating idempotence, offering a simple yet elegant solution. The approach is intuitive and Figure 1 is particularly helpful for quickly grasping the core idea, even for those who are not experts in distribution shift." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces $IT^3$, a generic method for addressing distribution shift challenges. By enforcing idempotence, $IT^3$ sequentially adapts to potential distribution shifts during inference. The authors show the performance of $IT^3$ across diverse scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. (cf Figure 1) During training, in addition to feeding the standard $(x, y)$ pair to train the model, the authors also input the $(x, 0)$ pair to ensure the model satisfies the property of idempotence, referring to the zero as a neutral “don't know” signal. While this approach may work in classification tasks where zero could represent a new “don’t know” class, in regression tasks, it is unclear how the model differentiates this zero from an actual label of 0.\n2. (Line 211) Online $IT^3$ appears to rely significantly on the use of Exponential Moving Average (EMA). However, the authors did not provide a citation for this technique. \n3. In the first experiment (Section 4.1 on tabular data), the method of randomly substituting features with zeros in the test set may resemble a missing data scenario rather than a true distribution shift. In other experiments, the authors simulate distribution shift by splitting the training and testing data based on a specific covariate to create distinct distributions. 
It is unclear why the same method was not applied for the first experiment. If the authors prefer using the zeroing method, they should include figures or statistical tests to substantiate that the training and testing data are distributionally different, rather than relying solely on intuition.\n4. There are unnecessary abbreviations throughout the manuscript. For instance, “such that” is shortened to “s.t.” in line 490. The proposed method, $IT^3$, is inconsistently referred to as ITTT in parts of the manuscript, such as in Figure 13.\n5. Figures 14 to 16 are not mentioned or referred to in the main text. This omission is unusual and may confuse readers as to the purpose or relevance of these figures.\n6. Figures 12 and 15 have the exact same title.\n\nSome minor improvements and spelling corrections for clarity:\n1. (Line 22) $x$ (not $\\mathbf{x}$) is not mentioned before.\n2. (Line 77) $y_2$ is not mentioned before.\n3. (Line 85) \"th input\" should be corrected to \"the input\".\n4. (Line 131) Missing right parenthesis: $f(f(z))$.\n5. (Line 490) “s.t. that” should be replaced with “such that”.\n6. (Line 495) Remove the extra colon (\":\").\n7. In Table 1, the title references “qualitative” results, but the data presented are numerical and should be described as “quantitative” results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As mentioned before, I find this work quite interesting but the lack of proper baselines pushes me towards a weak reject opinion. As for specific questions and ways that the authors can improve their work:\n* More baselines are needed on all tasks; even if the method does not translate exactly to the setup considered, the authors could perform minor adaptations so that it does. For example, why not consider additional self-supervised tasks as discussed in Sun et al. (2020)? In the case where simple rotation prediction might not apply, something simple like denoising an artificially corrupted image could still work as a self-supervised task. Another example would be on the online setup; there, methods from the TTA literature could be applied, such as the work of [1] which works even on the instance level and without batch normalisation.\n* Apart from the CIFAR-C case, most other distribution shifts are generated by just partitioning the datasets according to some rule and then training on a subset while considering the other as OOD. This is a bit of a constrained setting and I would encourage the authors to consider more diverse shifts, as that would better highlight the usefulness of IT$^3$. For example, why not add noise to the road segmentation task, in the same way that it was done for CIFAR 10? This could be a plausible real-world setting where there is a fault in the sensor, thus the images come out corrupted. \n* How is the label encoded in the input in the various settings considered? This is important information for reproducibility. Furthermore, is the loss at Eq. 2 used for all settings (even when it might not make much sense, such as classification)?
\n* In the online setting, the authors consider a smooth transition between the distribution shifts, which might not be practically realistic. How does the method behave when the transition between distribution shifts is not smooth? \n* How many update steps on each datapoint do the authors do? Does the test time optimization reach a fully idempotent model and does “more idempotence” provide better performance? \n\n\n[1] Towards Stable Test-Time Adaptation in Dynamic Wild World, Niu et al., ICLR 2023" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper is mostly well written; the ideas are explained clearly and reasonable intuitions are being given. Same goes for the experimental evaluation and discussion of the settings. \n* The idea is simple and I find it quite interesting. While the main architecture of the method relies heavily on the prior work Durasov et al., the application on the TTT setting is novel, since, as far as I know, idempotence hasn’t been explored in such a scenario. \n* The tasks considered in the experiments are quite diverse, which is nice to see. They range from simple tabular data, to standard image classification, to regression and dense prediction with various architectures. \n* The authors demonstrate improvements with IT$^3$ on all settings considered upon (admittedly very simple) baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents IT$^3$ a method for test time training that relies on the concept of idempotence. During training, the authors train the neural network to be able to predict the ground truth label by either conditioning on it (by concatenating it to the input) or by assuming a placeholder value for it (by concatenating a $\\mathbf{0}$ value to the input). 
At test time, the authors then fine-tune the model on unlabelled data by matching the predictions of the model when using $\\mathbf{0}$ as the label with that of the model when conditioning on its own output at the $\\mathbf{0}$ label. This essentially leads to a soft idempotence constraint which, according to the authors, allows the model to move out-of-distribution data closer to in-distribution ones and thus improve performance when distribution shifts happen at test time. The authors then evaluate IT$^3$ on a variety of tasks and architectures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The main weakness I see in this work is the (almost complete) lack of proper baselines. For example, only on the CIFAR task is there another TTT baseline, while on all of the others the authors just compare against not doing anything. This makes it hard to gauge the significance of the method against other prior art.\n* The work makes claims about how idempotence can be seen as a generalisation of a projection and that it allows mapping the OOD data to the training distribution. Thus, while the authors do spend some time explaining why their method would intuitively work, they do not have any ablation studies to verify that these exact intuitions hold in practice. \n* IT$^3$ as a method requires two sequential forward passes through the model, so in practice it can be slow, and the authors do not discuss the efficiency of IT$^3$ relative to other works." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses section (especially point 2). I am happy to raise my score if my concerns are resolved." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main strengths of this paper:\n\n1- Simplicity of the approach: the method is easy to understand and to implement. \n\n2- The paper is generally well-written and easy to follow.\n\n3- The breadth of the experiments: The authors are commended for the variety of different tasks that they tested their proposed method on." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new method for test-time training: at training time the model is trained to be idempotent, and at test time two copies of the model are leveraged (one frozen and one adapted) to encourage the model to be idempotent under distribution shifts. Experiments are carried out on different tasks to demonstrate how versatile the proposed method is." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weaknesses of this paper are:\n\n1- The method section, while simple, is a bit confusing. Why does the paper present two variants of IT$^3$? The experiments in section 4.6 and Table 1 clearly favor the online version of the method over the one with the frozen predictor. This raises the question of why the online version is not leveraged in all the experiments. Is there an experimental setup where the offline one is much better than the online one?
Why doesn't the paper present one method and treat the other as a special case/variant that is less powerful?\n\n2- The main weakness of this paper is the lack of baselines in the experimental section. In all of the presented results, a very suboptimal version of TTT is compared against in **one experiment** only. This really raises the question of how strong IT$^3$/IT$^3$-online is when compared against strong baselines. Here are some suggestions for necessary experiments:\n\n2a. Since TTT/TTT++ [A] are directly comparable with IT$^3$, I suggest to *at least* have them in all of the experiments with a one-to-one comparison in terms of batch size and model architecture. Also, consider adding the performance of IT$^3$ under batch size=1 in the Figure 3 results.\n\n2b. Another strong baseline that is suitable for both classification and regression tasks is ActMAD [B]. A direct comparison against this baseline is also necessary in all the presented experiments.\n\n2c. Another line of work that is directly comparable is Test-Time Adaptation (TTA). TTA works under a more conservative setup where no control over the training process is assumed. It is also important to compare against the current SOTA TTA method EATA [C] or, more closely, the dataset distillation method from [D] to further demonstrate the superiority of the proposed method.\n\n2d. Since this is a 'test-time' method, a discussion on its computational requirements is necessary. How would the performance be when evaluated under computational constraints [E]?\n\n2e. Benchmarks used in this work are somewhat small scale. Experiments on larger benchmarks such as ImageNet-C [F] and ImageNet-3DCC [G] in the classification setting are necessary. Similar arguments follow for regression tasks, where for example one can follow the object detection experiments from ActMAD. \n\n3- In section 4.1, I am not sure about the distribution shift introduced in this experiment.
For instance, the performance of the non-optimized model does not consistently degrade. Why is zeroing out features a good way of modeling distribution shift, rather than adding random noise to the features? Can you please comment on this and provide justification for this choice of distribution shift?\n\n4- Missing references: [B, C, D, E, F, G].\n\n[A] TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?, NeurIPS 2021\n\n[B] ActMAD: Activation Matching to Align Distributions for Test-Time-Training, CVPR 2023\n\n[C] Efficient Test-Time Model Adaptation without Forgetting, ICML 2022\n\n[D] Leveraging Proxy of Training Data for Test-Time Adaptation, ICML 2023\n\n[E] Evaluation of Test-Time Adaptation Under Computational Time Constraints, ICML 2024\n\n[F] Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, ICLR 2019\n\n[G] 3D Common Corruptions and Data Augmentation, CVPR 2022" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Adapting to distribution shifts at test time by training the model to be idempotent." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024it,\ntitle={{IT}\\${\\textasciicircum}3\\$: Idempotent Test-Time Training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0hyShAPeBj},\nnote={under review}\n}" }, "abstract": { "value": "This paper introduces Idempotent Test-Time Training (IT$^3$),\na novel approach to addressing the challenge of distribution shift.\nWhile supervised-learning methods assume matching train and test distributions, this is rarely the case for machine learning systems deployed in the real world.\nTest-Time Training (TTT) approaches address this by adapting models during inference, but they are limited by a domain-specific auxiliary task. IT$^3$ is based on the universal property of idempotence.
An idempotent operator is one that can be applied sequentially without changing the result beyond the initial application, namely $f(f(x))=f(x)$.\nAt training, the model receives an input $X$ along with another signal that can either be the ground truth label $y$ or a neutral \"don't know\" signal $\\mathbf{0}$. At test time, the additional signal can only be $\\mathbf{0}$. When sequentially applying the model, first predicting $y_0 = f(X, \\mathbf{0})$ and then $y_1 = f(X, y_0)$, the distance between $y_0$ and $y_1$ measures certainty and indicates an out-of-distribution input $X$ if high.\n We use this distance, which can be expressed as $||f(X, f(X, \\mathbf{0})) - f(X, \\mathbf{0})||$, as our TTT loss during inference. By carefully optimizing this objective, we effectively train $f(X,\\cdot)$ to be idempotent, projecting the internal representation of the input onto the training distribution.\nWe demonstrate the versatility of our approach across various tasks,\nincluding corrupted image classification, aerodynamic predictions,\ntabular data with missing information, and large-scale aerial photo segmentation. Moreover, these tasks span different architectures such as MLPs, CNNs, and GNNs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "idempotence;generalization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1ffbba0b731bde24db6dad11af404803012f7da4.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "IT$^3$: Idempotent Test-Time Training" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0iAZYF9hrl
Disentangled representations of microscopy images
main
Active
Microscopy images;Disentangled representations;Transfer learning;Interpretability
applications to physical sciences (physics, chemistry, biology, etc.)
1;3;3;3
5;4;3;5
1;1;2;2
2;1;1;1
1;2;2;2
2.5
4.25
1.5
1.25
1.75
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Can the authors clarify the questions above? Specifically, the extent to which DINO already offers a certain degree of disentanglement and how the factors of variation of interest could be identified directly from these representations." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper evaluates the recent ideas of disentangled representation learning using weak supervision in a more realistic application.\n* The paper also presents an alternative to learning the disentangled representation from RGB images based on models pretrained at large scale.\n* The paper proposes a new sprites dataset to facilitate the interpretation of microscopy images." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a study of disentangled representation learning on three microscopy image datasets. The representation learning strategy starts by training an Ada-GVAE model using a Textures-dSprites dataset introduced in this work. The dataset is supposed to reflect simple textures that could help interpret information in microscopy images.
After training this model in a weakly supervised way, it is used to encode images of another domain, with optional unsupervised finetuning using a beta-VAE. The resulting features are low-dimensional and interpretable, and are used to train classifiers.\n\nThe ideas and the study are generally interesting, but the paper lacks technical novelty, is limited to a small-scale empirical evaluation only, and the experiments are insufficient to fully understand the value of the proposed strategy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The technical contribution is limited. Beyond the sprites dataset and the use of pretrained features, many of the ideas have been presented in previous works.\n* The experimental evaluation is limited to quantifying the impact of classifier types (GBT vs MLP) and input type (RGB vs DINO features). Many questions remain open regarding how much classification accuracy could be obtained without the proposed disentanglement procedure. Can the authors compare results of training a classifier directly with RGB images and another classifier with DINO features without any modifications? These results would help understand how difficult the tasks are and what the trade-off is between using disentanglement vs not using it.\n* It is possible that DINO features are already disentangled and all the proposed strategy is doing is assigning names to some of the factors of variation that DINO can detect. Therefore, the disentanglement is not really happening in the VAEs but is rather obtained from a model pretrained at large scale. What type of experiment can the authors design to test this hypothesis?\n* If the hypothesis above is not rejected, the value of the proposed method is limited to annotating factors of variation rather than identifying them in a weakly supervised manner and then transferring them."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Referring to the weaknesses noted above, I find the claimed contributions of this paper not sufficiently convincing. Could the authors provide a more compelling explanation of their main contributions, particularly addressing:\n1. Why DRL is specifically suited for microscopy image analysis.\n2. What novel challenges or requirements this domain brings to DRL.\n3. How their approach advances the theoretical or methodological aspects of DRL beyond simple application." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The manuscript is well-written and easy to follow, with clear organization and logical flow.\n2. The application of weakly-supervised DRL to real-world image analysis represents a promising and valuable research direction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the interpretability challenge in microscopy image analysis using deep learning approaches. The authors propose a novel methodology based on Disentangled Representation Learning (DRL) to enhance model interpretability while maintaining classification performance. 
The approach leverages transfer learning from synthetic features and is validated across three diverse microscopy domains: plankton, yeast vacuoles, and human cells. The growing volume of microscopy images due to technological advances has necessitated automated analysis methods, yet interpretability remains crucial for practical applications in fields such as diagnosis and environmental monitoring. The authors demonstrate that their DRL framework successfully balances the trade-off between model accuracy and interpretability in microscopy image classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The scope of this work appears too narrow, focusing solely on microscopy images. The proposed approach might be more convincing if demonstrated on natural images as well.\n2. The authors fail to adequately justify why DRL should be specifically applied to microscopy image analysis. Furthermore, they do not clearly articulate whether this specific application domain poses new challenges or requirements for DRL that could lead to innovative solutions. The authors' insights into these aspects are not well presented.\n3. Given the lack of compelling insights, this work appears to be primarily an application of existing DRL methods without significant methodological or theoretical innovation. This level of contribution may not align with ICLR's focus on novel methodological and theoretical advances in machine learning.\n4. The paper appears to lack comparative experiments. While the disentanglement scores might be novel evaluation metrics, the absence of comparisons for classification performance is particularly concerning and unreasonable."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Most of my questions are related to major weaknesses.\n\nWhat specific contributions does this paper make beyond applying DRL to microscopy images? It would be helpful if the authors could clarify what is novel in their approach and how it advances the state-of-the-art in microscopy image analysis beyond existing techniques.\n\nWhat are alternative approaches the authors could have used for comparison? \n\nMetric explanations (e.g., OMES, MIG, DCI and balanced accuracy) are mostly missing. Could the authors clarify these metrics, ideally using mathematical notation and provide justification for using them?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper explores the application of an existing DRL framework to the specific domain of microscopy images. This idea is interesting as it shows a potential pathway for combining DRL with microscopy image analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a Disentangled Representation Learning (DRL) approach to improve interpretability in microscopy image classification. 
By pre-training on synthetic data (Texture-dSprite) to capture factors of variation, the authors apply these learned representations to real-world microscopy datasets (Plankton Lensless, Plankton WHOI15, Budding Yeast Vacuoles, and Sipakmed Human Cells). Their method aims to support model interpretability while achieving high classification performance with gradient-boosted trees and MLPs for downstream analysis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "A significant weakness, it seems, is the absence of a comparison with other similar methods. The paper presents only one framework and does not discuss or evaluate alternative approaches, which weakens the case for this framework’s efficacy or advantage over existing methods.\n\nThe contributions of the paper in terms of novelty are unclear. The study applies an existing DRL approach to a new domain but does not appear to introduce any fundamentally new concepts, techniques, or substantial modifications to existing methods. The only apparent novelty - the application of DRL to microscopy imaging - does not suffice. This limits the potential impact and originality of the work.\n\nThe paper’s presentation suffers from numerous issues that impede readability and clarity:\n1. There are instances of informal language, such as the use of “thanks.”\n2. The text contains multiple errors at the word, sentence, and structural levels, which disrupts the reading experience. Sections like Section 2.2 (“Disentanglement Evaluation”) resemble output generated by ChatGPT and lack rigorous academic polish.\n3. Figures appear low-resolution, with inadequate explanations in captions. Captions should be comprehensive and self-contained, but here, they lack essential details, e.g., explanations of metrics like OMES and balanced accuracy.\n4. The use of multiple highlight types (underscoring, bold, italics) is excessive and distracting.
Minimal highlighting would improve readability and make essential points more accessible.\n5. Important metrics are either not explained in the text or lack adequate definitions in the captions, leaving readers uncertain of their meaning. This omission impacts the study’s reproducibility and overall clarity." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In Fig. 1, what exactly is fine-tuned, and how?\n\n- How is an RGB image directly fed to the classifier (GBT and MLP)?\n\n- In line 322, the authors state \"We can observe that after finetuning, it may change, nicely adapting to the specificity of the dataset, where scale and texture are more relevant.\" It is unclear to me why scale and texture are more relevant than \"scale and shape\", as is the case before fine-tuning.\n\n- The proposed evaluation metrics (e.g., OMES) are unclear.\n\n- The authors do not compare their method to any other work; having a solid baseline is important.\n\n- The used classifiers (GBT and MLP) are very simple; more sophisticated ones should be used (CNN-based, for example).\n\n- Inputting an RGB image to the classifier is unclear, as it is well-established that deep features (in this case the features extracted by DINO) have more important patterns.\n\n- To assess the quality of the representation, the authors relied on classification.
While a good representation can lead to better accuracy, a good representation does not necessarily mean a disentangled one.\n\n- Using accuracy alone to measure classification performance is not enough.\n\n- The figures are small and the captions are not clear enough.\n\n- In Figure 6, the OMES indicates that the proposed method does not lead to better disentanglement." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The paper addresses a significant challenge in representation learning: disentanglement, which plays a pivotal role in improving the interpretability of classifiers, particularly in the context of biological images." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose to use a disentangled representation learning framework to enhance model interpretability for microscopy image classification. The method is based on fine-tuning a model trained on synthetic images, and the proposed framework is tested on several microscopy image datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed approach is not well explained. Indeed, the method proposed by the authors learns a disentangled model with weak supervision using Ada-GVAE on a synthetic dataset and then fine-tunes it on microscopy datasets. However, it is unclear why Ada-GVAE is chosen and how the model is fine-tuned.\n\n- The difference between the proposed method and Dapueto et al. is unclear.\n\n- The authors claim that the disentanglement learned from synthetic images can be transferred to microscopy images; such a claim should be theoretically and empirically evidenced.\n\n- The paper is not well organized; for instance, a \"Related Work\" section should be added.
Two different sections (2.2 and 3.5) have the same title \"DISENTANGLEMENT EVALUATION\"." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We transfer a disentangled representation from a Source dataset to real-world Target dataset reaching a compromise between classification accuracy of the downstream task and interpretability in microscopy images" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024disentangled,\ntitle={Disentangled representations of microscopy images},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0iAZYF9hrl},\nnote={under review}\n}" }, "abstract": { "value": "Microscopy image analysis is fundamental for different applications, from diagnosis to synthetic engineering and environmental monitoring. In the last few years, the number of available images has been constantly growing, thanks to technological advancements, pushing toward the development of automatic image analysis methods based on deep learning. Although deep neural networks have demonstrated great performance in this field, interpretability — an essential requirement for microscopy image analysis — remains an open challenge. \nThis work proposes a Disentangled Representation Learning (DRL) methodology to enhance model interpretability for microscopy image classification. \nExploiting benchmark datasets coming from three different microscopic image domains, including plankton, yeast vacuoles, and human cells, we show how a DRL framework, based on transfer learning from synthetic features, can provide a good trade-off between accuracy and interpretability in this domain." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Microscopy images", "Disentangled representations", "Transfer learning", "Interpretability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/89f2963d7234a9b02fc373335d78dd67c3e84695.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/423740d03cb2fefca8e8ba22ac7026fb26e4a65c.zip" }, "title": { "value": "Disentangled representations of microscopy images" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0iXfS9Smqf
Learning through experience:Episodic memory representation for cognitive agents
main
Active
Episodic Memory;Bio inspired Robot learning;incremental Memory structures
transfer learning, meta learning, and lifelong learning
3;3;3;5
3;4;4;4
3;3;2;3
2;3;3;2
2;1;2;2
3.5
3.75
2.75
2.5
1.75
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Statistical Estimation Methods for Timestamps: Which specific statistical methods are used for estimating timestamps in the absence of explicit temporal markers?\n\nUnion of Tacoustic and Tvoiced: In combining Tacoustic and Tvoiced, what is the methodology for performing this union? Is Tvoiced identical or related to another variable, such as St?\n\nFormat and Extraction of Visual Details: What is the format of the visual scene details (e.g., Vscene, Vplace, Vtime), and how are these extracted and integrated into the memory graph?\n\nDefining Key Events and Hierarchical Organization: How are key events identified, and what hierarchical structure is used to organize these events?\n\nRelation between Taudio and Tcombined: Is Taudio equivalent to Tcombined, or is there another relationship between these variables?\n\nTask Categories for Text Summarization: How are text summaries grouped into broader task categories (e.g., meetings, lunches)? What criteria and process are used to define these categories?\n\nSimilarity Calculation in Equation 8: Equation 8 is intended to measure similarity between an event and multiple episodes, but it isn’t clear how it accomplishes this. 
Could you clarify how this calculation works?\n\nLocation Weight Definition: How is \"location weight\" defined, and how does it differ from location similarity in the model?\n\nTemporal Parameter in Equation 14: In Equation 14, should the parameter be (t-k) instead of just t? If not, what purpose does the current form of the equation serve?\n\nMeaning of \"Agent Comprehends\": In line 274, it says the \"agent comprehends\" something. Does this imply processing by a language model, and if so, could you clarify which model is used?\n\nDefinition of the Set Du: How is the set Du defined in the context of the framework?\n\nSimilarity Function in Line 283: Which similarity function is used in line 283, and what factors are considered?\n\nRole of w and l Functions: In line 287, the w and l functions are mentioned. Could you elaborate on their roles within the memory retrieval mechanism?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I think the paper is trying to address a very important problem of how an autonomous agent can keep memorizing new experiences and then recall those flexibly based on the context, question, or query. Specifically, I see two main strengths. \nPretrained Models: The model builds on pretrained models that help with extraction of components, but this also means the system does not require pre-training on every specific scenario from scratch, which is a significant strength as it allows flexibility across contexts.\nDataset Diversity: The authors evaluated the system on multiple datasets, demonstrating a broad application range, although it should be noted that most of these datasets were developed by the authors."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Episodic Memory for Cognitive Agents (EMCA), a framework designed to support memory retention and retrieval in cognitive agents. EMCA models episodic memory using a graph-based structure that incrementally stores and organizes multimodal experiences—such as speech, vision, and non-verbal cues—without pre-training on specific scenarios. This approach enables agent to keep adding new experiences continuously from data. This supposedly allows for flexible temporal reasoning across different datasets. EMCA’s dynamic memory graph builds semantic and temporal connections, enabling context-aware retrieval and clustering of memories based on query relevance. The framework aims to improve task execution , and reasoning by recalling contextually significant past events. Empirical tests reported indicate that EMCA adapts to real-world data, demonstrating good recall in unpredictable settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite tackling an important problem, the paper suffers from serious clarity and coherence issues that obscure its contributions and weaken its scientific rigor. The presentation is fragmented, key concepts are inadequately explained, and essential technical details are missing, all of which make it challenging to assess the model’s validity and potential impact. Specifically, the weaknesses include:\n1. The authors claim EMCA encodes data in a way that resembles human memory, but there is no evidence or detailed explanation to support this claim from a neural encoding way. Such claims should be toned down. You should instead emphasize on the 'what', 'where' 'when' organisation from a psychological perspective of episodic memory. \n\n2. 
Insufficient Motivation: The introduction section does not adequately establish the necessity of this system or why it improves upon existing learning frameworks for cognitive agents. Additional motivation for the need for episodic memory for a cognitive agent would help contextualize EMCA's contributions.\n\n3. Minimal Related Work Discussion: Essentially, the model is an encode-and-retrieve model with some dynamic reorganisation. The related work section is sparse and lacks comparisons to key formal methods like Hopfield networks or other established models in episodic memory encoding and retrieval. A more rigorous comparison to established human memory models would also strengthen the paper.\n\n4. Unclear Implementation and Integration Details: Although multiple models and methods are mentioned, the paper lacks a cohesive description of how these components integrate within the system. Critical details such as model architecture, parameter settings, and processing pipelines are absent, making it difficult to assess or replicate the work.
A system architecture diagram, a table of key parameters, or pseudocode for the main processing pipeline would help.\n\nVague Statistical Estimation Methods: The paper mentions the use of statistical methods for estimating missing timestamps but does not specify which methods were used, leaving an important aspect of the framework unexplained.\n\nSurface-level Comparison with Temporal and Knowledge Graphs: The comparisons with temporal and knowledge graph structures are brief and lack depth, offering limited insight into how EMCA differs from or improves upon these existing approaches.\n\nUndefined Terminology and Variables: Certain terms and variables (e.g.,\"key events\", \"location weight,\" \"subjective temporal timescales\") are introduced without sufficient explanation or definition, reducing clarity.\n\nOverreliance on Custom Datasets: While the use of various datasets to evaluate EMCA is a strength, most of these datasets were developed by the authors, which could indicate potential biases in testing and validation.\n\nLimited Explanation of Retrieval Policy: The retrieval policy and memory clustering mechanisms, while central to EMCA’s functionality, are described only briefly. A more detailed explanation would clarify how these mechanisms adapt to different query types and scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- How was the dataset constructed? This is a major contribution that is not discussed.\n- What does a successful or unsuccessful example look like? I would recommend looking at [2] or [3] above to see how to discuss dataset creation in such a setting.\n- In Table 1, what metric is being used? It does not say in the caption or the table. It should be re-iterated in the table itself.\n- Is a forgetting mechanism necessary, or would it be more like a memory aggregation mechanism so that retrieval is still efficient?\n- How does the incremental storage/retrieval scale as the number of episodes change? This result is not displayed in the paper but I would argue is very important. If you only used say 10 episodes of EM instead of 181, is there a difference in performance? This would directly support contribution number 2 in Section 1 of your paper\n- What is \"Time complexity\" in Table 3? Time complexity of BFS is O(V+E), but here a number is used instead. Authors should use \"retrieval time\" or something similar instead of time complexity.\n- Can the authors describe the main takeaways of Section 4? I still do not understand the insights this section is supposed to provide.\n- Also, there are some questions/concerns in the weaknesses section\n\n\nOverall, I like the paper. But I think there are a lot of issues with how to content is shown to the reader that makes the paper's contributions fall flat. In its current state, I would recommend rejecting the paper, but if the authors address my concerns above, I believe I would lean more towards accept." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The Big Bang Theory and the Agent datasets seem very useful! \n- Their results indicate that their method performs better than other graph-based approaches. \n- This area is a growing field, especially as robots and agents become more capable and need better ways to scale their context. And their graph-based approach seems to be a meaningful contribution\n- Table 2 is interesting, as it showcases that some of their questions require access to vision, acoustics, and/or dialogues. This result would be better if we knew what the Big Bang Theory and Agent dataset contained." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce Episodic Memory for Cognitive Agents (EMCA) that models episodic memory based on a graph-structure. This allows them to incrementally store memories and retrieve experience. They also can cluster memories, have dynamic retrieval, and can handle temporal reasoning. Memory is structured as a graph, where each episode contains characters, temporal elements, location, and events. The edges of this graph can be temporal or semantic. They then build a retrieval system that can handle contextual, temporal, and spatial queries." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## Main weaknesses\n- After reading the full paper, I am not sure what the actual task is. What commands does the master ask? I understand the approach, but not what task it is specifically solving. Is the output the location of a specific memory? Or is it a free-form text answer?\n- In terms of the structure of the paper, I do not think that the text space used for discussing how signals are captured (section 3.1) is very useful. 
I would argue this is the case with a chunk of this paper's equations, where it simply adds space when it does not need to, and it makes the paper more difficult to read. I would recommend condensing the information and matching ICLR's 9-page recommendation as opposed to 10. Similarly, there is an excessive use of new lines at arbitrary positions.\n- There is no discussion on how the Big Bang Theory and Agent datasets were constructed. This seems like a major contribution on its own. I would recommend the authors remove much of the superfluous equations and newlines (and move that into the appendix), and put more of an emphasis on this dataset component.\n- I think I like what Section 4.1 is implying about combining a \"master's\" memory with that of an agent's, but it is not presented very clearly, and I do not see a connection with temporal graphs like the subsection title suggests\n- Nor do I see how this section is a \"theoretical comparison\"\n- \"Master\" terminology in line 346 is confusing, and should be introduced earlier in section 4.1. Also, the term \"master\" is generally frowned upon in these settings, so I would recommend a different term\n- Results are poorly presented\n\n\n## Formatting/Clarity issues:\n- Check for spaces after periods or colons throughout the paper.\n- Line spacing is odd in much of the paper\n- Figure placement should ideally be on the top or bottom of a page, not in the middle with paper text above and below the figure. This makes the paper difficult to follow\n- Figure 4, the legend has oddly shaped circles\n- Figure 5's result is good, but it should not be a line graph with x axis being method and lines being the dataset. Instead it should be datasets on the x-axis and methods on the y-axis\n- The results in section 5.1.1 are all discussed in a single paragraph. Are those all the main results?
I would recommend splitting this up into a few bolded mini-sections and showing the main takeaway of each figure along with highlights on how the method performed, possibly with qualitative results.\n- The focus of the introduction falls a bit flat. Rather than focusing on the historical definition of episodic memory, focus more on how people have been engineering and building these kinds of systems. Focus on why other systems do not work, and why yours does.\n- I would recommend larger fonts for the figures; they are difficult to read in the paper.\n## Citations\nOther relevant concurrent work on memory in robotics, some of which use graphs while others do not.\n[1] Xie, Quanting, et al. \"Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation.\" arXiv preprint arXiv:2409.18313 (2024).\n[2] Anwar, Abrar, et al. \"ReMEmbR: Building and Reasoning Over Long-Horizon Spatio-Temporal Memory for Robot Navigation.\" arXiv preprint arXiv:2409.13682 (2024).\n[3] Bärmann, Leonard, et al. \"Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience.\" arXiv preprint arXiv:2409.17702 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Will the code and datasets be made publicly available?\n2. How does the system maintain temporal consistency without explicit time markers?
Can you provide quantitative results comparing temporal reasoning accuracy with and without explicit timestamps?\n3. What are the specific model architectures and hyperparameters used?\n4. What are the computational costs for memory retrieval at different scales?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "originality: graph-based episodic memory structure, multimodal processing without pre-training, dynamic clustering\n\nquality: comprehensive testing on multiple datasets, benchmarking against existing methods, systematic component analysis in ablation study\n\nclarity: clear problem formulation, good visual aids explaining complex concepts\n\nsignificance: addresses crucial challenges in cognitive AI, social robot applications, potential impact on memory assistance systems\n\nkey innovations: \n1) removes pre-training requirements\n2) enables continuous learning\n3) provides real-time processing capability" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Episodic Memory for Cognitive Agents (EMCA), a novel framework that enables AI systems to retain and utilize past experiences through a graph-based memory structure. The key innovation lies in its ability to: 1) Process multimodal data (vision, speech, non-verbal cues) without requiring pre-training; 2) Dynamically build and update memory representations through a graph structure with semantic and temporal connections; 3) Adapt to complex environments through continuous learning from interactions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Scalability limitations: Does the graph structure grow exponentially with experiences? What are the computational costs?\n2.
More details about the Big Bang Theory dataset are needed.\n3. More implementation details about the method in section 5.0.2 should be provided.\n4. Forgetting is one of the key problems in memory systems - how does the paper assess and handle memory retention and decay?\n5. What is the real-time performance?\n6. The paper tries to claim episodic memory for agents and robots, but robotic interaction with the environment is different from and much harder than agent interaction in a virtual environment. It is better to make a clear definition and scope. For instance, in L391, \"robot's episodic memory\" should be \"agent's episodic memory\".\n7. Can the authors provide the code and dataset for evaluation?\n8. Repeated paragraph: L054-L062\n9. Subtitle formatting issue in line 480" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "When this method gets deployed, there would potentially be ethics concerns - as pointed out by the authors.\nIn the submission, it just uses datasets (established and generated from a TV series) rather than real user data, so no concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The paper seems to miss all details on how the responses are generated, yet the experiments are entirely based on evaluating the answers. Please provide a detailed explanation of your response generation process, as this is crucial for understanding and evaluating the experimental results.\n- Sect.
3.2.4: This seems to result in just 'jumping' nodes. Conserving the chronological structure is a nice property, but the real question/challenge rather is on the side of the module that determines if there is any harm in 'skipping a step', e.g. when thinking about some of the use-cases 'predict ... next activity' you mention\n- Quite a lot of unclear details\n - Eq (1) vs (2): the difference between S_T and T_voiced is unclear\n - How is T_acoustic generated\n - Sect. 3.1.2, 3.1.3, and 3.2, l. 218: at places the paper sounds like everything is represented as 'text', at others it seems to be a mixture of text and other embeddings. It would be great to explain earlier on what is stored as what\n - l. 184: \"organized by place, characters, and events\" raises the question where those come from - that is explained later in the text\n - l. 220: \"relevance of these tasks is assessed\" - again HOW?\n - l. 212: what are the implications/limitations resulting from using a simple metric like cosine similarity?\n - Eq (17) sim seems undefined\n- Fig. 2 and Sect. 4.1: I didn't get the terminology \"master\". Maybe that term can be avoided altogether (similar to https://learn.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/m/master-slave )\n- The paper comes across as unpolished\n - missing and extra spaces - e.g. in the title and abstract\n - the template uses natbib, not using correct commands for references \\citep \\citet (and resulting repeats of names) makes it painful to read\n - LaTeX has different opening and closing quotation marks\n - paragraph above \"Contributions\" is double\n - broken sentences in Sect. 5.1.2\n - a few references with incomplete info (e.g. l. 300 \"as shown in 4\", l.
191, l.468)" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper tackles an interesting problem. The architecture is described in an intuitive way and seems sound and novel. The experiments are reasonably extensive with comparisons to baselines, multiple datasets, and ablations and show very promising results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel method for storing multi-modal episodic memories. The design takes inspiration from a model of human memory. More specifically, the paper introduces a novel type of memory graph with different types of connections. The method compares favorably against baselines in experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I found the paper to have some \"false advertising\":\n - Some of the introduction, motivation, and discussion circles around 'robots', I didn't see anything specific to robots in this paper. Yes, it could be integrated into a robot, but the method would equally well work for a body cam, social agent, etc. In robotics there is quite an extensive literature on 'lifelong learning' that covers some of the same challenges: what to memorize, how to store and to retrieve, how to generalize, what to forget, etc.\n - The title says 'learning'. At least according to my definition there is no learning in the paper. It proposes a way to store information and to retrieve it, so that would correspond to 'memorization' (=rote learning), while learning implies understanding the information and being able to apply it to new situations.
The method could serve as a starting point for learning, but in the current paper that doesn't seem to be present in either the method or the experiments.\n - The introduction makes it sound like a general method. But I have a few doubts about that. There are quite a lot of design choices, and some of the choices in Sect. 3 seem rather specific. In the end the method is evaluated with a question answering task. What would need to change for a different task, say for a robot learning low-level movement control? Please clarify the generalizability of your method. Specifically, it would be nice if you could discuss how your approach might be adapted for different tasks (such as robot movement control), or explain any limitations in its applicability to other domains.\n- Section 3 describes the method. It remains however very much on the level of HOW, rather than providing many insights in the WHY (reasoning behind design choices, consequences of design choices) and it isn't always clear what is a core part of the method and what is an implementation detail. Please provide more explanation for the key design decisions, discussing the rationale behind these choices and their potential implications. Additionally, please clearly distinguish between core methodological components and implementation details.\n- Fig. 5 isn't very convincing (except for Arigraph)\n- Memory requirement would be another interesting metric for comparing methods.\n- The paper mentions the missing forgetting mechanism as a major limitation. Related to that it also leaves the question on 'what to store' unanswered. It reads like everything is stored, even if it is effectively a duplicate. I believe the real challenging question that needs to be solved is the memory management: what to store, what to consolidate/merge, what to forget, etc.
Without those a memory representation is of limited value, and it remains unclear to me how suitable the proposed architecture is for extending it in that way - or if we would be better off redesigning it from scratch.\n- The method relies on various models to extract features and to summarize things before storing them. I don't believe there is a 'one size fits all approach' but how to best do that depends on the retrieval task and the type of downstream tasks you have. The paper does not provide any indications on how to deal with that.\n- There are a whole lot of design choices in this paper, an ablation on only the modalities and search methods seems a bit limited. There also is no sensitivity analysis for e.g. the clustering thresholds\n- This seems to be a rather complex system, which makes reproducing results very difficult, with probably quite a lot of additional implementation and setting details. I couldn't find any promises to release code (or at least a detailed appendix), which would have alleviated this concern." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning through experience:Episodic memory representation for cognitive agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0iXfS9Smqf},\nnote={under review}\n}" }, "abstract": { "value": "As the demand for intelligent robots and cognitive agents rises, the ability to retain and utilize past experiences through episodic memory has become crucial, especially for social companion robots that rely on previous interactions for task execution. To address this, we introduce Episodic Memory for Cognitive Agents (EMCA), a novel framework that advances knowledge representation by integrating real-world interactions.
EMCA enables agents to adapt to complex environments by learning from tasks, interacting with humans, and processing multimodal data—such as speech, vision, and non-verbal cues—without pre-training on specific scenarios.\nEMCA models episodic memory through a graph-based structure, allowing for incremental storage and retrieval of experiences. Each interaction or event enriches the memory graph, supporting continuous learning and adaptation without extensive retraining. This human-like memory formation optimizes the agent’s ability to retrieve relevant information for tasks like localization, planning, and reasoning based on prior experiences. Unlike conventional models relying on temporal markers or recurrent patterns, EMCA encodes data like human memory, allowing reasoning across diverse scenarios regardless of temporal patterns. The framework dynamically builds a memory graph with semantic and temporal connections based on the agent’s experiences, promoting flexible temporal reasoning. It also introduces mechanisms for clustering new memories and a dynamic retrieval policy that adjusts based on context or query type, ensuring robustness even in unpredictable scenarios. Empirical tests show EMCA adapts effectively to real-world data, offering reliability and flexibility in dynamic environments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Episodic Memory", "Bio inspired Robot learning", "incremental Memory structures" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/dc76713003b768e3eb75d5d5ff0cf6fbaeb978ef.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning through experience:Episodic memory representation for cognitive agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0iscEAo2xB
Comparing Targeting Strategies for Maximizing Social Welfare with Limited Resources
main
Active
social welfare;causality;treatment;treatment effect;targeting;risk;policymaking
alignment, fairness, safety, privacy, and societal considerations
5;5;6;10
3;4;4;4
2;3;3;4
3;3;2;4
3;2;3;4
6.5
3.75
3
3
3
0.420084
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Visually, it seems difficult to reconcile the large difference between, say, Figure 2 top left panel and Figure 1 top left panel. It seems that on the TUP dataset, high risk is close to maximizing treatment effect, the highest values being around percentile 80. However, Figure 2 shows utility 15000 vs 5000 for treatment effect based vs risk based. Can you explain?\n\nSimilarly, treatment effects for NSW seem to be largely constant around 1000, but Figure 2, second row left panel shows a massive advantage for treatment effect based targeting. This seems to be about a factor 7 (1.75 vs 0.25). Where do these large effect differences come from given that the treatment effect curve is essentially constant? None of the treatment effects seem to differ by more than a factor 2.\n\nI tried to figure this out by looking through the code, but I couldn't find code that generated Figure 2. So, I don't actually know where it came from and what it shows. Maybe I missed it. On that note, it would be very helpful to clean up and document the code for the final version. As is, it's very hard to follow."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1)\n\nThe comparison of these two targeting strategies is an important problem. I'm glad to see that the authors study this question. Despite its significance, there hasn't been much reliable insight so far. I'd love to see more work in this direction! I'm weighing this strongly in my evaluation.\n\n(2)\n\nThe paper is very clearly written. The authors narrate a compelling story about the advantages of targeting based on treatment effect estimates, even if they are biased." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors compare the utility of targeting interventions based on estimated treatment effects with the utility of targeting based on predicted risk. The former is the method of choice from a utilitarian perspective. But practitioners and policy makers often choose the latter due to its simplicity and in cases where the normative goal is to assist those in greatest need." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1)\n\nAs compelling as the story is, I find the results less than conclusive. Looking at Figure 1, it looks like the confidence intervals around treatment effects are generally strongly overlapping across the entire x-axis of baseline risk. The one exception is the STAR dataset, where the lowest and highest point estimate have barely non-overlapping confidence intervals. As a result, another story consistent with the data is that effects are generally not so heterogeneous as conventional wisdom from the econometrics literature has it.\n\n\n(2)\n\nThe results in Figure 1 are actually a fair bit more favorable towards risk-based targeting than the introduction of the paper had me believe.
Treatment effects generally increase with risk. Targeting the 80th to 90th percentile of risk generally seems to capture high treatment effects across all datasets. So, another story could be that we should exclude the most extreme values of risk from targeting, but other than that risk-based targeting sort of works.\n\n\n(3)\n\nI found it rather confusing to have unnormalized utility values on the y-axis. After all the sample size is rather arbitrary and does not correspond to the population-level utility obtained if the policy maker were to implement the given approach. Along the same lines, I found it difficult to reconcile Figure 1 and Figure 2. See question below.\n\n\n(4)\n\nIt would've been great to include datasets with real world confounding rather than the simulated confounding. My experience is that existing CATE estimation methods don't do very well in non-RCT settings. Might there be an advantage to risk-based targeting in non-RCT data?\n\n\nSuggestion:\n\nIt seems to me that the story is much less certain than the abstract and introduction make it sound. I would therefore appreciate it if you could indicate a greater level of epistemic uncertainty in the writing throughout. I don't think this would hurt the paper at all. As is, though, I'd worry that your writing suggests the question is essentially closed conclusively which would actually discourage additional work in this direction." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Modeling Real-World Confounding: Could the authors expand on how their confounding\napproach aligns with real-world biases encountered in observational data? A discussion on potential limitations in modeling confounding factors might aid readers in interpreting results for specific applications.\n\nEthical Implications: How might the authors' conclusions address ethical concerns,\nparticularly in terms of balancing fairness (people at the most risk) with effectiveness\nwhen treatment effect targeting benefits some groups more than others? Because the paper concludes that a treatment-effect-based method is preferable to a risk-based method, it raises\nthe natural question of whether individuals at the highest risk should be prioritized, or if those with the greatest\npotential treatment outcome should be targeted instead.\n\nData Quality: Could the authors provide more information on the reliability and source of the\nAcupuncture Dataset?\n\nConfidence Interval Estimation: In Equation (5), a biased estimator is used. Could the authors\njustify why this choice was made? Was it primarily for computational efficiency, or are there\nother reasons for using a biased estimator in this context?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Scope: The work’s target is evident in its cross-domain empirical evaluation, making it applicable to multiple policy\nareas.
The paper introduces innovative use of biased treatment effect estimates to assess targeting efficacy.\nQuality: Methodologically, the paper is solid, employing credible RCT data and a robust approach to measuring\ntreatment effect heterogeneity. The use of doubly robust estimation and varying social welfare functions adds\ndepth to the analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores targeting strategies for allocating limited resources to maximize social welfare in areas like\nhuman services, healthcare, and education. Specifically, it compares risk-based targeting, which prioritizes\nhigh-risk individuals, with treatment-effect-based targeting, which uses machine learning models to estimate who\nmight benefit the most from interventions. Using five real-world randomized controlled trials (RCTs) across diverse\ndomains, the paper concludes that even biased estimates of treatment effects generally outperform risk-based\ntargeting. This finding suggests that, in addition to the widespread reliance on risk-based approaches, policymakers\ncould incorporate treatment effect estimation when feasible." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Grammatical errors: there are notable grammatical errors in spelling and following the formatting of ICLR\ninstructions.\n\nScope of Treatment Effect Estimates: The reliance on simulated confounding could be more thoroughly justified;\nreal-world application requires consideration of domain-specific biases that could vary across different policy areas.\n\nPotential Ethical Implications: Since the work could influence resource allocation in sensitive domains, further\ndiscussion on ethical considerations regarding inequality and bias would strengthen its impact and application."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Feel free to further discuss the first two weaknesses mentioned above.\n\nI have seen papers, such as [1], that discuss the complexities of estimating allocation welfare under budget constraints. The issue is that these constraints introduce dependencies between individual estimations, complicating the evaluation process. How is this relevant to your setting?\n\nShouldn't the results also depend on how many individuals are treated in each scenario? My intuition (supported by studies like [2]) suggests that the relative budget plays a significant role in determining which type of allocation mechanism is optimal.\n\nSince you studied a wide range of settings, did you find any recommendations that vary across these contexts? For instance, are there conditions under which risk-based targeting performs better?\n\nCould you elaborate on the choice of outcome of interest in the TUP dataset? It seems there’s an implicit assumption that individuals who show a larger increase in expenditure are more deserving of intervention. Why should this be the case? In contrast, a risk-based approach might assume that those who would have lower expenditures in the absence of interventions are more deserving. 
Could this actually be a more appropriate assumption?\n\n[1] Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation\n[2] Allocation Requires Prediction Only if Inequality Is Low" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This is an important issue, and I’ve personally never found a clear guideline on how best to approach it. I found the study original and of good quality.\n\nThe paper does a commendable job of incorporating datasets from diverse domains. \n\nIt also simulates the impact of unobserved confounders and considers the possibility of policymaker bias toward risk-based targeting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper compares ''risk-based targeting'' and ''treatment effect-based targeting'' across five datasets from RCTs in different domains. In risk-based targeting, the planner targets individuals based on their baseline risk, which is the expected outcome in the absence of treatment. This approach, commonly used by practitioners, does not account for causal effects. In contrast, treatment effect-based targeting involves first estimating the Conditional Average Treatment Effect (CATE) and then prioritizing individuals accordingly, often with the help of machine learning. A key concern is that these methods may introduce bias in the presence of unobserved confounding. The paper aims to provide empirically grounded guidance for navigating this tradeoff.\n\nFirst, the paper investigates the extent to which baseline risk serves as an effective proxy for targeting. To do this, treatment effects are estimated at different levels of baseline risk, revealing a surprisingly complex and not necessarily monotonic relationship between treatment effect and baseline risk. 
Second, the paper examines the potential additional gains from targeting based on treatment effect. Both utilitarian and Nash welfare are used to compare these mechanisms, showing that targeting based on estimated treatment effect can be up to three times more effective in nearly all cases. This remains true even when confounding is introduced into the CATE estimation and when the policymaker's preference for risk-based targeting is encoded in the social welfare function." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concern is the reliability of comparing the estimated welfare from risk-based versus treatment effect-based targeting. The issue with the latter approach, particularly in resource-scarce settings, is that it first estimates CATE, then selects individuals with the highest estimated CATE given the available budget, and finally estimates the welfare of treating these individuals using the same estimated CATE. Even if the CATE is unbiased, this procedure can lead to severely biased welfare estimates in resource-constrained settings.\n\nIn Figure 1, the paper provides treatment effect estimates for each baseline risk percentile, along with confidence intervals. What exactly are we looking for here? If the goal is to rule out a monotonic relationship between baseline risk and treatment effect, none of the figures offer statistically significant evidence to support this.\n\nI also think the results should intuitively depend on the budget but this is not discussed in the paper. For instance, if the budget is very small, treatment effect-based allocation can be even more effective in exploiting heterogeneity. \n\nOverall, the writing is strong. 
I noticed the following minor typos: page 2 (potentially-based -> potentially-biased), page 4 (policy -> policy), page 5, second equation, page 7, Equation 4.5 is referenced which does not exist" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Fig. 2 is great, almost telling the story of the paper without the audience needing to read anything else. What would make it even better is replacing \"Percentage of data removed\" with something like \"Higher values mean more confounding in CATE estimates\". Currently readers have to read the methodology to understand the x axis, but a better label would mean they could understand the x axis without knowing exactly how the confounding was introduced, and read the methodology if they wanted more details." }, "rating": { "value": 10 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This is a simple, well-executed paper making a very important point. As the authors note, most algorithmic decision making approaches are based on risk or some manipulation of risk in pursuit of fairness goals. 
This paper demonstrates the problem with that approach across a wide range of interventions.\n\nThe authors anticipate the key criticisms of the policy based approach (that RCT data is hard to find and so CATEs may be biased, and that equity preferences might provide a non-utilitarian reason to prefer risk-based targeting), and provide compelling evidence that even under significant confounding or equity preferences one should still prefer allocations based on CATEs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies a common risk-based approach to treatment allocation applied to a diverse range of treatments with randomized controlled trial data available. The authors find that allocating treatment according to risk (ie, bad outcomes under the no-treat baseline) produces worse outcomes than allocating according to conditional average treatment effect estimates, even when CATEs are biased or the decision maker has a preference for treating high-risk individuals." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weakest part of this paper is probably the discussion of Fig 1. The authors cite a \"unique trend for each dataset\" (in CATE conditional on risk score) as evidence that the risk-based policy is flawed. Visually, to me, the trends look pretty similar: they look like there is basically no relationship between CATE and risk for most RCTs studied. This is still evidence for the risk-based policy being flawed (so it doesn't invalidate the claim being made), but I think it's a more accurate way to describe the results. I think Fig 2 is more compelling than Fig 1 anyway, so it probably make sense to lead with that result for the most impactful paper." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024comparing,\ntitle={Comparing Targeting Strategies for Maximizing Social Welfare with Limited Resources},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0iscEAo2xB},\nnote={under review}\n}" }, "abstract": { "value": "Machine learning is increasingly used to select which individuals receive limited resource interventions in domains such as human services, education, development, and more. However, it is often not apparent what the right quantity is for models to predict. In particular, policymakers rarely have access to data from a randomized controlled trial (RCT) that would enable accurate estimates of treatment effects – which individuals would benefit more from the intervention. Observational data is more likely to be available, creating a substantial risk of bias in treatment effect estimates. Practitioners instead commonly use a technique termed “risk-based targeting” where the model is just used to predict each individual’s status quo outcome (an easier, non-causal task). Those with higher predicted risk are offered treatment. There is currently almost no empirical evidence to inform which choices lead to the most effective machine learning-informed targeting strategies in social domains. In this work, we use data from 5 real-world RCTs in a variety of domains to empirically assess such choices. We find that risk-based targeting is almost always inferior to targeting based on even biased estimates of treatment effects. Moreover, these results hold even when the policymaker has strong normative preferences for assisting higher-risk individuals.
Our results imply that, despite the widespread use of risk prediction models in applied settings, practitioners may be better off incorporating even weak evidence about heterogeneous causal effects to inform targeting." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "social welfare", "causality", "treatment", "treatment effect", "targeting", "risk", "policymaking" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/dfa365b0c13e73fc971d6779370139dadc5f42f3.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b27328a21ff613c511371761b4e0a16628cb18cd.zip" }, "title": { "value": "Comparing Targeting Strategies for Maximizing Social Welfare with Limited Resources" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0jJ94VVgzi
Criteria and Bias of Parameterized Linear Regression under Edge of Stability Regime
main
Active
Edge of Stability;gradient descent;implicit bias
optimization
5;5;5;8
2;4;3;2
3;3;3;4
3;2;3;3
3;3;3;4
5.75
2.75
3.25
2.75
3.25
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- How does the level of overparameterization affect the EoS phenomenon? This might be an easy extension of Figure 5, by checking if increasing the level of overparameterization, i.e., the ratio $\\frac{d}{n}$, changes the EoS phenomenon. Figure 5 contains $\\frac{d}{n} = 2, 1$, but another experiment on a larger value of $\\frac{d}{n}$ might show if large overparameterization helps in EoS. Note that this is not strictly required but might be interesting." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- **Novel insights**: There are several novel and non-trivial insights -- i) quadratic losses can lead to EoS, ii) the diagonal NN model fulfils this requirement, iii) EoS can have regimes with oscillations with increasing magnitude but still converge later, iv) overparameterization might be necessary for EoS on quadratic losses. \n\n- **EoS on diagonal neural networks**: Diagonal neural networks serve as a simple to analyze but still expressive model. 
Using these models has 3 important advantages -- i) as these are real networks, their claimed phenomenon occurs not just on some theoretically well-crafted model, ii) diagonal NNs remain a good testbed for theoretical analysis of complicated deep learning phenomena, iii) they have provided a proof for EoS on diagonal neural networks, which was missing from existing works (Even et al 2023), and can now be used to verify the claimed empirical phenomenon (Even et al 2023). Note that the proof of EoS on diagonal neural networks is highly non-trivial, especially for the $\\mu\\eta > 1$ case.\n\n\n- **Presentation**: The paper is easy to read in spite of the heavy notation. The key insights are clearly explained and the figures are very helpful in understanding them. Figure 6, in particular, is a good example of intuitively explaining the proof sketch.\n\n\n\n\n**References**--\n- (Even et al 2023) (S)GD over Diagonal Linear Networks: Implicit Bias, Large Stepsizes and Edge of Stability. NeurIPS." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors analyze the Edge-of-Stability (EoS) phenomenon for gradient descent on linear regression with the loss function $\\ell(\\langle x, \\beta\\rangle - y)$, where $(x, y)$ is the datapoint with $x\\in \\mathbb{R}^d$ and $y\\in \\mathbb{R}$. The EoS phenomenon is observed when GD is run with a stepsize $\\eta > \\frac{2}{L}$, where $L$ is the smoothness constant of the loss. In the EoS regime, GD oscillates rapidly but still converges to the minima, under certain conditions.\n\nExisting works (Ma et al 2022, Ahn et al 2022, Song & Yun 2023) show that for sub-quadratic $\\ell$, GD can enter the EoS regime. This paper shows that for a particular quadratic parameterization, namely diagonal neural networks, GD can enter the EoS regime even for quadratic $\\ell$. 
The parameterization is $\\beta = \\beta_{w} = w_{+}^2 - w_{-}^2$ and $w= [w_{+}^\\top, w_{-}^\\top]^\\top$, with gradient updates on $w$.\n\n\nFrom Claim 1, for quadratic $\\ell(s) = s^2/4$, they obtain EoS for GD under a single-sample regime for $d\\geq 2, y\\neq 0$ and $x$ non-degenerate. Further, for $d=2, x= (1,x')$ and sparse realizable model $\\beta^\\star = (\\mu, 0), y= \\mu$, with initialization scale $\\alpha$, they obtain two separate regimes even for EoS. \n\nIn the first regime, from Theorem 1, for $\\mu \\eta <1$ and constant $\\alpha$, GD results in both the traditional Gradient Flow regime (GF), without oscillations, and the EoS regime. Here, EoS occurs with damped oscillations. Further, the final solution of GD has generalization error dependent on initialization $\\alpha$.\n\nIn the second regime, from Theorem 2, $\\eta \\mu \\in (1,2)$, with initialization, $x'$, and generalization error dependent on $\\eta\\mu$, GD in EoS might initially have diverging amplitude of oscillations, however, it eventually dies down and after a point converges at a linear rate.\n\n\nEmpirically, the authors show that their model requires overparameterization, as for $d=n$, EoS doesn't occur but for $d>n$, it does. For their case of single sample $n=1$, justifying their choice of $d=2$. \n\n\n**References**--\n- (Ma et al 2022) Beyond the Quadratic Approximation: the Multiscale Structure of Neural Network Loss Landscapes. Arxiv.\n- (Ahn et al 2023) Learning threshold neurons via the “edge of stability”. NeurIPS.\n- (Song & Yun 2023) Trajectory Alignment: Understanding the Edge of\nStability Phenomenon via Bifurcation Theory. NeurIPS." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Is subquadratic growth \"necessary\" or \"sufficient\" for observing EoS**? It might be beneficial to state the exact reference for the subquadratic condition. 
Note that the subquadratic condition was introduced in (Ma et al 2022), which the authors have not mentioned. Further, I'm not sure if (Ahn et al 2023) actually state that subquadratic growth is \"necessary\" for EoS. Assumptions A2 and A3 in (Ahn et al 2023) show that subquadratic growth is sufficient for EoS. Similarly, the results in Section 4 of (Ma et al 2022), and assumptions 2.4 and 4.2 in (Song & Yun, 2023) are sufficient for EoS. If I'm missing some details and the subquadratic condition is indeed necessary for EoS, could the authors specify the exact theorem, assumption, or argument for this?\n\n- **How large is $\\mathfrak{t} - t_0$**? In Theorem 2, there are $3$ phases for GD. In the first phase, which lasts until $t_0$, the oscillations have not started. From Lemma 7, $t_0 \\geq \\Omega_{\\mu, \\eta}(\\log(1/\\alpha^2))$, but only for $\\mu\\eta \\in (0,2)$, which includes $\\mu \\in (1, \\frac{3\\sqrt{2} - 2}{2})$. In the second phase, which lasts from $t_0$ to $\\mathfrak{t}$, the oscillations finally start decreasing in magnitude. Lemmas 14 and 15 establish that there exists such a $\\mathfrak{t}$, but not how large it is. As we see linear convergence after $\\mathfrak{t}$, how long we need to wait for it becomes an important question. If the authors cannot establish it theoretically, they might argue empirically that $\\mathfrak{t}$ is not very large.\n\n- Typo in Line 1537: $|\\lambda_{1,2}| < 1$.\n\n**References** --\n- (Ma et al 2022) Beyond the Quadratic Approximation: the Multiscale Structure of Neural Network Loss Landscapes. Arxiv.\n- (Ahn et al 2023) Learning threshold neurons via the “edge of stability”. NeurIPS.\n- (Song & Yun 2023) Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory. NeurIPS." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. For convergence in the regime $\\mu\\eta<1$, the current analysis assumes period-2 flip sign in residual, and the result essentially states ${\\ell(w_{2t})}_{t\\geq 0}$ converges. I am wondering if the analysis can be extended to deal with more general oscillations without specific periods, as they are closer to the EoS phenomenon in practice [1]. \n\n2. The paper assumes a specific scaling initialization (lines 218 and 238) for subsequent analysis and the authors claim this initialization is to align with the literature on diagonal linear networks. Regarding this initialization, I have a few questions: \n(a) Why in the theory $w_+$ and $w_-$ are of the same scale while in experiment $w_+$ has larger scale than $w_-$? (b) How important is this initialization for ensuring convergence and EoS? (c) How easy is the analysis to be extended to more general or random (e.g. Gaussian) initializations? Just for quick comparison, some existing works can handle general initializations as long as the initial weights satisfy certain conditions [2,3] while they mainly focus on the 1-d case. \n\n3. The result in the $\\eta\\mu>1$ regime (Figure 4, left) suggests that smaller step size allows smaller generalization errors. 
This seems to contradict a common belief that a large learning rate is beneficial to generalization as it forces the minimizer to be in a flat region. Could the authors provide some insights about this difference?\n\n[1] Cohen, Jeremy M., Simran Kaur, Yuanzhi Li, J. Zico Kolter, and Ameet Talwalkar. \"Gradient descent on neural networks typically occurs at the edge of stability.\" arXiv preprint arXiv:2103.00065 (2021).\n\n[2] Ahn, Kwangjun, Sébastien Bubeck, Sinho Chewi, Yin Tat Lee, Felipe Suarez, and Yi Zhang. \"Learning threshold neurons via edge of stability.\" Advances in Neural Information Processing Systems (2023).\n\n[3] Wang, Yuqing, Zhenghao Xu, Tuo Zhao, and Molei Tao. \"Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult.\" arXiv preprint arXiv:2310.17087 (2023)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is overall well-written and easy to follow. It introduces the settings clearly and provides sufficient illustrations to help present results in different conditions/regimes. The main findings are organized clearly. The proof overview is intuitive for grasping the essence of the proof technique. \n\n2. The paper studies implicit bias in the EoS regime for diagonal linear networks in multi-dimensions, which is a significant step forward as most of the prior works on EoS only deal with minimalist examples where the data is assumed to be 1-d, and it is interesting to see that the empirical occurrence of EoS depends on the data dimension (non-degeneracy). Moreover, the diagonal linear network setting is closely related to the GD implicit bias literature, so it can potentially be connected to wider works." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the implicit bias of GD on quadratic loss and linear models with diagonal linear network parameterization under the EoS regime. The paper first empirically shows that when data is in multi-dimension ($d\\geq 2$), EoS can occur for quadratic loss, while prior works on EoS suggest that subquadratic loss is necessary for EoS to happen. The experiments show that different choices of step size lead to different oscillation types, from GF regime to EoS regime to chaos to divergence. The paper then theoretically studies the parameter convergence (directly related to generalization) in a sparse solution 2-D diagonal linear network setting and provides non-asymptotic rates in various settings. The results show that in the EoS regime when the step size is not too large ($\\eta\\mu<1$), smaller initialization ($\\alpha$) yields a better generalization, while when step size is large ($\\eta\\mu>1$) there will be an error floor that cannot vanish as $\\alpha\\to 0$." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My major concern is regarding the \"EoS regime\". In the paper, there is no rigorous definition of the EoS regime. Instead, a flip sign condition $r_tr_{t+1}<0$ is used throughout the theoretical statements. However, I doubt whether the sign flips in residual indeed correspond to the commonly referred EoS phenomenon ($\\eta L_t>2$ and oscillations in loss happen along the trajectory). For example, if we minimize $f(x)=\\frac{1}{2}x^2$ with GD, the sharpness is $1$ and the residual $r_t=x_t$ is flipping its sign if we choose step size $\\eta\\in(1,2)$, but the loss is actually monotonically decreasing and the sharpness is always below $2/\\eta$. 
\nIn the proof overview (Section 5), it is discussed that $|r_{t+2}|\leq (1-\alpha)^2|r_t|$ is possible, but there is no statement comparing $|r_{t+1}|$ and $|r_t|$, and there is no statement on the relationship between step size $\eta$ and sharpness $L$, so we are not sure if EoS is happening or not. In my understanding, it might be the case that only the $\eta\mu>1$ case corresponds to the EoS that people refer to. \n\nMinor typos (not exactly weaknesses): \n\n1. In Figure 4 the x-axis has no label, which I guess is $\alpha$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The authors argued that the model considered in this paper is the depth-2 diagonal linear network. Is it possible to extend the theoretical analysis to models with higher depths?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "In this paper, the authors consider running GD with a large constant step-size to find linear interpolators that admit quadratic parameterization for the one-sample linear regression task. They theoretically proved the one-sample case and extended the one-sample results by empirically finding conditions in the more general n-sample cases. 
These theoretical analyses and empirical results are presented with a clear structure." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors consider the task of finding interpolators for linear regression with quadratic parameterization and study the convergence of constant step-size GD under the large step-size regime. They focus on the non-trivial question of whether a quadratic loss can trigger the Edge of Stability phenomenon. The authors show through both empirical and theoretical aspects that, when certain conditions are satisfied, EoS indeed occurs given a quadratic loss." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The major weakness of this submission is that the authors only provide the theoretical analysis and mathematical proof for the one-sample case. The empirical results from the numerical experiments in Section 3 show the convergence of GD under the EoS regime when the loss function is quadratic. The authors only present the theorems characterizing the convergence of GD when the model considered has dimension $d = 2$. The authors should extend this theoretical analysis to higher-dimensional cases. The one-sample analysis and two-dimensional proof are not enough to explain the empirical results from the numerical experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What is the insight behind selecting the specific parameterization $\\beta = w_+^2-w_-^2$?\n\nDoes the theoretical analysis hold in the more general $n$-sample case, or what are the difficulties in this analysis?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The work seems to be solid.\n\nThe paper is written very clearly.\n\nThe paper focuses on understanding the convergence of the GD algorithm, breaking through the limitations of traditional smoothness analysis (the step-size exceeds the threshold of $2/L$), and extending previous conclusions to more complex scenarios (from subquadratic growth objective functions to quadratic growth objective functions)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the Edge of Stability (EoS) phenomenon. The authors study GD and focus on a constant step-size exceeding the typical threshold of $2/L$. As their contributions, they observe that EoS occurs even when the loss $l$ is quadratic under proper conditions, while existing works require a subquadratic $l$. Due to the close relationship between the quadratic $l$ and the depth-2 diagonal linear network, the findings provide some explanations for the implicit bias of diagonal linear networks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Does the conclusion of the more general $n$-sample case in the paper apply to SGD? 
Due to the importance of stochastic optimization, I expect to see similar conclusions under stochastic optimization as well.\n\nI am sorry I am not very familiar with this topic. I will revise my score based on the comments of other more senior reviewers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024criteria,\ntitle={Criteria and Bias of Parameterized Linear Regression under Edge of Stability Regime},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0jJ94VVgzi},\nnote={under review}\n}" }, "abstract": { "value": "Classical optimization theory requires a small step-size for gradient-based methods to converge. Nevertheless, recent findings (Cohen et al., 2021) challenge the traditional idea by empirically demonstrating Gradient Descent (GD) converges even when the step-size $\\eta$ exceeds the threshold of $2/L$, where $L$ is the global smooth constant. This is usually known as the \\emph{Edge of Stability} (EoS) phenomenon. A widely held belief suggests that an objective function with subquadratic growth plays an important role in incurring EoS. In this paper, we provide a more comprehensive answer by considering the task of finding linear interpolator $\\beta \\in \\mathbb{R}^{d}$ for regression with loss function $l(\\cdot)$, where $\\beta$ admits parameterization as $\\beta = w^2_{+} - w^2_{-}$. Contrary to the previous work that suggests a subquadratic $l$ is necessary for EoS, our novel finding reveals that EoS occurs even when $l$ is quadratic under proper conditions. This argument is made rigorous by both empirical and theoretical evidence, demonstrating the GD trajectory converges to a linear interpolator in a non-asymptotic way. Moreover, the model under quadratic $l$, also known as a depth-$2$ \\emph{diagonal linear network}, remains largely unexplored under the EoS regime. 
Our analysis then sheds some new light on the implicit bias of diagonal linear networks when a larger step-size is employed, enriching the understanding of EoS on more practical models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Edge of Stability", "gradient descent", "implicit bias" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0cfb563bfb4a71e63523c2b1abec9c390548d930.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Criteria and Bias of Parameterized Linear Regression under Edge of Stability Regime" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0jUeqlQxMi
Open Vocabulary Panoptic Segmentation With Retrieval Augmentation
main
Active
Panoptic Segmentation;Open Vocabulary;Retrieval Augmentation
applications to computer vision, audio, language, and other modalities
3;3;5;5
4;4;5;3
2;1;2;2
2;2;2;2
3;2;2;2
4
4
1.75
2
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The reviewer likes the integration of retrieval-based classification with CLIP-based scores to address the domain shift issues between masked images and natural images. It clearly improves the model's ability to recognize unseen classes without additional training.\n\n- The paper's approach to construct a feature database from widely available paired image-text data is interesting. This setup enables adaptability without requiring pixel-level annotations.\n\n- The paper is well-organized and well-written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an approach for open vocabulary panoptic segmentation by combining retrieval-based classification with standard image segmentation. In particular, the authors introduce a retrieval-augmented segmentation method that utilizes a database of paired image-text features. During inference to address the challenge of domain shift between masked and natural images, the model retrieves relevant features from this database using masked segment features from the input image as queries. 
This retrieval-based score is combined with scores from a vision-language model (CLIP) to enhance classification accuracy for unseen classes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The reviewer feels that the retrieval-based classification relies heavily on the quality and diversity of the feature database constructed from paired image-text data. If the database lacks sufficient variety or coverage, the method may struggle to classify certain unseen classes accurately, particularly in real-world scenarios with a wide range of objects.\n\n- Further, the reviewer observed that the method uses Grounding DINO and SAM for generating masks in the training-free setup. However, SAM can produce suboptimal masks without human input which may degrade segmentation accuracy. This dependence on mask quality can limit the method’s effectiveness in fully automated settings.\n\n- The authors may want to include methods such as ODISE for a more comprehensive analysis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the authors elaborate on how their method generalizes to completely unseen classes that are not represented in the feature database?\n2. As the number of classes in the feature database grows, how does the retrieval process scale in terms of computational resources and accuracy?\n3. 
Are there any plans to compare the proposed method with other leading approaches in the field to contextualize the improvements?\n4. The paper mentions that the quality of mask proposal generation is crucial. Could the authors provide more details on how variations in mask quality affect the final segmentation results?\n5. Is there potential to integrate this method with other modalities, such as depth information or video data, to further improve performance?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper introduces a creative solution to the open vocabulary panoptic segmentation problem by combining retrieval-based classification with CLIP, which is an original approach not commonly seen in the literature.\n2. The paper is well-structured, with clear explanations of the methodology." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel approach to address the challenge of segmenting arbitrary classes in images, a task known as open vocabulary panoptic segmentation. The authors propose a retrieval-augmented method that leverages a masked segment feature database constructed from image-text pairs. During inference, the system uses masked segment features from the input image to retrieve similar features and class labels from the database, combining these retrieval-based classification scores with CLIP-based scores to produce the final output. The method is evaluated on the ADE20k dataset and shows significant improvements over the baseline, particularly when fine-tuned on the COCO dataset, with absolute improvements of +4.5 PQ, +2.5 mAP, and +10.0 mIoU." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
While the paper demonstrates improvements over the baseline, it does not provide a direct comparison with other state-of-the-art methods in the field, which could provide additional context for the significance of the results.\n2. The discussion on how the proposed method generalizes to unseen classes could be expanded, as this is a critical aspect of open vocabulary segmentation.\n3. The paper could further discuss the limitations of the retrieval-augmented approach, especially regarding the reliance on the quality of the feature database and the potential scalability issues as the number of classes increases." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses. I think the current version is not ready for publication. More experiment results are expected." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Applying Retrieval Augmentation to vision tasks is a promising direction. The proposed way of constructing a database is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper enhances open-vocabulary panoptic segmentation by leveraging retrieval augmentation to address the challenges of classifying unseen objects. 
The authors propose a framework that integrates masked segment features with a retrieval-based method to improve performance for unseen classes. The model builds a feature database using paired image-text data and retrieves similar features during inference to classify masked segments. These retrieval-based scores are combined with CLIP-based scores to enhance accuracy. When applied to FC-CLIP, the proposed method demonstrates improvements in unseen classes on the ADE20k dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the method builds on FC-CLIP, the authors do not provide an introduction to FC-CLIP, which makes the paper hard to follow during reading.\n2. The feature database should be introduced prior to discussing the retrieval method to improve the flow and clarity of the paper.\n3. Since retrieval augmentation is intended to be a more general approach, the paper would benefit from presenting a more generalized framework to reflect its broader applicability.\n4. The method of constructing the feature database itself serves as a strong baseline. How does the performance of the proposed retrieval-augmentation approach compare to Grounding DINO?\n5. The paper lacks essential evaluations (the method is only evaluated on a single dataset with a single base model) and ablation studies." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "There is no ethics concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the difference between IV Classification and OOV Classification cia CLIP in cross-dataset panoptic segmentation? What is the significance of this distinction? From Figure 1, it appears that the former only differs from the latter by including a linear projection.\n2. What is the fallback dataset, and how the author build it?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper explains related concepts clearly and details the methodology comprehensively, making the overall article easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a retrieval-based method to enhance the performance of open vocabulary panoptic segmentation by constructing a feature database from paired image-text data. During inference, the model uses masked segment features from the input image to query the database for similar features and associated class labels, which are then combined with CLIP-based scores. This approach leads to improvements in Panoptic Quality in both training-free and cross-dataset settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of the paper is limited, primarily building upon the feature retrieval idea from Gui et al.[1]. Compared to Gui et al. [1]., the main modifications only include using a single CLIP backbone instead of two backbone. Please explain how these contributions can meet the strict requirements of top-level conferences.\n\n2. 
The authors use open vocabulary object detection combined with SAM to build the feature database, which limits the model's performance to the capabilities of the object detection component. Please explain how to handle classes that are not included in both the feature database and the fallback dataset during inference, or discuss the limitations of their approach for truly open-vocabulary scenarios.\n\n3. The definitions of IV Classification and OOV Classification are confusing. Why is it considered that the segment features and text embedding after linear projection in Figure 1 are equivalent to IV Classification? Please provide a more detailed explanation of the distinction between these two classifications and why the linear projection is significant for IV Classification.\n\n4. The experimental section lacks a critical component: comparisons with state-of-the-art methods, such as Gui et al. [1]., HIPIE [2], ODISE [3], OPSNet [4]. Please explain why these specific comparisons are not included and how your method compares theoretically to these state-of-the-art approaches.\n\n5. How does this method perform on open vocabulary semantic segmentation tasks, such as testing on ADE20K-847, ADE20K-150, Pascal Context-459.\n\n6. The paper claims to achieve performance improvement by utilizing a completely different dataset with only image level annotations. However, using the ADE20K training set to construct a feature database and evaluating it on the ADE20K validation set in the experiment lacks persuasiveness for open vocabulary. Please clarify how to ensure the open vocabulary nature when using the same dataset for both feature database construction and evaluation.\n\n7. There is irrelevant content in the lower-left corner of Figure 2. Please redraw the figure and ensure that the image is complete and free from irrelevant content\n\n\nreference:\n\n[1] Zhongrui Gui, Shuyang Sun, Runjia Li, Jianhao Yuan, Zhaochong An, Karsten Roth, Ameya Prabhu, and Philip Torr. 
knn-clip: Retrieval enables training-free segmentation on continually expanding large vocabularies, 2024. URL https://arxiv.org/abs/2404.09447.\n\n[2] Wang X, Li S, Kallidromitis K, et al. Hierarchical open-vocabulary universal image segmentation[J]. Advances in Neural Information Processing Systems, 2024, 36.\n\n[3] Xu J, Liu S, Vahdat A, et al. Open-vocabulary panoptic segmentation with text-to-image diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 2955-2966.\n\n[4] Chen X, Li S, Lim S N, et al. Open-vocabulary panoptic segmentation with embedding modulation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 1141-1150." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024open,\ntitle={Open Vocabulary Panoptic Segmentation With Retrieval Augmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0jUeqlQxMi},\nnote={under review}\n}" }, "abstract": { "value": "Given an input image and set of class names, panoptic segmentation aims to label each pixel in an image with class labels and instance labels. In comparison, Open Vocabulary Panoptic Segmentation aims to facilitate the segmentation of arbitrary classes according to user input. The challenge is that a panoptic segmentation system trained on a particular dataset typically does not generalize well to unseen classes beyond the training data. In this work, we propose a retrieval-augmented panoptic segmentation method that improves the performance of unseen classes. In particular, we construct a masked segment feature database using paired image-text data. At inference time, we use masked segment features from the input image as query keys to retrieve similar features and associated class labels from the database. 
Classification scores for the masked segment are assigned based on the similarity between query features and retrieved features. The retrieval-based classification scores are combined with CLIP-based scores to produce the final output. We incorporate our solution with a previous SOTA method (FC-CLIP). When trained on COCO, the proposed method demonstrates 30.9 PQ, 19.3 mAP, 44.0 mIoU on the ADE20k dataset, achieving +4.5 PQ, +2.5 mAP, +10.0 mIoU absolute improvement over the baseline." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Panoptic Segmentation", "Open Vocabulary", "Retrieval Augmentation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5778cf98bbda7a07ddd88560ada1fbb2c6994d19.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Open Vocabulary Panoptic Segmentation With Retrieval Augmentation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0je4SA7Jjg
Spatiotemporal Learning on Cell-embedded Graphs
main
Active
Spatiotemporal Dynamics;Graph Learning;Physics-embeded Learning
learning on time series and dynamical systems
5;5;5;8
3;3;5;4
3;3;2;4
3;2;2;4
3;3;3;3
5.75
3.75
3
2.75
3
0.174078
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Ablation study is pretty great to justify the proposed framework." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In general, this paper tackles interesting and meaningful problems governed by PDE. It is well written and the results are sound." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The methodology of feature-enhanced is not sufficient, the authors should write down in the appendix more equations with more explanation. Most importantly, why do authors propose such Algorithm 1, is there any physical or mathmatical meaning/inspiration? or any hypothesis? It would be better to show the train of thoughts of how did author propose this FE instead of just showing its working better." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness above" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is easy to follow.\n2. We can see detailed description of the technical details\n3. This paper touches the core problem in this area." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors proposed a cell-embedded GNN model (aka,CeGNN) to learn spatio-temporal dynamics. They claim that their learnable cell attribution to the node-edge message passing process better captures the spatial dependency of regional features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The most puzzling aspect of this paper is that the discussion of related work and the selection of baselines are all based on studies from before 2022. In fact, there have been many breakthrough studies on both graph-based methods and other neural operator approaches in recent years [1].\n\n2. This work appears more like a straightforward extension of MP-PDE, both in terms of methodology and experiments. 
The paper proposes a cell-based method for extracting spatial relationships, but how much improvement could be observed if this feature were integrated into MP-PDE?\n\n3. The main experimental results are somewhat confusing. Since the code is not available, it is unclear whether the training data was generated from the authors' own simulations or from public datasets, and what the training dataset size is. If the data is self-generated, the comparison with a few simple baselines is not convincing. Furthermore, the authors mention long-term simulations, yet all experiments are based on one-step predictions, which is clearly insufficient.\n\n4. Regarding the core innovation of this paper, the cell feature is merely a learnable feature initialized by the distance to the cell center. Can its significance be verified by theoretical analysis or by measuring the distance between cell features and node features? The benefit here might simply be from adding position awareness, which makes the model fit specific data better. It could even be considered to replace the distance to the cell center with the distance to the nearest PDE boundary for each point, which might also yield improvements.\n\n[1] Wang, Haixin, et al. \"Recent Advances on Machine Learning for Computational Fluid Dynamics: A Survey.\" arXiv preprint arXiv:2408.12171 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the weakness in novelty that I have raised!" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Good empirical results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a new model, the Cell-Embedded Graph Neural Network (CeGNN), for simulating spatiotemporal dynamics across different physical domains. CeGNN introduces learnable cell attributions to the traditional node-edge message-passing process, upgrading it to a higher-order scheme that captures volumetric information and improves spatial dependency learning. Additionally, the Feature-Enhanced (FE) block enriches feature representations, tackling the over-smoothness issue common in Graph Neural Networks (GNNs). Extensive experiments demonstrate that CeGNN achieves superior performance and generalization in predicting physical dynamics, particularly for Partial Differential Equations (PDEs) and real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness is lack of novelty. The main idea of this paper is the proposal of cell-attribution. In the field of topological learning, there have been several prior works proposing the idea of higher-order message passing, cell / simplicial complex neural networks. 
Please check the following literature:\n\n(1) Topological Deep Learning: Going Beyond Graph Data (this is a great survey of Topological Deep Learning)\nhttps://arxiv.org/abs/2206.00606\n\n(2) Cell Complex Neural Networks\nhttps://arxiv.org/abs/2010.00743\n\n(3) Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks (for simplicial complexes)\nhttps://arxiv.org/abs/2103.03212\n\nHowever, the application of these topological methods to the domain of learning physical systems is new." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the weaknesses mentioned above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-organized and easy to follow.\n\n- The authors present abundant experimental results and visualizations to validate their ideas.\n\n- CeGNN achieves superior performance compared to the baseline methods."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an end-to-end graph-based framework called CeGNN to address limitations of existing Graph Neural Networks in learning complex spatiotemporal dynamics, particularly the over-smoothing issue, and aims to enhance prediction accuracy and generalization ability. The authors introduce two key components: Cell-embedded MPNN block and Feature-Enhanced (FE) block. Through experiments on several PDE systems, the paper demonstrates that CeGNN outperforms other baseline models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The baseline methods are relatively weak. The authors did not include recent advancements in the field from 2023-2024, raising concerns about the effectiveness of the proposed method.\n\n- Modeling with higher-order graphs is a widely studied topic. Can the authors more explicitly summarize the contributions of CellMPNN compared to existing approaches?\n\n- The paper lacks a theoretical discussion on the effectiveness of CeGNN. Can the authors discuss the source of CeGNN's effectiveness from a theoretical perspective?\n\n- FE modules are not clearly defined." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Proposed a cell-embedded GNN model (aka, CeGNN) to learn spatiotemporal dynamics with lifted performance." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024spatiotemporal,\ntitle={Spatiotemporal Learning on Cell-embedded Graphs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0je4SA7Jjg},\nnote={under review}\n}" }, "abstract": { "value": "Data-driven simulation of physical systems has recently kindled significant attention, where many neural models have been developed. 
In particular, mesh-based graph neural networks (GNNs) have demonstrated significant potential in predicting spatiotemporal dynamics across arbitrary geometric domains. However, the existing node-edge message passing mechanism in GNNs limits the model's representation learning ability. In this paper, we proposed a cell-embedded GNN model (aka, CeGNN) to learn spatiotemporal dynamics with lifted performance. Specifically, we introduce a learnable cell attribution to the node-edge message passing process, which better captures the spatial dependency of regional features. Such a strategy essentially upgrades the local aggregation scheme from first order (e.g., from edge to node) to a higher order (e.g., from volume to edge and then to node), which takes advantage of volumetric information in message passing. Meanwhile, a novel feature-enhanced block is designed to further improve the performance of CeGNN and alleviate the over-smoothness problem, via treating the latent features as basis functions. The extensive experiments on various PDE systems and one real-world dataset demonstrate that CeGNN achieves superior performance compared with other baseline models, significantly reducing the prediction errors on several PDE systems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Spatiotemporal Dynamics", "Graph Learning", "Physics-embeded Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/35cad410f1728c39de6c69ca833c7fd80ef73eb6.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Spatiotemporal Learning on Cell-embedded Graphs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0jmFRA64Vw
FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models
main
Active
Federated Learning;Compression;Sparsity;Quantization;Communication Efficiency;Local Training
optimization
3;3;3
4;4;4
2;2;1
1;2;1
2;2;2
3
4
1.666667
1.333333
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why was Scaffnew chosen specifically for this study? What unique characteristics of Scaffnew make it suitable for the proposed compression techniques compared to other algorithms?\n1. Do the authors expect the empirical observations to generalize to other algorithms using the same compression schemes? If so, could the paper include a discussion on the expected performance of these compression schemes when applied to other popular algorithms?\n1. If the goal is to develop a communication-efficient algorithm that outperforms the state-of-the-art (SOTA), what are the current SOTA methods in communication efficiency? The proposed method integrates compression schemes with one particular algorithm. Are there other approaches that might achieve comparable communication efficiency in federated learning? A review and comparison with SOTA methods could strengthen the context of this work.\n1. Are the sub-captions in Figure 1 accurate, or could there be some typos that need correction? Please confirm and revise if necessary to ensure clarity." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The paper reports and discusses the empirical performances of FedComLoc in different configurations and hyperparameter settings, including heterogeneity, quantization bits, and TopK sparsities.\n- The paper presents some interesting observations based on the numerical results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an empirical study on a new algorithm, FedComLoc, which extends Scaffnew by integrating compression techniques: TopK and quantization. Three settings of the proposed algorithm are evaluated: (i) compressing the communication from client to the server; (ii) compressing the local model itself; and (iii) compressing the communication from the server to each client. The paper reports the empirical performances for these configurations by using FedMNIST and FedCIFAR10 with varying degrees of heterogeneity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The scope of the paper is narrow, focusing solely on one algorithm, Scaffnew, with a basic integration of existing compression schemes. The paper lacks justification for why Scaffnew was chosen over other potential algorithms.\n- The paper provides an insufficient review of existing communication-efficient FL approaches (e.g., FedEF [1]). These existing approaches were also missing in the numerical experiment.\n- The experimental setup is not extensive, relying only on image datasets and simple models: MLPs and CNNs. Given the paper's empirical focus without theoretical analysis or in-depth discussion, it is challenging to generalize the findings. 
Moreover, the numerical comparison is limited and lacks breadth.\n- The paper offers no new insights or findings beyond empirical results and observations. The main conclusion appears to be: \"we can apply compression schemes to Scaffnew.\"\n\n**Reference**\n1. Li and Li. Analysis of error feedback in federated non-convex optimization with biased compression: Fast convergence and partial participation. ICML, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses section for details." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is clearly well-written, concise, and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores communication reduction techniques for a federated optimization method called Scaffnew. The proposed approach demonstrates that Scaffnew can be combined with various communication reduction techniques on both the local and global sides. Empirical results illustrate the effectiveness of FedComLoc in reducing communication overhead while maintaining comparable performance." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My major concern is the novelty and technical contribution of this paper. Model compression techniques, such as top-k and quantization, are already widely used and well-established. Integrating these compression methods with an FL algorithm appears to be an incremental contribution. While this approach does address the communication cost challenges in Scaffnew, it is not immediately clear to me how applying model compression introduces new challenges or is non-trivial. Therefore, the technical contribution seems relatively weak to me.\n\n2. It appears that the proposed algorithm and experiments are conducted under a partial participation setting in FL. This could lead to potential “asynchronous” issues in FedComLoc-Global: since there is no model initialization step in FedComLoc, a client that has not participated in the previous $t-1$ steps would begin local training in the $t$-th step with an outdated model. This lack of updated model initialization may result in poorer convergence, particularly when only a small fraction of clients participate in the training.\n\n3. There are several issues with the presentation of the experimental results:\n\na. The caption of the subfigures in Figure 1 mentions sparsity, but the curves also represent sparsity.\n\nb. The results in the table in Figure 6 conflict with those in the subfigure within the same figure.\n\nc. The caption of Figure 8 states that there is a comparison with FedDyn, but the subfigures do not include this baseline.\n\nd. What is the purpose of K=100% (no sparsity) in Figure 8? It seems this is intended to compare Scaffnew with FedAvg and Scaffold." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In line 14 and in line 24 when you write 'heterogeneous clients' and 'heterogeneous settings', respectively, it is unclear if you mean to heterogeneity in data or local hardware or both. \n2. In lines 28-29: \"Privacy concerns and limited computing resources on edge devices often make centralized training impractical, where all data is gathered in a data center for processing.\" Maybe you mean \"Privacy concerns and limited computing resources on a data center often make...\"?\n3. In lines 38-39: \"Our primary objective is to solve the problem (ERM) and deploy the optimized global model to all clients.\". Actually, in FL, it is often the server who wishes to obtain the optimal model rather than the local users. The users are contributing their local private datasets to the process of learning, \"serving\" the centralizing server.\n4. In line 40 there is a typo, it should be 'is' instead of 'are'.\n5. In lines 54-55: \"Quantization is another efficient model compression technique..., though its application in heterogeneous settings is limited\". Quite a harsh statement. Do you have any references supporting it?\n6. In lines 71-74 you claim that FedComLoc is specially designed for heterogenous environments. 
My question is why you claim this, as your adopted compression methods, encompassing TopK and quantization, are generic tools in compression; and furthermore, your numerical evaluations involve the non-iid local datasets scenario, as in standard FL experimental studies.\n7. In lines 75-78 you mentioned that the integration of compressed communication into Scaffnew is studied in each of the client-to-server, server-to-client, and client-storage possibilities. Actually, as also covered by the majority of compressed FL works, it is the client-to-server communication bottleneck that is the most crucial one to be relaxed. \n8. Algorithm 1 is almost unreadable without being closely familiar with Scaffnew. For the paper to be a stand-alone one, you should explain in the accompanying text the non-intuitive usages therein; e.g., control variates, the role of the probability, etc.\n9. In theoretical studies of compressed FL, it is typically revealed that the integration of compression slows down the convergence rate obtained without it. Can you explain how in the rightmost column of Fig. 3 the reverse is evidenced?\n10. In lines 365-366: \"Observe the accelerated convergence of sparsified models in terms of communicated bits...\". It is not clear how this is being calculated. That is, to translate K into bits one can do, e.g., if K=10% set R=0.1*b, where b is the number of bits used in full precision, mostly 32 or 64. What does the x-axis of 'Communication Bits' measure in your case? \n11. It would be interesting to compare the performance of quantization and sparsification for the same bit rate...\n12. In lines 370-371: \"This indicates that sparsity training requires more data and benefits from either increased communication rounds...\". Benefits? According to my understanding, more communication rounds imply slower overall convergence (taking longer time), which is not wanted. Can you explain that?
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Clarity:\n- The paper is overall well written and organized.\n- The experiments' accompanied details and explanations are overall well presented. \n- The ablation study is quite comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work the authors suggest to incorporate compression, via sparsification (TopK) and (random) quantization, into the established Scaffnew algorithm of Mishchenko et al. (2022); where the latter considerably advanced the reduction of communication complexity in federated learning. The integration of compressed communication into Scaffnew is studied in either of client-to-server, server-to-client, and client-storage possibilities; and experimentally verified using MNIST and CIFAR10 datasets in the non-iid local datasets scenario." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Significance & Quality:\n- Compression in FL has been extensively studied. It is of lessen motivation to incorporate it into multiple SGD local iterations, what already by itself relaxes the communication overhead compared to single SGD local iteration, as communicating the local gradients to the server occurs less often. \n- As stated by the authors in lines 177-179, this work's integration of compression in Scaffnew is merely heuristic and solely provides numerical evaluations for CompressedScaffnew (Condat et al., 2022); where the latter studies the theoretical aspects of general lossy compression integrated into Scaffnew in convex settings. 
\n- The majority of FL papers with provable convergence guarantees employ convex settings in their theoretical analyses and non-convex ones in their experimental studies, with deep neural networks fitting into the latter regime. \n- The CompressedScaffnew paper presents the idea of compressed Scaffnew, provides an analytical study under the convex setting, and presents simulations that, unusually compared to related works, cover only a simplistic logistic regression model rather than diverse deep learning architectures. The significance of this paper is thus equivalent to the importance of a full-length comprehensive experiments section of the work CompressedScaffnew; i.e., one that includes simulations on general neural networks beyond the simplified logistic regression model. As a result, the merit of this paper for a conference such as ICLR is minor.\n\nOriginality:\n- The idea studied in this paper is not new and was already presented, and also analytically analyzed, in CompressedScaffnew (Condat et al., 2022). The numerical simulations of this idea are claimed to be the novelty of the presented work, and by themselves are insufficient.\n- The authors further claim in lines 169-171 that the studies of CompressedScaffnew (Condat et al., 2022) are not practical as they require\nshared randomness. Yet, a bulk of works studying compressed FL utilize pseudo-random methods upon which sharing a common seed can overcome the necessity of shared randomness." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "FedComLoc is a novel FL algorithm that significantly reduces communication costs by incorporating compression techniques into efficient local training, validated by thorough experiments." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024fedcomloc,\ntitle={FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0jmFRA64Vw},\nnote={under review}\n}" }, "abstract": { "value": "Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server, while being respectful of privacy. A critical bottleneck in FL is the communication cost. A pivotal strategy to mitigate this burden is Local Training, which involves running multiple local stochastic gradient descent iterations between communication phases. Our work is inspired by the innovative Scaffnew algorithm, which has considerably advanced the reduction of communication complexity in FL. We introduce FedComLoc (Federated Compressed and Local Training), integrating practical and effective compression into Scaffnew to further enhance communication efficiency. Extensive experiments, using the popular Top-K compressor and quantization, demonstrate its prowess in substantially reducing communication overheads in heterogeneous settings." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Federated Learning", "Compression", "Sparsity", "Quantization", "Communication Efficiency", "Local Training" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/20d4440de6f806bdad76c181e97d2aeb33e4a9a2.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/0c61515fed64185292d08c2b6ecd314350693304.zip" }, "title": { "value": "FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0k7pbSxNOG
Fast Crystal Tensor Property Prediction: A General O(3)-Equivariant Framework Based on Polar Decomposition
main
Active
$O(3)$ group tensor equivariance;polar decomposition;tensor properties
applications to physical sciences (physics, chemistry, biology, etc.)
3;5;6;6
4;5;3;3
2;3;3;3
2;2;2;3
2;3;3;2
5
3.75
2.75
2.25
2.5
-0.492366
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What are the limitations of using polar decomposition for this application? Are there edge cases where it might not work well?\n\n2. How does the method perform on different types of crystal structures beyond those tested?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Novel use of polar decomposition for handling tensor equivariance\n2. Strong theoretical foundation with clear mathematical proofs\n3. Clear illustrations and explanations of complex concepts" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents GoeCTP, a novel O(3)-equivariant framework for predicting tensor properties of crystalline materials. The key innovation is using polar decomposition to handle tensor equivariance through an external rotation and reflection (R&R) module, rather than building it into the network architecture." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited discussion of potential limitations or failure cases\n2. Only two datasets are used.1." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Is it the case that all tensor properties need to satisfy Equation 2, or only certain tensor properties? Why?Please provide specific examples of tensor properties, indicating which properties need to satisfy Equation 2 and which do not, along with an explanation of the underlying reasons for this distinction. This would help deepen the understanding of the method's applicability and limitations." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. GoeCTP is plug-and-play as it can be readily integrated with any existing single-value property prediction network for predicting tensor properties. \n2. GoeCTP does not introduce excessive computational overhead." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose an O(3)-equivariant framework, GoeCTP, for crystal tensor prediction. GoeCTP utilizes polar decomposition to rotate and reflect the crystal into a standardized invariant position in space. The orthogonal matrix obtained from the polar decomposition is used to achieve equivariant tensor property predictions. 
The GoeCTP method achieves higher quality prediction results and runs more than 13× faster on the elastic benchmarking dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The article has limited contributions in terms of methodological innovation, as the methods and main structure used by the authors are derived from DiffCSP++[1] and Comformer[2]. Specifically, the polar decomposition method used may have been inspired by DiffCSP++, while the code implementation adopts the structure of Comformer. \n2. The article does not clearly explain why Equation 2 needs to be satisfied. It is suggested that the authors provide more background or explanation regarding the physical or mathematical significance of Equation 2 in relation to tensor property prediction. This would help readers better understand the importance of this equation within the proposed framework. \n3. There are some citation issues in lines 339-340 of the article.\n\n\nReferences: \n[1] Rui Jiao, Wenbing Huang, Yu Liu, Deli Zhao, and Yang Liu. Space group constrained crystal generation. In The Twelfth International Conference on Learning Representations, 2024. \n[2] Keqiang Yan, Cong Fu, Xiaofeng Qian, Xiaoning Qian, and Shuiwang Ji. Complete and efficient graph transformers for crystal material property prediction. In The Twelfth International Conference on Learning Representations, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As listed above in Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. An alternative approach is proposed to achieve tensor O(3) equivariance. If used, it is faster than equivariant network based O(3) equivariant predictions like GMTNet." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel approach intended to achieve O(3) tensorial equivariance by transforming crystal structures into standardized positions, so that neural networks do not need to satisfy equivariance during prediction. The invariant predictions are then mapped back to equivariant outputs. While the goal of O(3) equivariant predictions is notable, prior work has already addressed this problem with established solutions, including ETGNN's vector outer product and GMTNet's equivariant networks. Additionally, existing techniques, such as frame averaging and minimal frame averaging, can be employed to achieve O(3) tensor equivariance effectively. Moreover, this work does not consider space group constraints, which are crucial for tensorial properties in crystallography. As such, the novelty and contribution of this work are limited and do not meet the standards for acceptance at current form." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of Consideration for Space Group Constraints**\n\nSpace group constraints, which are fundamental in determining the tensor properties of crystals, are not accounted for in this work. 
No experimental results are provided to verify whether the proposed method can generate predictions that align with these constraints. Crystals exhibit unique tensor characteristics that are intrinsically tied to their crystal class or space group, and ignoring these symmetries is a significant oversight.\n\n2. **Limited Improvement in Performance**\n\nThe integration of the proposed module does not enhance eComformer’s performance beyond achieving O(3) equivariance, as shown in Table 4. Other alternatives, such as ETGNN, frame averaging, and minimal frame averaging, also achieve O(3) equivariance but were not discussed or compared in this work. A more thorough discussion and comparative analysis of technical contributions and novelty would be beneficial.\n\n3. **Absence of Experiments on Piezoelectric Tensors**\n\nThe work lacks experiments on piezoelectric tensors, which are especially sensitive to space group constraints. Including these experiments would strengthen the evaluation of the proposed approach’s applicability across tensorial properties with varying sensitivity to symmetry constraints.\n\n4. **Performance on Elastic Tensors**\n\nThe performance of the proposed method on elastic tensors is significantly lower than GMTNet's original results. This suggests potential limitations in the approach’s effectiveness.\n\n5. **Efficiency Gains Not Attributable to Proposed Method**\n\nThe efficiency gains claimed largely derive from the use of the lightweight eComformer, not the proposed approach. Similar speed-ups could be achieved by combining eComformer with other O(3) equivariant methods such as ETGNN’s vector outer-product approach or minimal frame averaging." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Were there qualitative case studies where the proposed method's predictions were compared to known real-world material properties, such as elastic or dielectric responses of specific materials?\n2. Were there plans to test the proposed method on other datasets, especially those involving more complex or extreme tensor property cases (e.g., materials with highly anisotropic properties or rare crystal structures)? \n3. Why was the dataset \"Piezo\" that was tested in a prior work [Yan et al.] not tested in this paper?\n4. What explains the discrepancy in the number of samples in the \"Elastic\" dataset between the prior work [Yan et al.] and this paper? The prior work [Yan et al.] reports 14,220 samples in Table 1, while this paper reports 25,110 samples in Table 1 on line 364.\n5. Could the specific requirements of O(3) equivariance for crystalline materials limit the use of polar decomposition? Additionally, how did these differences impact the potential generalisation of the proposed method to molecular systems, where O(3) equivariance is defined differently?\n\n[Yan et al.] [A Space Group Symmetry Informed Network for O(3) Equivariant Crystal Tensor Prediction, In ICML 2024](https://openreview.net/forum?id=BOFjRnJ9mX)." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The pre-processing step of standardizing crystal positions through polar decomposition is significant for multiple reasons. It ensures equivariance, simplifies model architecture, and preserves tensor properties across different orientations through an additional step.\n2. The method achieves 13x speed improvement over prior methods in predicting tensor properties.\n3. The paper effectively explains O(3)-equivariance in crystal tensor prediction through clear organisation, helpful diagrams, and accessible mathematical explanations, making sophisticated concepts understandable even to non-specialists." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a method for predicting how crystals react to forces applied from different directions, a challenge that requires maintaining consistency regardless of the crystal's orientation in space. \n\nThe proposed method uses a sound mathematical technique (polar decomposition) to standardize crystal positions, enabling faster and more accurate predictions of these directional properties (known technically as tensor properties) while respecting the physics principle of orientation independence (technically called O(3) group equivariance). \n\nThe proposal is significantly faster and more accurate than existing approaches, especially for predicting how materials deform under stress or respond to electric fields." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper focuses primarily on quantitative metrics (e.g. Frobenius norm and EwT percentages) to demonstrate the method's effectiveness. However, there is a lack of qualitative insights into how the method affects real-world predictions. 
Including case studies or qualitative analyses where the method's predictions are compared to known physical properties of materials would strengthen the practical significance.\n2. Evaluation is carried out on two specific datasets for dielectric and elastic tensor prediction. While the results are promising, these datasets may not cover the full range of tensor property prediction challenges. Testing on a broader variety of materials, including more extreme cases, would strengthen the claim of generalisability.\n3. As the authors discuss between lines 137 and 142, \" the requirements for O(3) equivariance typically differ from the O(3)-equivariance\ndefined in the general molecular studies.\" Because of these specific requirements, the scope and the applicability of the proposed polar decomposition are limited." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024fast,\ntitle={Fast Crystal Tensor Property Prediction: A General O(3)-Equivariant Framework Based on Polar Decomposition},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0k7pbSxNOG},\nnote={under review}\n}" }, "abstract": { "value": "Predicting tensor properties of the crystalline materials is a fundamental task in materials science. Unlike single-value property prediction, which is inherently invariant, tensor property prediction requires maintaining $O(3)$ group tensor equivariance. This equivariance constraint often introduces tremendous computational costs, necessitating specialized designs for effective and efficient predictions. \nTo address this limitation, we propose a general $O(3)$-equivariant framework for fast crystal tensor prediction, called {\\em GoeCTP}. \nOur framework is efficient as it does not need to impose equivariance constraints onto the network architecture. 
Instead, {\\em GoeCTP} captures the tensor equivariance with a simple external rotation and reflection (R\\&R) module based on the polar decomposition. The crafted external R\\&R module can rotate and reflect the crystal into an invariant standardized crystal position in space without introducing extra computational cost. We show that {\\em GoeCTP} is general as it is a plug-and-play module that can be smoothly integrated with any existing single-value property prediction network for predicting tensor properties. Experimental results indicate that the {\\em GoeCTP} method achieves higher prediction performance and runs 13$\\times$ faster compared to existing state-of-the-art models in elastic benchmarking datasets, underscoring its effectiveness and efficiency." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "$O(3)$ group tensor equivariance", "polar decomposition", "tensor properties" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/798e53d8f111406e46e7b0c5fe83ea7ab6e1d09d.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Fast Crystal Tensor Property Prediction: A General O(3)-Equivariant Framework Based on Polar Decomposition" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0koPj0cJV6
A Watermark for Black-Box Language Models
main
Active
watermarking;large language models;black-box
foundation or frontier models, including LLMs
3;5;5;6;6
4;3;3;3;4
2;2;2;4;3
1;3;2;2;3
1;2;2;3;3
5
3.4
2.6
2.2
2.2
-0.372678
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "All questions I list here are repeated from the \"Weaknesses\" section above:\n- Can the authors elaborate on the decision to introduce the AUC until fixed FPR metric?\n- Do the authors believe a false positive rate of 1% is a practical setting for real-world deployment?\n- Can the authors extend their paraphrasing robustness evaluation to include longer texts and demonstrate that their watermark is as robust as best variants from prior work?\n- Can the authors comment on the discrepancy between the blackbox-focused framing of the earlier sections of the paper, and the key results demonstrated and discussed in Sec. 5 being in the whitebox case?\n- Can the authors comment on the statement that $k$ below text length $L$ is not as practical in the blackbox case, and include some experiments in the $k=L$ case?\n- Can the authors compare their method to cited blackbox baselines or explain why this is not feasible?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- While it is based on a generalization of ideas from existing schemes, the exact scheme proposed is to the best of my knowledge novel. 
The authors do a good job of exploring different variants of the scheme (e.g., CDF) in a principled way. \n- The theoretical results are sound. I especially appreciate that Theorem 4.2 is carefully placed into context and analyzed for various input values to demonstrate its implications. \n- Experiments are very thorough, involve important aspects such as quality evaluation with LLM judges and paraphrasing attacks, and explore various scenarios and scheme ablations, making interesting observations.\n- Whitebox results seem convincing (up to some reservations below), making the case for significance.\n- While I have some issues with the method section (see below), the theory and experiments parts of the paper are very well written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an LLM watermarking scheme that is applicable in black-box scenarios, i.e., when the party watermarking the text does not have access to the sampling procedure, but also in standard white-box cases. The authors prove the distortion-free property and the lower bound on AUC. Extensive experiments evaluate, among other things, watermark TPR/FPR, text quality, and robustness under token replacement and paraphrasing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As a meta point, the authors are using the 2024 style file and should update it to the latest version to avoid desk rejection. I understand that this is an honest mistake, but in particular the lack of the usual line numbers makes it hard to refer to particular parts of the writeup.\n\nThe weaknesses of the paper are in my view:\n\n(1) Limitations of the evaluation setup\n- The authors recognize that AUC is not the most practically relevant metric yet resolve this by proposing a new metric (AUC below fixed FPR), instead of using the more standard TPR @ fixed low FPR. 
As this is instantiated with a still high FPR of 1%, the metric is still dominated by results at impractical FPRs. Can the authors elaborate on the decision to introduce this metric? Do the authors believe a false positive rate of 1% is a practical setting for real-world deployment?\n- Prior work (Kirchenbauer 2023b, among others) has already shown that short texts such as those studied here (~300 tokens) are not robust to paraphrasing, while passive adversaries (those that do not learn the watermark beforehand) become much less able to remove the best variants of KGW above ~600 tokens. Can the authors extend their evaluation to include this setting and demonstrate that their watermark is equally or more robust?\n\n(2) Despite being the title and the central framing of the paper, the practicality of the blackbox watermark is underdiscussed and not well substantiated. Perhaps framing the paper around the whitebox variant would have been more convincing. Namely: \n- As the authors say, it can be hard to control token lengths of chat API responses. Further, and more importantly, it is not always possible to prefill the first $k$ tokens of the assistant response. This implies that the variant where $k$ is equal to text length is the most practical for blackbox models, yet is not evaluated, and there is no detailed discussion of this. As already for $k=50$ we can at most get 70 pAUC, it is likely that the practical variant would either not obtain good results, or need very high $m$. \n- The limitation of the blackbox setting that could be more explicitly mentioned/analyzed is that $len/k * m$ queries are needed to produce 1 text. For the practical setting above with high $m$ this can be prohibitively expensive. \n- The baselines (PostMark and Yang et al.) are not evaluated, yet they study the exact same blackbox setup. Can the authors explain this decision? 
Baselines being costly does not seem like a sound rationale, as they could still be evaluated along with their cost, which can then be compared to the cost of the proposed watermark. \n\n(3) Minor writing issues around the method description. In particular, Sec. 3 is quite dense and not very friendly to readers aiming to understand the high-level idea behind the watermark. For example, $u_t$ is simply introduced but its components could be explained more intuitively, perhaps even through an example or supporting figures, which are notably missing. Detail: $g(w)$ is introduced but not used later. \n\nMinor writing suggestions that are not treated as weaknesses:\n- For consistency with prior work, it would be good to use the more standard scheme names such as KGW self-hash and ITS/EXP instead of introducing new aliases KB and K.\n- It would be beneficial to label $m$ and $\\delta$ in Table 1 as it is not immediately clear what they represent. \n- In the \"hyperparameters\" section of the evaluation, it should be explicit that $F_k$, if I am not mistaken, is not chosen, but simply follows from the choice of $F$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The \"Related Work\" section appears to suggest that Aaronson and Kirchenbauer et al. were the first to embed information in LLM outputs.\nHowever, the paper \"Neural Linguistic Steganography\" did this as early as 2019." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper seems to do a good job of optimizing both their scheme, and the schemes they compare against.\nIn particular, it is interesting that making the watermark detector of Aaronson length-aware improves performance as much as it does." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "A method of generating watermarked text using query access to a language model is described.\nThe method works by auto-regressively sampling short sequences of tokens, selecting the sequence with the highest watermark score.\nThe watermark score is similar to Aaronson's." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The ideas and method are straightforward adaptations of existing work.\nThe technique is essentially identical to Aaronson's, except that they use rejection sampling instead of the Gumbel-max trick.\nThe scheme is also only distortion-free under certain assumptions about the text, which essentially translate to it having consistently high entropy." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could you provide an example of watermarked text?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The method is effective in a black-box setting. It only requires to sample sequences from LLMs.\n\nThe paper provides formal guarantees for detection performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for watermarking language models in a black-box setting. It only requires sampling output sequences from language models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper’s motivation could be articulated more clearly. The main motivation stems from the security risks associated with providing API access that exposes logits to third-party users for applying their own watermark. However, simpler methods could enhance security; for instance, instead of exposing logits, LLMs could offer APIs to gather specific information users want to integrate. Furthermore, the paper presents a zero-bit watermarking technique, which only detects whether a text is watermarked but cannot infer additional information from the watermark.\n\nThe paper could also benefit from a more comprehensive evaluation. For example, comparing the time complexity of the proposed method with baselines and providing examples of watermarked text would strengthen the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Questions are in weaknesses above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper shows a solid theoretical analysis of the proposed scheme, as well as the distortion-free property that was claimed, establishing that the watermarked text is statistically indistinguishable from the original model's output. They also provide a lower bound on detection performance, connecting it to the entropy of the language model's output and the number of samples used.\n- The experimental results presented in the paper support the theoretical claims and demonstrate the effectiveness of the proposed scheme. The authors conduct experiments on two popular LLM models, MISTRAL-7B-INSTRUCT and GEMMA-7B-INSTRUCT, and show that their scheme is competitive with or even superior to existing white-box watermarking schemes in terms of detection performance, text quality, and perplexity.\n- The paper explores the robustness of the scheme to adversarial attacks - the impact of random token replacement and paraphrasing attacks. While paraphrasing proves to be a significant challenge, the scheme shows resilience to random token replacement. This analysis of robustness provides a realistic assessment of the scheme's strengths and limitations in practical settings. \n- The proposed framework is versatile and allows for various extensions and adaptations. For instance, it can be applied recursively, allowing multiple users to watermark the same model without interfering with each other. The scheme can also be adapted for white-box settings when next-token probabilities are available." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a black-box watermarking scheme for LLMs is proposed. The idea is to enable watermarking with only sampling access i.e., without requiring white-box access to a model’s next-token probability distribution. The scheme allows third-party users with API-only access to embed watermarks without altering the distribution of generated text, achieving a distortion-free watermark (generated content is indistinguishable from the original output). It supports multiple secret keys, making it possible for different users to watermark the same model recursively without interference. The authors also provide theoretical guarantees on detection performance, false-positive rates, and robustness against adversarial attacks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Practicality: What do the authors mean when they claim their method enables end users with only API access to embed watermarks? I am unclear about the motivation behind this approach. Is it practical for users to watermark a model that they do not own? What is the reasoning here, particularly if watermarking serves as a security measure to prevent model misuse? Wouldn't this imply that the method could also allow potential attackers access to the watermark?\n- Experiments and General Format of the Paper: The paper lacks clarity and structure, making it difficult to fully grasp the motivation behind the proposed approach. While there may be a valuable contribution here, the current format obscures its impact. Figures and tables are largely separated from the sections where they are referenced; it would improve readability to place these closer to the relevant results. The theoretical guarantees could be moved to the end or even to an appendix, allowing more space for additional results in the main body. 
The motivation behind the approach needs clearer explanation—if the goal is to \"give power back to the people,\" it should clarify why this is relevant, considering that users are not model owners, and watermarking aims primarily to prevent misuse. A well-articulated motivation would strengthen this section. Section 5.3 isn't necessary and could be integrated into the experimental results or discussion rather than standing as a separate section (optional).\n- Results: The results presented are somewhat unconvincing. My primary baseline for comparison is KB, the initial paper to propose watermarking for LLMs. Although this approach targets black-box settings while aiming to remove distortions, it does not outperform KB, which was introduced nearly two years ago. Could the authors provide further insight into this? This issue may partly relate to the paper's structure, but I believe the authors need to highlight their main advantage more convincingly. For instance, it would be helpful to illustrate the tradeoff between distortion and text quality by comparing texts generated by KB and the proposed method, possibly using LLM-Judge. Additionally, if feasible, demonstrating the tradeoff between distortion and robustness would add value to the analysis.\n- Finally, regarding the distortion-free claim, while the theoretical guarantees support this assertion, it would be beneficial to include qualitative results that demonstrate the distortion-free nature of the approach. Consider displaying examples of the unwatermarked text, the text watermarked by the proposed approach (using optimal hyperparameters), and the text watermarked by KB (also with optimal hyperparameters) for a clear, comparative illustration." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "My main question is about the novelty of the paper’s setting and its final results. As mentioned above, the work Christ et al already presents provably secure distortion free black-box watermark that is also robust to adversarial attacks (under a formal definition). Can you compare your work with them (and perhaps other similar previous works using crypto and rejection sampling) and explain what exactly the set of features that your work adds?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The problem of black-box watermarking is an important problem, and having new schemes in this direction would be interesting. However, as I explain below, the schemes should be clear in what they offer and what is their advantage over previous work." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents watermarking schemes for LLM’s outputs, in the setting that we only have black-box access to the model’s “next token generation” function.\n\nThey claim their scheme is “distortion free” and “can be used in a nested way”.\n\nIn a bit more detail, the paper’s scheme is based on a scoring function, which in turn is based on a secret key. Then, when the LLM’s output is being generated, at each step, multiple samples are gathered. Then, the scoring function is applied to them all and the one with the highest score is chosen." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of the paper is that it is barely readable, when one actually wants to understand the scheme and the arguments. The presentation of the scheme is super dense and lacks formality. Instead of introducing ideas one by one, they are jammed and one gets no intuition as to what is goin on, beyond the high level description of “using scores”.\n\nIn fact, the paper’s main setting (which seems to be the main novelty) is already used in previous work published in learning venues. For example, this (cited work) from more than year ago (published in COLT) https://eprint.iacr.org/2023/763 exactly studies the setting that the paper does: black-box access to the token generation function, and does use a similar idea of using a hash function to pick the next token by rejecting some. It is also provably robust (under certain conditions) as opposed to the weaker model studied here (random substitution) and comes with clear theorems that prove undetectability (which implies distortion free-nes and utility both).\n\nOne main comment for improving the writing: \n\n- Try to define everything formally and at the right pace.\n- There are also issues with using crypto terms without clarity. 
For example, F is a CDF, and then F[s] is a “single draw from a pseudorandom number generator for F seeded by integer seed s”. I know cryptography well, but I have no idea what this sentence means. Then, it is assumed that F[h(K,w)] is a PRF. What is the citation that this is a PRF whenever F is a PRG? (I don’t think this is true actually).\n- What is the role of n-gram, l-gram, and their relation with tokens? Sentences like “where we allow the left endpoint to spill over only to…” are super informal and cannot be formally understood and checked.\nTheorem 4.1: what is F, and why should it be continuous? When it comes to efficient algorithms none are actually continuous (everything is discrete) so this is a strange assumption to make." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "a new scheme for watermarking black-box language models" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Watermark for Black-Box Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0koPj0cJV6},\nnote={under review}\n}" }, "abstract": { "value": "Watermarking has recently emerged as an effective strategy for detecting the outputs of large language models (LLMs). Most existing schemes require \\emph{white-box} access to the model's next-token probability distribution, which is typically not accessible to downstream users of an LLM API. In this work, we propose a principled watermarking scheme that requires only the ability to sample sequences from the LLM (i.e. \\emph{black-box} access), boasts a \\emph{distortion-free} property, and can be chained or nested using multiple secret keys. We provide performance guarantees, demonstrate how it can be leveraged when white-box access is available, and show when it can outperform existing white-box schemes via comprehensive experiments."
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "watermarking", "large language models", "black-box" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6cdbff9434fe067243b9dc5e6d2866d557f5d303.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "A Watermark for Black-Box Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0lMhptUGxP
Large Language Model Alignment via Inverse Reinforcement Learning from Demonstrations
main
Active
Large Language Model Alignment;Alignment from Demonstration
alignment, fairness, safety, privacy, and societal considerations
1;5;5;5
4;5;4;4
2;3;3;3
1;2;3;2
2;4;2;2
4
4.25
2.75
2
2.5
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the authors clarify on why this work falls into the inverse reinforcement learning category? I might be wrong, but my understanding of inverse reinforcement learning is about uncovering the underlying reward function from expert trajectories. This work is not about finding the hidden \"true reward function\" but about matching the demonstration distribution. Thus I am confused by the use of the term \"inverse reinforcement learning\".\n\n2. In a typical alignment pipeline, learning from annotated preference data like RLHF comes after SFT. RLHF often yields mode-seeking distributions in contrast to SFT. Could the authors comment on the compatibility of the proposed method and RLHF? Should we expect RLHF to provide further improvement given that the AfD method is already mode-seeking?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper does a good job at interpreting LLM alignment under the learning from demonstration framework. It successfully frames the alignment problem in the language of distribution matching. 
The idea of using the reverse KL distance follows naturally.\n\nThe authors identify the heterogeneity problem in the naive adoption of the discriminator-as-reward method and propose the Init-SFT RM as a solution to mitigate the heterogeneity gap. Init-SFT RM demonstrates strong performance in the experiments. This idea provides insights into learning from demonstration and can be applied to a broader class of problems beyond LLM alignment." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work studies the large language model (LLM) alignment problem in the learning from demonstration framework. Under this framework, the widely adopted supervised finetuning (SFT) paradigm can be interpreted as matching the policy distribution and an unknown expert demonstration distribution with the forward KL distance measure. The authors then propose to consider distribution matching with the reverse KL distance. This problem has been studied in the imitation learning literature. A standard method is to train a discriminator to distinguish the policy trajectories and the expert trajectories and then train the policy by reinforcement learning with reward signals derived from the discriminator's outputs. This work adopts this method in the context of LLM alignment and evaluates it empirically on the Harmless and Helpful dataset. Experiment results show that the proposed method performs better than or on par with the SFT baseline and learning from human annotated preference data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Important details of the method are missing from the main text. Section 3.2 talks about extrapolating the learned reward models but does not provide any detail on how it works in the context of alignment. Perhaps as a consequence, the results presented in Section 4.3 are confusing to me.
It looks like the only difference to Section 4.2 is the evaluation metric being GPT-4 as a judge rather than the golden reward model.\n\nAnother weakness of this work is the lack of understanding of the behavior of the proposed method. The distinction between forward KL distance and reverse KL distance leads to two different methods in SFT and discriminator-as-reward. The authors also discussed the mass-covering and mode-seeking behavior in Section 3. One natural question to ask here is how it impacts the behavior of the alignment algorithms and if they yield different outcomes. However, the discussion in Section 4 is rather hand-wavy. The authors simply characterize the harmless dataset and the helpful dataset as less divergent and more divergent. I think a deeper analysis on the mass-covering and mode-seeking behavior in alignment can greatly improve this work.\n\nIn terms of writing, citation format is misused throughout the manuscript. Please proofread the paper carefully and use the proper citation command (e.g., \\citep{} vs \\citet{}) in the revision." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the authors clarify why they chose to rely primarily on the golden reward model for evaluation rather than using GPT-4 as a critic throughout Section 4.1?
Would the golden reward model alone provide a sufficiently fair or robust assessment of alignment performance, especially given GPT-4’s nuanced evaluation capabilities?\n\n2. Could the authors clarify the key distinction between AfD and SPIN, particularly regarding their reliance on reward models? From my understanding, SPIN uses a DPO-like objective to align LLMs directly without a reward model, whereas AfD relies on a reward model for alignment. Given this, could the authors elaborate on the specific advantages AfD provides over SPIN in terms of contribution to the field?\n\n3. The authors mention that AfD is more efficient than traditional RLHF methods, but it would be helpful to understand precisely where these efficiency gains come from. Could the authors specify which parts of the AfD process contribute to this claimed efficiency, particularly in comparison to standard RLHF?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Innovative Use of RL Concepts:** The authors effectively integrate RL concepts—such as inverse RL, reverse KL, and forward KL divergence—into the LLM alignment framework. This combination with RLHF provides a fresh, rigorous perspective on alignment, enriching AfD’s theoretical foundation and adaptability.\n\n2. **Reduced Dependence on Preference-Based Data:** By bypassing preference data requirements, AfD proposes a scalable alternative that minimizes interaction with human annotators while still achieving alignment, making it potentially more feasible for large-scale applications." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Alignment from Demonstrations (AfD), a new method for aligning large language models (LLMs) using high-quality demonstration data rather than traditional preference-based reinforcement learning from human feedback (RLHF). AfD addresses issues of noisy labels, cost, and privacy by framing alignment within a Markov Decision Process, applying insights from forward and inverse reinforcement learning to create a computationally efficient, trajectory-based mechanism. The approach is validated through theoretical and empirical analyses, showing improvements on “Harmless” and “Helpful” tasks with the Anthropic HH-RLHF dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Overly Complex Presentation:** The paper’s presentation is somewhat dense, with extensive theoretical detail that can make it harder to grasp the core contributions. A more streamlined focus on the main insights and practical implications of AfD could enhance clarity and accessibility for readers.\n\n2. **Potential Overlap with Existing Methods:** The unique contribution of AfD relative to SPIN isn’t entirely clear. SPIN leverages a DPO-like objective to align LLMs directly without relying on a reward model, while AfD introduces alignment through a reward model. Clarifying the specific advantages or improvements AfD provides over methods like SPIN would strengthen the paper’s case for its distinct value.\n\n3. **Efficiency Clarification Needed:** Although the paper suggests that AfD offers greater efficiency than traditional RLHF, it’s unclear where these efficiency gains are realized. The pseudocode presented appears similar to RLHF workflows, with steps involving reward model training and optimization. 
Providing more concrete details on how AfD reduces computational overhead or training time compared to RLHF would clarify the practical benefits of this approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. If it is computationally feasible, could you compare to the closed form for the optimal discriminator in your BoN experiments?\n\n2. If I am understanding correctly, if you used the \"golden\" RM for BoN, you'd get a win rate of 1?\n\n3. Also, is the model you're sampling from here just the result of SFT on the demos from $\\pi_{\\beta}$, aka $\\pi_{SFT}$? If so, is there a theoretical interpretation of the effect of the BoN procedure with the \"closed form\" discriminator I mention above?\n\n4. Could you provide more explanation for why the win rate goes down with higher N for several lines in figure 4?\n\n5. If you have space, could you move up the comparison to SPIN to the main paper? I think it is quite interesting and under-appreciated in the broader community -- I have struggled to convince people of precisely the point you are making."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This is an exceptionally well-written paper with crystal-clear exposition and take-aways -- kudos to the authors!" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors cast LLM alignment as an imitation learning problem, opening up the possibility of learning from demonstrations alone and leveraging insights from inverse RL." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- (Minor) RLHF is usually framed as KL-constrained /MaxEnt RL, rather than standard RL problem formulation in Eq. 2.\n\n- (Minor) Another good citation for intransitive preferences in RLHF might be https://arxiv.org/abs/2401.04056.\n\n- I would argue that the fact that SFT is BC is fairly well known. It also doesn't seem that surprising that doing SFT on data generated by a super high quality model works well -- the question is of course how we train such a powerful supervisor model in the first place, for which preference data still appears to be neccesary. So, it's hard for me to give many points for novelty for that section of the paper.\n\n- For the most preferred RM strategy (comparing $\\pi_{SFT}$ to $\\pi_{init}$) , we know the optimal discriminator in closed form -- it is precisely $d^{\\star}(x, y) = \\log \\pi_{SFT}(y|x) - \\log \\pi_{init}(y|x)$ (if a logistic loss is used, otherwise could be the density ratio in Eq. 9). 
I don't see the added value in actually learning a separate discriminator for the best-of-N sampling procedure -- it seems like we could only do worse than using the log ratio.\n\n- It is a bit disappointing that the final policy requires a BoN step -- I would have liked to see the results of a proper policy optimization procedure on the learned RMs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Can the authors summarize the main contribution of the work? Is there something I am missing?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The work attempts to unify a number of diverse ideas, which is helpful\n* The work makes nice use of different colored boxes so things can easily be found." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents AfD, a sort of framework for learning from demonstrations in LLMs. The authors do a number of different experiments within this framework on a number of different methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Novelty**\nI am unsure what exactly is novel in this work.
To my knowledge nothing the authors introduce is explicitly new, or has new experiments.\n* Sec 2.2: This MDP breakdown for LLMs is well known \n* Sec 3.1: It is well known that SFT = BC\n* Sec 3.1: I have not looked into the discriminator objective to see if it is in prior work, but the authors don't use it in experiments.\n* Sec 3.2: The idea of using the model generations as negatives is done in SPIN and in DITTO (Show don't tell, Shaikh et al.) DITTO also does something similar to this paper by SFTing the model first before sampling.\n* Sec 4.1: These experiments show SFT > RLHF on the same number of demos. I don't find this surprising; similar results are also in Shaikh et al.\n* Sec 4.2: I think the section may be where the authors find novelty?\n\nOverall, the paper seems to focus a lot on unifying different ideas that have existed for a while. While this is OK, the paper is not written as if it were a survey and at present it sounds like the authors are claiming AfD to be some new framework that has not been extensively studied before.\n\n**Writing**\nThe paper is a bit hard to follow since there are so many subjects. I was initially confused as to what was being evaluated in each experimental section. For example, it was initially unclear to me what the different baselines were in Sec 4.1. \n\n**Experiments** \n* the experimental results at present do not seem compelling.\n* Sec 4.1: It makes sense that SFT with demos does better than RLHF. The amount of data isn't reported on however, and so it's unclear what the cost of data collection vs performance tradeoff is.\n\n**Missing Citations**\nThis work brings together a lot of different ideas, which is great, but the authors seem to miss a ton of related work which has already covered very similar ideas:\n\n* IRL: Ziebart is the OG in maxEnt IRL.\n* From r to Q* by Rafailov et al.
- Token level MDP\n* Show don't tell: Aligning LLMs with demonstrated feedback by Shaikh et al has very similar ideas\n* Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data by Tajwar et al. covers mode seeking behavior.\n* Imitating language via scalable inverse reinforcement learning by Wulfmeier et al for IRL on LLMs\n\n\n## Recommendations\nI would recommend that for a future draft the authors either a) refocus the draft to be a survey on applying concepts traditionally used in IRL to language models or b) focus on the reward modeling experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024large,\ntitle={Large Language Model Alignment via Inverse Reinforcement Learning from Demonstrations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0lMhptUGxP},\nnote={under review}\n}" }, "abstract": { "value": "Aligning Large Language Models (LLMs) is crucial for enhancing their safety and utility. However, existing methods, primarily based on preference datasets, face challenges such as noisy labels, high annotation costs, and privacy concerns. \nIn this work, we introduce **_Alignment from Demonstrations_** (AfD), a novel approach leveraging high-quality demonstration data to overcome these challenges. We formalize AfD within a sequential decision-making framework, highlighting its unique challenge of missing reward signals. Drawing insights from forward and inverse reinforcement learning, we introduce divergence minimization objectives for AfD.\nAnalytically, we elucidate the mass-covering and mode-seeking behaviors of various approaches, explaining when and why certain methods are superior.\nPractically, we propose a computationally efficient algorithm that extrapolates over a tailored reward model for AfD. 
We validate our key insights through experiments on the Harmless and Helpful tasks, demonstrating their strong empirical performance while maintaining simplicity." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Model Alignment", "Alignment from Demonstration" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d15ef645b1bb8895b5b62c2297a78fb43eed6923.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Large Language Model Alignment via Inverse Reinforcement Learning from Demonstrations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0lVQBMhsPG
ETC: Towards Training-Efficient Video Synthesis with Exploiting Temporal Capabilities of Spatial Attention
main
Active
Efficient Video Generation;Video Diffusion Model
generative models
3;3;5;5;5
4;5;5;5;5
1;1;2;2;2
1;1;2;2;2
2;2;2;3;3
4.2
4.8
1.6
1.6
2.4
0.612372
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Figure 3, why is it necessary to rearrange frames of videos into a single image?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Proposes a highly efficient framework that eliminates temporal attention, reducing computational cost, which is an interesting idea.\n- Innovatively uses a temporal-to-spatial transfer strategy and spatial-temporal embedding to enable video generation without sacrificing temporal consistency.\n- Demonstrates superior performance with fewer training samples, achieving quality comparable to or better than current state-of-the-art methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces ETC, a novel text-to-video synthesis model focused on training efficiency by exploiting spatial attention for temporal modeling. Unlike existing models that add temporal attention layers, ETC leverages only spatial attention with a temporal-to-spatial transfer strategy and spatial-temporal mixed embedding. This design reduces data dependency, allowing high-quality, efficient video generation using significantly smaller datasets." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors use filtered high-quality video data to train their model, whereas the baseline methods do not incorporate this filtration step, potentially creating an uneven comparison. This difference in data quality could give the proposed model an advantage that does not solely stem from its architectural innovations.\n- The paper claims that “We demonstrate that spatial attention modeling a linear mapping and alternating between spatial and temporal attention modeling another linear mapping, which does not model complex derivative or quadratic relationships.” However, this statement does not fully consider the inherent non-linearities of the model, nor does it account for the potential effects of stacking multiple spatial-temporal layers, which could enhance the model’s capacity to capture more complex relationships, including quadratic ones.\n- Limited exploration of possible visual artifacts that may arise from removing explicit temporal modeling layers leaves open questions regarding the visual consistency and quality of generated videos. Additionally, relying primarily on FVD and CLIP scores limits the evaluation, as these metrics do not adequately capture human preference for smooth and realistic motion in videos. More human-centric evaluation metrics would improve the assessment of model performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see above. If the author solves my problems, I will consider raising the score. Thanks." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper discusses how to generate high-quality videos using only a pre-trained text-to-image model, which is very interesting.\n2. The structure of this paper is well-organized and easy to follow. \n3. The experimental results show the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper demonstrates that the spatial attention in T2I has a strong capability of temporal modeling and can boost the efficiency of training. Furthermore, this paper also proposes a training-efficient framework, called ETC." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are some questions.\n1. In the area of text-to-video generation, GridDiff adopts a similar approach. What distinguishes this work from GridDiff?\n2. In lines 836 and 837, the authors claim that the primary components in the attention mechanism are linear operations. However, there are also some non-linear layers present in the whole network. If we take these non-linear layers into account, do equations (9) through (13) still hold?\n3. In lines 191 to 192, the authors claim that single spatial attention has a larger receptive field than spatial and temporal attention combined. However, I think it is not appropriate to consider spatial and temporal attention in isolation from the rest of the network.
If spatial and temporal attention are treated as a unified block for video modeling, would their receptive field still be considered smaller?\n4. From Section 4, it appears that all video frames should be arranged into a single grid image. However, in Figure 3(a), there seem to be empty spaces. Why is this?\n5. In the Spatial-Temporal Mixed Embedding section, the authors use absolute positional encoding. If the goal is to generate videos of varying resolutions and different video lengths, would it be necessary to include videos with diverse resolutions during the training phase?\n6. For a more comprehensive quantitative evaluation of video generation, I recommend that the authors use a broader set of metrics, such as Vbench. Additionally, I suggest that the authors provide a video demo, allowing reviewers to more intuitively assess the quality of the generated videos." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the two major concerns listed in **Weaknesses**." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- Studying the data efficiency in learning T2V models deserves a pat." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work aims at improving the data efficiency in training T2V models via reusing spatial attention for temporal modeling. In particular, the authors propose to rearrange a sequence of frames into a \"stitched\" huge frame. The authors claim that they achieve better synthesis quality than existing alternatives yet using less data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- From the motivation (or say theoretical foundation) part, I believe there exist **technical flaws**.\n\n - Intuitively, removing the temporal module and reusing the spatial module to handle both spatial and temporal information will definitely affect the model capacity. From this perspective, the so-called \"temporal capabilities\" of spatial attention does not convince me.\n - I will explain my concern with a toy example. Let $A = (a) \\in \\mathbb R^{1 \\times 1}, B = (b_{ij}) \\in \\mathbb R^{2 \\times 2}$, and $X = (x_1, x_2) \\in \\mathbb R^{1 \\times 2}$. Assuming that $A, B$ are invertible, as required by the authors, there does **not** exist $A' = (a')$ such that $AXB=A'X$ for any $X$. First, note that $AXB = (a(b_{11}x_1 + b_{21}x_2), a(b_{12}x_1 + b_{22}x_2))$, and $A'X = (a'x_1, a'x_2)$. Then if $b_{11} = b_{22} = 0$, $AXB = (ab_{21}x_2, ab_{12}x_1)$ cannot be equal to $A'X = (a'x_1, a'x_2)$ for any $x_1 \\neq x_2$. This clearly contradicts the claim in Line 878, which means **the theoretical foundation of this work does not hold**.\n\n- From the empirical part, the quality of videos generated by ETC are not as good as those generated by previous approaches. 
I believe the reason is just that the modeling capacity of spatial attention struggles to handle the temporal information.\n\n - The frames in the last row of Figure 5 and those in Figure 6a are blurry.\n - The motion in all presented videos seems to be really small (Figure 6b, Figure 21, Figure 22, Figure 23).\n - No videos are even provided in the supplementary material, which is very strange for a submission working on video synthesis.\n - Given the above observations, I wonder why the FVD metric from ETC is so small compared to other competitors." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "● The paper presents a new perspective by leveraging spatial attention for temporal modeling. It is interesting as this approach not only simplifies the architecture but also reduces training costs, providing new insights for video generation tasks.\n\n● If all results are true under a fair comparison, the performance improvement is significant."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces ETC, a framework aimed at training-efficient text-to-video (T2V) synthesis by exploiting spatial attention for temporal modeling. The authors propose to eliminate temporal attention layers, typically used in T2V models, by using spatial attention from pre-trained text-to-image (T2I) models. The framework introduces techniques like temporal-to-spatial transfer and spatial-temporal mixed embedding to handle video frames within a spatial grid. Extensive experiments demonstrate superior performance in terms of quality and efficiency over several state-of-the-art (SOTA) methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "● It lacks convincing explanation for superior performance. While the authors attempt to explain why spatial attention can replace temporal attention, the reasons behind the significantly better results remain unconvincing. It is unclear why the proposed approach would outperform existing models to such an extent, especially considering the limited training resources used (8 NVIDIA 3090 GPUs).\n\n● The model’s performance raises concerns about its generalization to more complex datasets or scenarios, especially given the small-scale training. The absence of detailed discussions about potential limitations, such as the restricted ability to model large motions due to implicit spatial relation modeling, weakens the validity of the results.\n\n● Lack of visual evaluation. While the quantitative results are compelling, there is no video evaluation provided to visually demonstrate the effectiveness of the ETC framework. 
Also, the code in the supplementary materials is too basic to allow a direct assessment of the model’s qualitative improvements.\n\n● In the supplementary materials, the authors claimed they include comparisons with many baselines, while the main paper does not provide sufficient detail on all these baselines or whether the comparisons were fair. This raises questions about the reported results, given that other well-recognized SOTA models typically use more data and computational resources. It would be beneficial to clarify how the proposed model achieves consistently the best results under such limited training conditions (as shown in Table 1)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Questions in Weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The motivation and writing of this paper are very clear, making it easy to follow.\n- From a quantitative perspective, the paper achieves good metrics at a relatively low training cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a training-efficient approach to train text-to-video (T2V) models. 
It explores how to transfer text-to-image (T2I) models to the T2V task without introducing a temporal model. Additionally, it proposes a data-efficient hybrid training method that allows the model to achieve favorable FVD metrics with relatively low training costs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The novelty is somewhat limited, as the approach in this paper aligns closely with [1], which also uses a grid-based approach to convert videos into images. The method in [1] originates from [2], which restricts the novelty of this paper.\n- Although the paper proposes the spatial-temporal mixed embedding method, in essence, it is equivalent to adding a positional embedding. I am curious about how it prevents disrupting the T2I model’s weights at the beginning—this is an important point.\n- The FPS embedding design is also not novel; it was first introduced in MagicVideo. The mixed FPS ranges from 0 (pure image) to 120 (single-frame replication). This design lacks significant originality.\n- What bothers me most is the qualitative results. Although the quantitative metrics are promising, the qualitative results fall behind recent state-of-the-art video generation models like DiT architectures, OpenSora, Opensora-Plan, CogvideoX, etc. The failure cases, in particular, perform poorly.\n- The paper does not validate any scaling laws in terms of data or model scalability.\n- The authors should analyze more thoroughly where the quantitative advantages come from. Given the generally unimpressive visual quality, I can only assign a borderline rejection score for now.\n\nReferences: [1] Lee T, Kwon S, Kim T. Grid Diffusion Models for Text-to-Video Generation [C] // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 8734-8743.\n\n[2] Fan Q, Panda R. Can an image classifier suffice for action recognition? [J] arXiv preprint arXiv:2106.14104, 2021." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024etc,\ntitle={{ETC}: Towards Training-Efficient Video Synthesis with Exploiting Temporal Capabilities of Spatial Attention},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0lVQBMhsPG},\nnote={under review}\n}" }, "abstract": { "value": "Recently, synthesizing video from the text, i.e, Text-to-Video (T2V), has demonstrated remarkable progress by transferring the pre-trained Text-to-Image (T2I) diffusion models to the video domain, whose core is to add new temporal layers for capturing temporal information. However, these additional layers inevitably incur extra computational overhead, as they need to be trained from scratch on large-scale video datasets. Instead of retraining these costly layers, we conjecture whether temporal information can be learned from the original T2I model with only Spatial Attention. To this end, our theoretical and experimental explorations reveal that Spatial Attention has a strong potential for temporal modeling and greatly promotes training efficiency. Inspired by it, we propose ETC, a new T2V framework that achieves high fidelity and high efficiency in terms of training and inference. Specifically, to adapt the video to the spatial attention of T2I, we first design a novel temporal-to-spatial transfer strategy to organize entire video frames into a spatial grid. Then, we devise a simple yet effective Spatial-Temporal Mixed Embedding, to distinguish the inter-frame and intra-frame features. Benefiting from the above strategy that actually reduces the model's dependence on the text-video pairing dataset, we present a data-efficient strategy, Triple-Data (caption-image, label-image, and caption-video pairs) fusion that can achieve better performance with a small amount of video data for training. 
Extensive experiments show the superiority of our method over the four strong SOTA methods in terms of quality and efficiency, particularly improving FVD by 49% on average with only 1% of the training dataset." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Efficient Video Generation", "Video Diffusion Model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9bae2fff7523e606ffff941654767c91635aa788.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
}, "summary": null, "supplementary_material": { "value": "/attachment/e5ff80a3923e80877bc1aa6f404525dd7b80151e.zip" }, "title": { "value": "ETC: Towards Training-Efficient Video Synthesis with Exploiting Temporal Capabilities of Spatial Attention" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0m27tvXkNm
Robust EEG Classification via Graph Neural Networks
main
Withdraw
EEG Classification;Graph Neural Networks;Dynamic Time Warping
learning on time series and dynamical systems
Nourhan Ahmed;Johannes Burchert;Vijaya Krishna Yalavarthi;Maximilian Stubbemann;Lars Schmidt-Thieme
~Nourhan_Ahmed1;~Johannes_Burchert1;~Vijaya_Krishna_Yalavarthi1;~Maximilian_Stubbemann1;~Lars_Schmidt-Thieme1
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": { "value": "@misc{\nahmed2024robust,\ntitle={Robust {EEG} Classification via Graph Neural Networks},\nauthor={Nourhan Ahmed and Johannes Burchert and Vijaya Krishna Yalavarthi and Maximilian Stubbemann and Lars Schmidt-Thieme},\nyear={2024},\nurl={https://openreview.net/forum?id=0m27tvXkNm}\n}" }, "abstract": { "value": "Electroencephalogram (EEG) classification has gained prominence due to its applications in medical diagnostics and brain-computer interfaces. However, EEG data is known to have a low signal-to-noise ratio, resulting in high variance in predictions across similar instances. To overcome this issue, we introduce RoGra, a novel approach leveraging residual graph convolutional networks for robust EEG classification. Our model incorporates dynamic time warping (DTW) to align temporal information and capture meaningful neighborhood relationships, enhancing robustness against artifacts. 
Experiments on three well-established EEG datasets demonstrate that RoGra outperforms baseline methods by up to 25\\%, marking the largest improvement in EEG classification accuracy since the introduction of the seminal EEGNet. Our code is publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Nourhan_Ahmed1", "~Johannes_Burchert1", "~Vijaya_Krishna_Yalavarthi1", "~Maximilian_Stubbemann1", "~Lars_Schmidt-Thieme1" ] }, "authors": { "value": [ "Nourhan Ahmed", "Johannes Burchert", "Vijaya Krishna Yalavarthi", "Maximilian Stubbemann", "Lars Schmidt-Thieme" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "EEG Classification", "Graph Neural Networks", "Dynamic Time Warping" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "ahmed|robust_eeg_classification_via_graph_neural_networks" }, "pdf": { "value": "/pdf/fa651afc70ede5bed522c3162db313a4f81e547f.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/9e568f8d8d44d3654b9d48994ffd38a3abb5ddab.zip" }, "title": { "value": "Robust EEG Classification via Graph Neural Networks" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0mJZplhexS
Speeding Up Image Classifiers with Little Companions
main
Active
model compression;computer vision;efficiency
applications to computer vision, audio, language, and other modalities
3;5;5;5
3;4;3;3
3;2;2;3
2;2;3;2
3;4;3;3
4.5
3.25
2.5
2.25
3.25
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1) The authors could also plot a figure for top-1 accuracy vs. latency.\n\n(2) How long does it take to load and unload models compared with the inference latency for each batch? I'm wondering whether the method can be used for online inference." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1) The motivation and method of Little-Big is very simple and straightforward. \n\n(2) It seems that Little-Big is very easy to implement. In addition, Little-Big is model-agnostic which can be applied to models with different scales and architectures.\n\n(3) Little-Big can accelerate a pre-trained model without introducing additional training cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method named Little-Big to accelerate image classification with neural networks. Little-Big uses a light-weight model to quickly classify all of the samples and selects the \"hard\" samples which get low confidence behind the threshold. Then, it uses a large model to update the prediction for each hard sample. 
Little-Big can significantly reduce the inference cost and latency for many advanced large classification models without sacrificing the accuracy. The authors provide many experiments with different pairs of large and small models to validate the effectiveness of Little-Big." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) Lack of novelty. As the authors say, Little-Big is an embarrassingly simple method, which adopts a large model and a light-weight model for image classification. It's the major advantage but also the major disadvantage of Little-Big. Many previous works share a similar motivation with Little-Big, using different networks for acceleration, such as early exiting and speculative decoding as you mentioned in the paper. However, these works mostly include specific and delicate designs. I understand that Little-Big is very simple, but I don't think it's novel.\n\n(2) The proposed method only focuses on classification tasks. While the authors provide an example of how to extend it to video classification, it's hard to directly apply the method to other popular tasks (e.g., object detection and segmentation), which limits its use.\n\n(3) The authors could include more classification tasks to further prove the generalization ability of Little-Big, such as multi-label and binary classification." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Please clarify the relationship between \"hardness\" and model confidence. Is low confidence always indicative of a hard sample, or are there cases where this assumption does not hold?\n- Can you provide results where the threshold T is determined using only the training set or a held-out portion of the validation set? This would help to assess the potential for overfitting to the validation set and the generalizability of the method." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed Little-Big algorithm is conceptually straightforward and easy to implement. It requires minimal modifications to existing models and training pipelines.\n- The paper demonstrates significant MACs reduction across a range of model architectures (CNNs, transformers, hybrids) and scales, suggesting broad applicability.\n- Experiments are conducted on multiple datasets (ImageNet-1K, ImageNet-ReaL, ImageNet-V2) to evaluate the robustness and generalizability of the method.\n- The Little-Big approach addresses a critical issue in deploying large vision models: their high computational cost. The proposed method offers a practical solution for model compression without retraining or complex modifications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a simple yet effective two-pass algorithm called \"Little-Big\" to speed up image classifiers. The core idea is to leverage a smaller, less computationally expensive \"Little\" model to pre-screen input samples. 
Only samples for which the Little model exhibits low confidence are then passed to a larger, more accurate \"Big\" model. The paper claims that this approach significantly reduces the computational cost (measured by Multiply-Accumulate operations or MACs) for a variety of model architectures and scales, without sacrificing accuracy on ImageNet-1K and other datasets. The authors demonstrate MACs reductions of up to 80% while maintaining or even improving accuracy compared to the single Big model baseline. They also argue that this approach is more effective than existing model compression and adaptive computation methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Major\n- The method seems to rely on finding an optimal threshold T on the test set (ImageNet validation set) to determine which samples are passed to the Big model. This raises concerns about potential overfitting to the validation set and its impact on generalization performance. Results should be provided using a threshold determined on the training or a held-out portion of the validation set to address this concern.\n- The paper could benefit from a more comprehensive discussion of related work, particularly in areas like cascade models and dynamic inference methods. Specifically, work on early-exit models [1] and confidence-based dynamic routing [2] appears closely related and should be discussed. This would help to better contextualize the novelty and contributions of the proposed approach. \n- Experiments focus solely on the ImageNet dataset. More experiments are needed to understand the robustness of the proposed method. \n- I also find it difficult to parse the results presented in huge tables, specifically Table 3: there are multiple baselines for DeiT models. Are you comparing results with different baseline accuracies? 
\n\n[1] https://github.com/txsun1997/awesome-early-exiting?tab=readme-ov-file\n[2] https://arxiv.org/pdf/2102.04906\n\n# Minor\n- [L132] I can't see the definition of \"w and l\".\n- [Section 2.3] Quantization is a key method for compression and not mentioned here. Also Mixture of Depths (https://arxiv.org/abs/2404.02258)\n- The paper's use of \"hardness\" and its relationship to model confidence is not always clear. In some sections, low confidence is equated with hardness, while in others, the opposite is implied. This needs clarification. For example [L199] \"which allows us to approximate a \"hardness\" axis with prediction confidence.\" hardness means low confidence, no?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1) Can you develop better ways to explain your idea’s novelty, or do you have any ideas to further enhance the novelty?\n\nQ2) Can this idea be applied to language models as well? Particularly, I'm interested to see how it compares with speculative decoding." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1) The proposed method is practical. It is easy to implement and does not require any modification or additional training of existing models. \n\nS2) It is widely applicable. 
For any classification problem, it’s readily available. We could apply it to other tasks as well if we could come up with confidence estimation methods for them.\n\nS3) Extensive experimental results show that the proposed method performs robustly well on the ImageNet classification task." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a technique called the Little-Big algorithm. It combines a small and large pre-trained model to improve the trade-off between cost and accuracy. It first applies the small model to a given sample. When the confidence is high, the prediction is returned. Otherwise, the large model is applied, and its prediction is used.\n\nIn experiments, the authors focused on the ImageNet-1K image classification task. They demonstrated that the proposed method boosts efficiency for various pairs of models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1) The proposed approach lacks novelty. The idea of using multiple models with different cost-accuracy tradeoffs is highly common; examples include speculative decoding for language models and cascade ranking systems for recommendation and information retrieval. \n\nW2) The experiments are weak. All the experiments concern the ImageNet-1K image classification task, so it is quite uncertain whether this method works well for other tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "My main suggestion is regarding the fairness of comparisons with baselines (mentioned above).\n\nI also would like to understand the justification for the choice of Little model (mentioned above)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper studies an important topic - efficiency of visual recognition models.\n- The speedup claimed by the paper is substantial. At a fixed accuracy, their method improves speed by 30%-80% (Figure 1).\n- The paper is written clearly and is easily understandable. The investigation presented studies the natural questions that arise with threshold tuning for the Little model's confidence. Figure 3 clearly demonstrates how the accuracy and efficiency change as a function of the threshold.\n- The paper accounts for generalization across different datasets by fixing the threshold on ImageNet and analyzing results on ImageNetv2 and ImageNetReal. This is an important aspect of the investigation, as choosing the threshold based on the validation set can create bias." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper discusses improving the speed of visual recognition systems using a \"Little-Big\" model setup. The Little model is a smaller architecture that processes examples first. If the confidence is below a predefined threshold, the sample is reprocessed by a \"Big\" model. This simple setup improves the speed of visual recognition systems on ImageNet, ImageNetv2, and ImageNetReal significantly (without loss in accuracy). The authors experiment with both CNNs and Transformer models. 
They also study the fraction of examples processed by the Little and the Big network, showing the accuracy as a function of threshold." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concern is the comparison with prior art. As line 437-439 states, \"even with tricks that effectively retrained models, many pruning methods are not competitive... with modern baselines... which in essence are better trained ViTs\". It seems that the Little-Big method is evaluated using modern architectures and training recipes, whereas other baselines (pruning, etc.) are using older architectures or training recipes. I'm worried that the gains of this method are primarily attributable to the use of newer architectures or training recipes. A fair comparison would use the same architectures as previous works.\n- For example, Table 3 shows a datapoint with T=0, meaning the Big architecture is never used.\n- Additionally, the baseline architecture in Table 3 (\"Our Baseline\") is significantly more accurate than previous work's baseline (Yin et al.).\n\nThe choice of \"Little\" network seems arbitrary in some cases. In Table 2, EfficientNet-B2-288 uses EfficientViT as a little network, but most other EfficientNet variants do not. And EfficientNet-B2-288 is not used with EfficientNet-B1, but most other variants are. I have similar thoughts on most of the rows in Table 2. Can you please justify the choice of Little architecture?\n\nLine 132: Equation 2 should have a reference, and there should be some more specific qualifications as to what types of models this equation applies to. 
Similarly, the characteristic width w_j is not well defined and doesn't have a reference.\n\nLine 150: I recommend also discussing quantization briefly here.\n\nLine 172-173: \"ingest gigabits/s of raw visual information and compress it to tens of bits/s\" <- this needs a reference\n\nIn Table 3, it would help the reader if you mark the baselines by their general approach (e.g. which ones are pruning, etc.).\n\nLine 202: \"confidence > 0.5-0.7\" <- what does this mean? How can a confidence be greater than a range? Did you instead mean \"0.5 < confidence < 0.7\"?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024speeding,\ntitle={Speeding Up Image Classifiers with Little Companions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0mJZplhexS},\nnote={under review}\n}" }, "abstract": { "value": "Scaling up neural networks has been a key recipe to the success of large language and vision models. However, in practice, up-scaled models can be disproportionately costly in terms of computations, providing only marginal improvements in performance; for example, EfficientViT-L3-384 achieves <2% improvement on ImageNet-1K accuracy over the base L1-224 model, while requiring 14× more multiply–accumulate operations (MACs). In this paper, we investigate scaling properties of popular families of neural networks for image classification, and find that scaled-up models mostly help with “difficult” samples. Decomposing the samples by difficulty, we develop an embarrassingly simple model-agnostic two-pass Little-Big algorithm that first uses a light-weight “little” model to make predictions of all samples, and only passes the difficult ones for the “big” model to solve. Good little companions achieve drastic MACs reduction for a wide variety of model families and scales. 
Without loss of accuracy or modification of existing models, our Little-Big models achieve MACs reductions of 76% for EfficientViT-L3-384, 81% for EfficientNet-B7-600, 71% for DeiT3-L-384 on ImageNet-1K. Little-Big also speeds up the InternImage-G-512 model by 62% while achieving 90% ImageNet-1K top-1 accuracy, serving both as a strong baseline and as a simple practical method for large model compression." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "model compression", "computer vision", "efficiency" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9eacc4378762f2f9896546642c2a47a2df383462.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f22dc030d01873b7c0c09744159630b87aa13da5.zip" }, "title": { "value": "Speeding Up Image Classifiers with Little Companions" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0mdUV1pLGP
Hawkes process revisited: balancing interpretability and flexibility with contextualized event embeddings and a neural impact kernel
main
Active
Event sequence;Hawkes Process;Interpretability;Embedding Space
interpretability and explainable AI
3;3;3;5
4;5;4;4
3;2;2;2
1;2;1;2
3;2;2;3
3.5
4.25
2.25
1.5
2.5
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "-" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(1) Introduces a generalized Hawkes process where impact functions are defined via a flexible, neural network-based impact kernel within an event embedding space.\n\n(2) The proposed method can flexibly incorporate transformer encoder layers to contextualize event embeddings based on the historical sequence of events, which can explicitly manage the balance between interpretability and model complexity.\n\n(3) The authors show that the transformer encoder layers are often unnecessary to achieve state-of-the-art performance, and demonstrate via real-data experiments the competitive performance of the proposed method against existing models while maintaining interpretability" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Neural network-based HPs offer greater flexibility and improved performance in modeling event sequences with self-reinforcing dynamics, but at the cost of interpretability. 
This paper proposes to address this challenge by leveraging a neural impact kernel in event embedding space, which allows it to capture complex event dependencies without assuming specific parametric forms, while still retaining the core interpretability of traditional Hawkes processes. Real data experiments are conducted to demonstrate the competitive performance with existing models while maintaining interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The core idea is simple, which introduces a neural network-based impact kernel within an event embedding space to improve interpretability while keeping competitive performance. It would be better to discuss the effects of the impact kernel on the modeling performance in detail, and also illustrate how to choose or design the appropriate kernels in applications for a better balance between interpretability and model complexity.\n\n(2) Generally, increased model complexity may lead to a higher model likelihood value. So, it's not adequate to only compare the likelihood between the proposed method and other models. It would be necessary to also compare out-of-sample metrics, such as out-of-sample prediction performance, for all comparisons between the proposed method (including the variant with transformer encoder layers) and existing methods. \n\n(3) The authors state that \"Given the large size of many of these datasets, we believe it is unlikely that this is the result of insufficient data, and more likely that ENHP is already sufficiently flexible to capture the underlying data distribution\" in lines 395-397 of page 8. The statement is neither adequate nor convincing, and it's better to add some necessary experiments based on simulated data for illustration." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper provides a comprehensive overview and classification of current models for event sequence prediction, covering traditional HPs, RNN-based HPs, attention-based HPs, and so on. \n2. This paper is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel approach for modeling Hawkes processes (HPs), where a deep neural network is used to model the influence function, making it not only more flexible than traditional HPs but also enhancing model interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper is an incremental work on TPPs. There are many related works investigating the interpretability of TPPs. Adopting neural networks to model impact kernels is also quite common. The core contribution of model novelty is limited. \n2. The assumption that the influence between events is always positive is too strong, and many real-world scenarios do not fit this assumption, which limits the model’s flexibility. \n3. 
The experimental improvement is not evident. Compared with current methods, the improvement of this work is marginal." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to Strengths and Weaknesses for more details." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The motivation is clear, and the issue of enhancing the interpretability of the neural Hawkes process is of considerable significance.\n\n2. This paper proposes three designs for neural kernel functions, each balancing model flexibility and interpretability to different extents.\n\n3. The article is well-structured and easy to follow" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on understanding and improving the trade-off between the flexibility and interpretability of the Hawkes process. The authors replace the Hawkes process's parametric kernel functions with a neural network-based impact kernel within an event embedding space, thereby enhancing the model’s flexibility. 
This neural network-based impact kernel retains some properties of the Hawkes process, such as positive intensity and additive influence, thereby enabling good interpretability. Additionally, to manage the balance between model complexity and interpretability, the authors introduce optional transformer encoder layers to contextualize event embeddings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There are generally three metrics for evaluating point process models: log-likelihood, accuracy (acc), and root mean square error (RMSE) [1]. Among these, log-likelihood measures the model’s goodness-of-fit, while accuracy and RMSE measure the model’s event prediction performance. This paper only uses log-likelihood. Furthermore, in terms of log-likelihood, the proposed method does not demonstrate a significant advantage over other baseline models.\n\n2. Equation 8 seems to imply an assumption that the influence between events is always a positive excitation (because the softplus function is applied to all components, including W , K(t), and μ_k). What if the influence of events is \"inhibition\" rather than \"excitation\"?\n\n3. 
The neural kernel function seems capable of modeling only the influence of one event on another, but in some scenarios, multiple events occurring together may be required to trigger a subsequent event, as in the case of synergy [2].\n\nReference: \n\n[1] EASYTPP: TOWARDS OPEN BENCHMARKING TEMPORAL POINT PROCESSES (ICLR'24)\n\n[2] CAUSE: Learning Granger Causality from Event Sequences using Attribution Methods (ICML'20)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In line 215, the authors state that the computational complexity of the Monte Carlo method is $O(L^2NK)$, while the computational complexity of the numerical method is $O(LNK)$. Is there an error here? Based on my understanding, the computational complexity of the numerical method should be $O(L^2K)$.\n\n- In line 133, the authors claim that SAHP does not explicitly model decaying temporal effects. However, I would argue that their model also does not capture decaying temporal effects. Specifically, in line 271, the authors cannot ensure that the output $K$ decreases as $\\Delta t$ increases.\n\n- How were the results in Figure 3c obtained? 
From my understanding, the \"dimension\" in Figure 3c represents \"topics\" (which includes multiple event types, as described in Table 3), whereas the \"dimension\" in Figures 3a and 3b pertains to \"event types.\"" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper provides a detailed analysis of the interpretability of the proposed model, as discussed in Sections 4.5 and 4.6.\n- The writing is clear, making the paper easy to follow, and the results straightforward to reproduce." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper reformulates the classic Hawkes process by incorporating neural networks to enhance its expressive power. Specifically, the authors propose three types of neural Hawkes processes: one based on one-hot vectors, one based on event representations, and one utilizing latent vector representations. Through extensive experiments, the authors demonstrate that the event representation-based neural Hawkes process generally achieves strong predictive performance while maintaining excellent interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper lacks significant innovation. The authors should refer to Equation 11 in reference [1] and Equations 3, 4, and 5 in reference [2]. The approach in this paper closely mirrors these works, specifically the use of neural networks to parameterize the impact kernel of the Hawkes process.\n\n[1] Song Y, Lee D, Meng R, et al. Decoupled Marked Temporal Point Process using Neural Ordinary Differential Equations. In The Twelfth International Conference on Learning Representations.\n[2] Zhou Z, Yu R. Automatic Integration for Fast and Interpretable Neural Point Processes. In Learning for Dynamics and Control Conference. 
PMLR, 2023: 573-585." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024hawkes,\ntitle={Hawkes process revisited: balancing interpretability and flexibility with contextualized event embeddings and a neural impact kernel},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0mdUV1pLGP},\nnote={under review}\n}" }, "abstract": { "value": "The Hawkes process (HP) is commonly used to model event sequences with selfreinforcing dynamics, including electronic health records, stock trades, and social media interactions. Traditional HPs capture self-reinforcement via parametric impact functions that can be inspected to understand how each event modulates the intensity of others. Neural network-based HPs offer greater flexibility, resulting in improved fit and prediction performance, but at the cost of interpretability, which can be critical in medicine and other high-stakes settings. In this work, we aim to understand and improve upon this tradeoff. We propose a novel HP formulation in which impact functions are modeled by defining a flexible impact kernel, instantiated as a neural network, in event embedding space, which allows us to model large-scale event sequences with many event types. This approach is more flexible than traditional HPs, because we do not assume a particular parametric form for the impact functions, yet more interpretable than other neural network approaches, because self-reinforcing dynamics are still entirely captured by the impact kernel, which can be inspected. If needed, our approach allows us to trade interpretability for flexibility by contextualizing the event embeddings with transformer encoder layers. Results show that our method accurately recovers impact functions in simulations and achieves competitive performance on real-world datasets even without transformer layers. 
This suggests that our flexible impact kernel is often sufficient to capture self-reinforcing dynamics effectively, implying that interpretability can be maintained without loss of performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Event sequence", "Hawkes Process", "Interpretability", "Embedding Space" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b920b8d768e14800b32f0ac451337153affb9dcb.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Hawkes process revisited: balancing interpretability and flexibility with contextualized event embeddings and a neural impact kernel" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0mo2yqOS6Z
Enhancing Accuracy and Parameter Efficiency of Neural Representations for Network Parameterization
main
Active
Implicit Neural Representations;Parameter Generation;Network Prediction;Distillation
other topics in machine learning (i.e., none of the above)
5;5;5;6;6
5;4;4;4;3
1;3;2;3;3
2;3;1;3;3
3;2;2;3;3
5.4
4
2.4
2.4
2.6
-0.645497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Fig. 1: \"While one expects that the reconstruction error must approach zero to recover the true performance\" How was the \"low error\" defined? Empirically searched?\n1. Does the method only work with CNNs? I thought no? Maybe only empirically no experiments were done. It is worth trying on transformers." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. the topic is of interest and benefits the community\n2. the proposed \"separation\" is flexible" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors study the fundamental trade-off regarding accuracy and parameter efficiency in neural network weight parameterization using predictor networks. They present a finding where the predicted model not only matches but also surpasses the original model’s performance through the reconstruction objective (MSE loss) alone. Experiments are done on CIFAR, STL and ImageNet."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- the \"low reconstruction error\" seems to be a bit arbitrary and I do not see a very good way to find it.\n- The reasoning/intuition on why the proposed method even \"improves\" the performance is lacking\n- see more in questions.\n\nminor issues:\n1. typos near line 509-510\n2. Fig 5 can be earlier?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethical concerns" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The problem setup in this paper is not clear; Is the original network with parameters $W$ pretrained for the target task? How is the dataset split used to train $W$ and evaluate, for instance, the accuracy vs. reconstruction error results? Which task are these models evaluated on? and what models are used? For instance, what is the size of the predictor network in the experiments in Sec. 3.1? From some of the figures (e.g., figure titles or legends) later in the paper I could see some of these details but they should be summarized before describing the results in Sec. 3.\n\n- In Figure 1 (left) the authors show an expected behavior of the tradeoff between accuracy and reconstruction error; what is this figure based on? why is the expected reduction linear? why is the expected accuracy at a reconstruction error of 0.015 around 20%?
I assume the authors used the observed results to build the expected plot, but then again, why the linear behavior instead of the approximate negative quadratic observed in the right plot? Additionally, could the slightly increased accuracy be a product of the variance in the results? error bars and axis labels of the zoomed-in crop of the right plot would be helpful. \n\n- Is the goal of the reconstruction loss to learn to predict the parameters of a network previously trained? If so, why is the accuracy higher with a non-zero reconstruction error? Can we then assume that the \"original\" model was not optimally trained? For instance, if smoothing the weights of the network increases its accuracy on a given task, should not the network be trained with a different regularization? e.g. a higher weight decay, which would also smooth weights and suppress high-frequency components. \n\n- How does the proposed iterative smoothing using predictor networks compare to other weights smoothing techniques like training regularizations or smoothing constraints (e.g., weight decay, dropout, etc.), and the mentioned methods in lines 157-162 (e.g., NeRN with regularization-based smoothness)? \n\n- It has been widely studied that weight smoothing increases performance, generalization, robustness to noise, etc., and many techniques have been proposed to achieve this, so I don't think it is that surprising of a find, and getting an increased accuracy by smoothing parameters via a predictor network seems overkill to me.\n\n- It would be helpful to have results comparing the proposed method with distillation and using distillation to train a smaller network directly (i.e., instead of training a predictor network for a larger model). It seems to me that the advantages of using a predictor network are not that great if the model has to be unwrapped to produce the predictions. Currently, GPU memory is a greater issue than model storage. 
\n\nMinor comments:\n- In lines 214-215 the notation can be confusing, i.e., predictor $P$ with $Q$ learnable parameters and the original network $O$ with $P$ learnable parameters." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper is mostly well-written and has some interesting ideas. It also provides a concise and clear overview of the problem, the literature review is comprehensive, and the experiments are thorough. Finally, the problems that this work tries to address, like model compression and improved representation learning, are relevant to the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors propose using predictor networks to achieve increased accuracy in two ways. The first one is to train the predictor network iteratively using only a reconstruction loss. The increased accuracy is a product of the weight smoothing caused by the reconstruction loss. In the second part, the authors propose to detach the reconstruction loss and distillation loss (as used in previous works) and do it sequentially. They argue that the distillation loss gains are limited to the reconstruction loss when used simultaneously." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have major concerns related to the technical contributions and the practicality of the proposed approach. 
The method achieves increased accuracy because of the smoothing of the weights due to the reconstruction (or iterative reconstruction), which is not surprising, and I think there are several other (easier) ways to increase accuracy, generalization, robustness, etc., by smoothing weights, which makes me believe this approach would not be very practical (In the paper there are no comparisons to these easier alternatives). Another reason I believe this approach is not practical is that reducing model storage is not as big of a concern as reducing the memory needed for the model during training or deployment. \n\nFor instance, let's assume we have an optimally trained model with weight smoothing regularization. For the proposed approach to achieve similar performance to the original model, it will require two networks, i.e., the original and the predictor (e.g., ~40% of the DoF of the original), then we need to have several rounds of reconstruction where the predictor learns to predict the original weights, and then, we need a training stage using only distillation loss. The only real gain will be the reduced memory for model storage (which has to be unwrapped to a larger model to produce predictions), which currently is not a real concern, at least on the applications described in the paper.\n\nDespite this work having some interesting ideas, in its current version, I don't see any major benefits in using it. Please see the questions below for more specifics." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written with a clear and reasonable motivation. The problem setup and comparison with other methods are well articulated.\n2. Extensive validation is conducted on multiple datasets, showing consistent improvements. The figures and tables are presented clearly and logically.\n3. The proposed two-stage strategy not only compensates for the shortcomings of the baseline but also achieves better performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study explores the trade-off between accuracy and parameter efficiency in neural networks using predictor networks. It reveals that the predicted model can exceed the original's performance solely through reconstruction objectives, with improvements accumulating over successive reconstructions. The research also proposes a new training scheme that separates reconstruction from auxiliary objectives, leading to significant enhancements in both model accuracy and predictor network efficiency compared to existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method presented in the paper targets a trade-off between accuracy and compression rate. Can the advantages gained over the baseline pre-trained weights, as demonstrated in Table 2, generalize to a broader range of downstream tasks?\n2. Will the progressive enhancement of the teacher network lead to corresponding progressive improvements in performance?\n3. 
Can these advantages extend beyond CNN-based architectures, such as to the pre-training of Vision Transformers (ViT) or hybrid architectures combining ViT and CNN?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As mentioned in the weaknesses section, I think the points for the rebuttal are:\n1. Improve the motivation and context in the introduction.\n2. Show an example of the method generalizing to ViTs (if indeed there are no underlying assumptions preventing the method from generalizing).\n3. Adding the relevant citations and other small issues.\n\nAn additional question I had is whether the iterative refinement also works for smaller INRs (e.g., 280).\n\n___ \n\nIn summary, the paper addresses a less-explored aspect of weight-space learning, offering improvements over existing methods with a simpler approach. Since the motivation and presentation could need improving (and if possible, the generalization), I currently assign the paper a score of 5. I will consider increasing my score if the authors satisfactorily address my concerns." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper addresses a potentially interesting task within the emerging field of weight-space learning, where neural networks are treated as data points to train other networks. While it may not yet compete with state-of-the-art quantization methods, the paper explores a promising direction that could inspire future advancements. The analysis and motivating experiments are comprehensive and intuitive, and the experiments are thoughtfully designed, demonstrating tangible improvements while simplifying the method relative to the baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the task of learning an implicit neural representation (INR) of a trained neural network’s weights. When the INR is smaller than the original model, it offers a potential approach to model compression. The authors conduct an in-depth analysis of both a naive baseline and the current state-of-the-art method (NeRN), yielding key insights: (i) increasing the INR size enhances the performance of the reconstructed model, (ii) iterative training of INRs (on the previous INR) can sometimes exceed the original model’s performance, and (iii) NeRN, despite having three objectives, is primarily driven by the reconstruction objective. These findings inform the proposed method, which separates the reconstruction and distillation phases. The approach introduces one or more reconstruction-only stages, followed by a distillation phase (focusing solely on logits, not features), enabling knowledge transfer from potentially stronger teachers. Extensive experiments validate the approach’s effectiveness." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper’s primary weaknesses are in presentation, particularly in the introduction, where motivation and context are insufficiently developed, and in its focus on ResNets alone. Both issues are potentially addressable in the rebuttal, as detailed below.\n\n**Presentation**\n\nThe paper lacks a compelling motivation for learning implicit representations of neural networks. I believe the introduction should clearly explain why this is an interesting and valuable topic (I think this should be done even if the main reasons are purely academic and not commercial). Additionally, it presumes familiarity with NeRN, which may make it difficult for readers to follow without proper context. For instance, lines 44-46 discuss the contradictions in the multi-objective loss, yet the paper does not explain that multi-objective losses represent the current state-of-the-art or clarify what these objectives entail. Consequently, the value and relevance of the proposed method in decoupling objectives are unclear. Similarly, terms like “compression efficiency” (line 49) are introduced without context, leaving readers uncertain about their meaning within this work. Defining the motivation and the specific context of the compression task would make these points much more accessible.\n\n**Generalizing to Other Architectures**\n\nWhile the paper briefly mentions that the method is only applicable to CNNs, it is unclear why this limitation exists. Are there underlying assumptions preventing the method from generalizing to other architectures like ViTs? If there are no such constraints, presenting results on a ViT model would benefit the paper. 
Although the work remains relevant if limited to CNNs, its scope and applicability would be reduced if it cannot generalize to architectures beyond ResNets.\n\n___\n**Smaller issues**\n\nIn addition to the above primary weaknesses, below I describe a few additional issues and limitations, these are mostly smaller issues that do not carry a large weight in my decision but should nevertheless be addressed:\n\n1. While the paper focuses on implicit representations for neural networks, there is a growing body of research for learning semantic representations of neural networks and performing other tasks on weights of neural networks, the paper did not cite any of these works, which I think should be done. See [1-12] for a few such examples.\n2. The term “inception like” is written a few times (e.g. line 44), what does this mean?\n3. A small mismatch in notation, in Eq. 1, in the FMD definition you use $a^{l}$ while in line 92 you use $a^{\\ell}$.\n4. In Fig. 1, maybe show by how much the performance is improved (in the zoomed in part).\n5. Line 252 references Eq. 2, but the actual equation is unnumbered.\n6. Lines 264-266 are not very clear, and if they are important enough to be bold they should probably be rewritten. It took me a few passes to understand them.\n6. Just making sure, in 3.3, you first perform iterative refinement and then distill? The figure looks like they are simultaneously done and not sequentially.\n7. In Fig. 5, the 3.3 part, both the arrows are red, shouldn’t one be red and one blue? \n9. 
I would replace the term “baseline” in the tables with “NeRN” so that a reader can clearly understand what baseline you are using (and to give NeRN the credit it deserves).\n\n____\n\n[1] Predicting neural network accuracy from weights, 2020, Unterthiner et al.\n\n[2] Towards Scalable and Versatile Weight Space Learning, 2024, Schurholt et al.\n\n[3] Self-supervised representation learning on neural network weights for model characteristic prediction, 2021, Schurholt et al.\n\n[4] Hyper-representations as generative models: Sampling unseen neural network weights, 2022, Schurholt et al.\n\n[5] Learning Useful Representations of Recurrent Neural Network Weight Matrices, 2024, Herrmann et al.\n\n[6] Learning to learn with generative models of neural network checkpoints, 2022, Peebles et al.\n\n[7] Graph metanetworks for processing diverse neural architectures, 2023, Lim et al.\n\n[8] Equivariant deep weight space alignment, 2023, Navon et al.\n\n[9] Equivariant architectures for learning in deep weight spaces, 2023, Navon et al.\n\n[10] Graph neural networks for learning equivariant representations of neural networks, 2024, Kofinas et al.\n\n[11] Neural functional transformers, 2024, Zhou et al.\n\n[12] Permutation equivariant neural functionals, 2024, Zhou et al." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The method compresses the model storage-wise only if one saves the implicit representation model alone. Does this mean that in order to use the compressed model one would have to thoroughly reconstruct every weight? If so, this means the method wont save RAM space at all, and even worse will require much more inference time. Could the authors show how many FLOPs it takes to rebuild a model compared to a standard inference of it?\n- Did the authors try to check if smoothing other layer types (e.g. Fully-connected layers) also works when training an implicit representation for them? While simple smoothing strategies might be ineffective due to the permutations of neurons in neural networks [1,2], training an INR on them could still perform some form of smoothing.\n- This is a bit out of scope for this work, but did the authors try to train a model from scratch with some smoothing objective on the weights? Did it improve the results?\n\n[1] Navon, Aviv, et al. \"Equivariant architectures for learning in deep weight spaces.\" International Conference on Machine Learning. PMLR, 2023.\n\n[2] Kofinas, Miltiadis, et al. \"Graph neural networks for learning equivariant representations of neural networks.\" arXiv preprint arXiv:2403.12143 (2024)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The paper is well written and easy to follow.\n- Despite the relatively minor technical change from the baseline, the new approach is significantly better at model compression without losing performance. \n- The analysis performed on model weights smoothness is interesting (Sec. 
3.1 and App. A)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an analysis and improved method for representing the weights of neural networks by implicit neural representations (INRs). I.e., It aims to compress model weights (or very slightly improve model performance), while keeping the same accuracy. The authors first analyze a reconstruction of model weights by MSE, showing that it enforces some form of smoothing on the weights. Additionally, they claim that with a big enough INR they are able to slightly improve the performance over the original model. They then propose a new method for model compression by INRs that decouples the distillation and reconstruction objectives into 2 separate stages, leading to better results. Lastly, the authors show that by distilling the model using a larger and more capable backbone can improve the compressed model performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Claim on Accuracy Improvement:** My main concern is that the paper's main claim is not well supported. The authors argue that their method enhances model performance compared to the original uncompressed model through weights smoothing. While this sounds promising and the presented analysis is interesting, in practice, the results reveal that the improvement from smoothness is minimal and almost negligible. The performance gain is limited to small-scale benchmarks like CIFAR-100 and STL-10, where the models’ accuracy increases by only 0.1-0.6%. Furthermore, when tested on ImageNet, the trend reverses, with all experiments performing worse than the original model. I believe that as is, this claim is misleading given these results.\n- **Missing Baseline:** The limitation on data usage presented in lines 82-84 about the NeRN baseline, are also true for the distillation loss used in the proposed method. 
I.e., it becomes impractical for model compression if data is not available. In that case, a knowledge distillation to a smaller architecture should also be considered as an additional baseline, as it has the same goal: compressing the model with minimal performance drop.\n- **High-Performing Teacher:** I believe the experiment done with a higher-performing teacher is a bit unfair for the scenario. It seems as if one could perform some knowledge distillation from the stronger teacher before the compression or just compress the stronger teacher in the first place.\n- **Combining with Other Compression Approaches:** While the authors claim their method is orthogonal to other compression types, Tab. 5 shows otherwise. Specifically, quantization of the compressed model heavily degrades performance (in CIFAR-100 it decreases from 70.84% to 51.72%).\n- **Figure 1(a):** Figure 1(a) is confusing as it only presents an expected trend, not real results. I think this part of the figure should be simply removed from the paper.\n- **Intuition in Smoothness Analysis:** While the analysis on weight smoothness presents cases where it slightly improves performance, it does not explain why this happens. I.e., why smoother weights might be better. I can conjecture it is related to the memorization of training examples (overfitting), which can be represented in higher frequencies. If this is the case, an explicit measurement of overfitting (generalization gap) with and without weights smoothing could greatly benefit this analysis. 
\n\nMinor Remarks which did not affect the grading:\n- Most of the citations should probably be in parentheses and not in line (as all of the paper citations are).\n- In-line citation at L64 is strange (reference seems to be in the wrong place).\n- L251: reference to equation has a wrong number.\n- L264-266: These lines should probably be revised to a more accurate version as some terms are unclear (e.g., in which layer does the decision making start and feature extraction end?)\n- Tab. 1: All the first lines are in bold. It's a bit confusing for which result to focus on. Should probably choose the best result as in the other lines of the table.\n\nI am open to reconsidering my score given a revised manuscript which addresses the concerns I mentioned." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We enhance the accuracy and efficiency of neural representations that predict neural network weights" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Accuracy and Parameter Efficiency of Neural Representations for Network Parameterization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0mo2yqOS6Z},\nnote={under review}\n}" }, "abstract": { "value": "In this work, we investigate the fundamental trade-off regarding accuracy and parameter efficiency in neural network weight parameterization using predictor networks. We present a surprising finding where the predicted model not only matches but also surpasses the original model's performance through the reconstruction objective (MSE loss) alone. Remarkably, this improvement can be compounded incrementally over multiple rounds of reconstruction.
Moreover, we extensively explore the underlying factors for improving weight reconstruction under parameter-efficiency constraints and propose a novel training scheme that decouples the reconstruction objective from auxiliary objectives such as knowledge distillation that leads to significant improvements compared to state-of-the-art approaches. Finally, these results pave the way for more practical scenarios, where one needs to achieve improvements in both model accuracy and predictor network parameter-efficiency simultaneously." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Implicit Neural Representations", "Parameter Generation", "Network Prediction", "Distillation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e5b84a18882f129c79c7d7db271ce441f5c20dfe.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Enhancing Accuracy and Parameter Efficiency of Neural Representations for Network Parameterization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0mtz0pet1z
Incremental Causal Effect for Time to Treatment Initialization
main
Active
Causal Inference;Positivity;Incremental intervention;Incremental Causal Effect;Inverse probability weighting
causal reasoning
3;6;6;6
3;4;3;2
3;2;3;3
2;2;2;3
1;3;3;3
5.25
3
2.75
2.25
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Can you explain which pieces of this work are novel/provide a contributions list? From the introduction, it looks like the move to continuous time to treatment is new. However, the abstract makes it sound like avoiding the positivity assumption is also novel, while the paper makes it seem like that was something that was already shown as a property of incremental causal effects.\n\nI'm not used to seeing L chosen as the variable for covariates/potential confounders. (I've seen W, X, V, C....) It's not a problem, but is the choice of L based on any particular subset of the literature?\n\nAs per my confusion in the Weaknesses section, can you clarify what you mean by an incremental intervention being one that is \"a function of the observed treatment distribution\"?\n\nIn the second paragraph of the introduction, you state that, in a static intervention, \"the subjects are either all treated or all untreated\", in contrast to a dynamic intervention, where treatment could depend on covariates. This makes it sound like the entire population is either all treated, or all not treated. 
This is a valid scenario to consider (e.g., a state government policy that then affects everyone living in that state), but you describe static interventions as being \"typically the case when considering an average treatment effect (ATE)\", in which case, you typically need examples of both treated and untreated subjects. I would assume that you meant \"each subject is either treated or untreated, assigned independently of their covariates\", but that's not particularly close to what you wrote, so I must be misunderstanding something. Actually, reading the abstract of Bonvini et al (2021), they describe ATE as relating to \"the effect of everyone deterministically receiving versus not receiving treatment\" - as in, the counterfactual question. Is that what you're referring to here?\n\nEspecially in medical examples (such as the MTX arthritis example), individual covariates can change over time. Does your model take into account that an individual's covariates L could change over the timesteps before they get treated, which could in turn affect the probability of treatment?\n\nI'm not following the explanation at the beginning of Related Work about how incremental intervention avoids making the positivity assumption. Summarizing Kennedy (2019), you say that, for subjects with 0 or 1 probability of treatment, we can see positivity as always satisfied \"because perturbing the odds does not change their degenerate probabilities\". How does that follow?\n\nIn the line before Theorem 1, it says \"We prove that ([1]) can be identified\". What is ([1]) referring to here? Are you referring to Theorem 1? Assumption 1? Equation 1?\n\nAre there any baselines you can use for comparison in the experimental results? Some other effect estimation method, or at the very least some naive baseline that could provide some calibration for the experimental results?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "While there are some clarity issues with the narrative of the paper, the individual sections and descriptions in the paper are clear and easily readable. The literature review on incremental causal effects and time to treatment is very thorough, situating the paper nicely in the literature. The examples chosen, especially the rheumatoid arthritis example, are strong, and the analysis of the rheumatoid arthritis experiment (the reasoning about doubling the hazard decreasing joint pain) is especially compelling and highlights very nicely how this method could be used in practice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In many settings, whether or not a subject receives a treatment at any given time point may be a function of their covariates. In these settings, we can reason about the time to treatment from when an individual becomes treatment-eligible to when they actually receive treatment. Such a model can allow us to reason about how changes to covariates can affect time to treatment and, through treatment, some relevant outcome. This model has the benefit of not requiring the positivity assumption. The authors define this model in terms of hazard functions and provide an estimator. They then assess their model on both synthetic and empirical data and demonstrate how it can be used to inform policy/medical practice decisions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The biggest weakness of this paper in my eyes is that the problem being solved is not clearly defined. 
Some of this seems to be due to language issues (while occasional grammatical issues or awkward phrases don't generally impede understanding, there are a few parts where the intended meaning isn't clear), and some of this is due to the lack of a clear motivating example. Specifically:\n\n- In the introduction, the authors define an incremental intervention as an intervention \"that is not pre-specified, but rather a function of the observed treatment distribution.\" Without defining what is meant by \"the observed treatment distribution\", I assume that it means which units in the sample population received treatment and which didn't (P(T)), or maybe the conditional probability distribution in the sample population (P(T|L)). I'm having a hard time understanding what this means, even after having read the whole paper. The example in this section, as well as the MTX use case in the experiments, seems to deal with an intervention on treatment (such as being assigned to a behavioral health program or being prescribed MTX) that is, presumably, informed based on the individual's covariate values. So this is an intervention that is a function of the observed covariates, not \"of the observed treatment distribution.\" Am I misunderstanding something about your approach here?\n\nThe organization of the introduction seems a bit backwards to me. The example in the third paragraph (probationers being assigned to behavioral health services) is a great motivating example for the general \"time to treatment initialization\" problem setting. The first paragraph contains two good examples, but the narrative about \"time to treatment\" is not made clear. For example, the first 4 sentences of the paper talk about a tech team struggling to keep up with review requests and ask the reader to consider the effect of doubling the number of reviewers on the processing time of requests. 
Coming into this paper with causality in mind, this sounds an awful lot like reasoning about intervening on a treatment (the number of reviewers) and measuring the effect on an outcome (the processing time). However, as becomes clear later in the paper, the \"processing time\" is not, in fact, the outcome, but the time until treatment. And if that's time until treatment, then I suppose treatment = somebody reviewing a request. But then I'm not sure what the outcome is....backlog size??\n\nIf you want to open with that example, you should start by clearly explaining how it maps to your problem. An example flow, assuming I'm understanding the problem correctly: \"We're interested in understanding how long it takes people on a tech team to respond to review requests. The time until review is not static, but depends on many features, such as how many reviewers the help desk has at that time. \n After a system outage, the number of requests has increased, creating a large backlog. The scheduler wants to decrease the size of this backlog, which they plan to do by decreasing the time until review. The number of reviewers has a large effect on the time until review, so the scheduler decides to double the number of reviewers, which then doubles the likelihood of a request being reviewed at any given time, as requests are often selected for review at random. This process - reasoning about percentage changes to covariates to determine their effect on time to treat and, thus, the effect of treatment - is called \"incremental causal effects\".\"\n\nThe second paragraph of the introduction is also oddly placed. Digging into the different types of interventions is interesting, and the distinctions brought up are quite relevant, but without having a clear problem statement yet, it's unclear how to fit your proposed method into that framework. 
I think you're missing a paragraph in the introduction where you define (not technically, but in straightforward language) your problem statement. (i.e., the time until treatment for each subject is based on some measured covariates; the outcome is an effect of that treatment and starts to be recorded as soon as treatment is applied to that subject; we want to reason about how changes in the probability of treatment function affect outcome).\n\nSection 1.1 is focused on, and named after, the positivity assumption, and highlights bypassing the positivity assumption as a key advantage of incremental causal effects. However, this section, from what I can tell, never actually explains how it bypasses positivity. (Also, the phrase \"avoids the positivity\" is weird - reword that) It's only the introduction, so I don't expect an in-depth explanation yet, but given that it's a whole section in the intro about positivity, at least a sentence giving an intuition about why we can ignore positivity would help. Following on from that about positivity, it looks like it's addressed in the first paragraph of the Related Work section. However, the explanation in the related work is not very clear or detailed (and again, especially given how prominently positivity was just highlighted in the introduction, I expected a deeper/clearer explanation).\n\nSome terminology explanation is missing. Line 171 defines $\\lambda(t|l)$ and $\\Lambda(t|l)$ as just \"its hazard function and cumulative hazard function at time t given L = l, respectively.\" I assume \"its\" here refers to T. However, from what I can see, you never actually define either hazard function, despite them being fairly core to your method.\n\nI like the setups chosen for both the synthetic and empirical experiments, but the lack of a baseline makes interpreting the results near-impossible. 
For example, in the simulation results, you say that your results illustrate that the incremental causal effects \"perform well with small biases.\" I'm struggling to see how you came to that conclusion from Table 1 alone. Looking at the numbers in the \"Bias\" row, they look low, but are they actually low for that problem? Did you use additional visualizations to come to the conclusion that these numbers represent good performance?\n\nBetween the clarity issues throughout and the difficulties in interpreting the experimental results, I don't feel comfortable voting for acceptance. If these issues are adequately addressed, though, I'm open to increasing my score." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1- How is it possible to analyze the estimator with finite sample data? For example, is there a high-probability guarantee for it?\n\nI am not completely familiar with the area covered in the paper, and I’m uncertain about its contribution to the field. I may revise my score after considering the feedback from other reviewers." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1- The paper addresses an important problem and proposes an algorithm to solve it.\n\n2- The proposed approach has been analyzed both theoretically and empirically." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper extends INCREMENTAL CAUSAL EFFECT to continuous-time treatment. To this end, the author shows that the target quantity is identifiable under certain assumptions, excluding the well-known positivity assumption. An estimator is then proposed that is consistent. The effectiveness of the estimator has been validated through empirical experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1- Some definitions in Subsection 3.1, such as the hazard function and related concepts, are not clear to the reader. It would be beneficial to provide more detail, as there is still enough space available." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. If there is unmeasured confounding, is it natural to apply sensitivity analysis or proximal causal inference in this setting? How might these approaches integrate with your proposed IPW estimator?\n2. 
Are the regularity conditions outlined in Theorem 2 and Theorem 3 considered trivial or standard in common survival models?\n3. Is there any pattern in the estimator performance (bias, variance, or stability) as $\\theta$ changes?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper extends incremental causal effects, which do not rely on the traditional positivity assumption, to a new setting. This advancement allows for new approaches to studying time-to-treatment problems in fields such as public health and policy-making.\n2. Theoretical guarantees are provided.\n3. The presentation and flow of this paper are clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies a novel setting in causal inference: the incremental causal effect of intervening on the continuous time to treatment initiation by shifting the hazard function. It introduces an IPW estimator with proofs of consistency and asymptotic normality. It is also validated through empirical simulations and a real-world study." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Additional experiments could provide deeper insights into the behavior and robustness of the proposed approach under different scenarios. For example, exploring different shift interventions, hazard functions, and comparative analysis against any alternative estimators.\n\nMinor comments:\n\n2. The clarity of Theorems 2 and 3 could be improved by stating all conditions and notations explicitly.\n3. In the simulation, it could be made clearer what the true effect is and whether the outcome is censored in the DGP.\n4. Typos: in line 59, line 340, and Theorem 3 proof." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses.\n\n1. Line 65: The paper states, \"In general, the incremental causal effect has the interpretation of a policy effect on the population, instead of the therapeutic effect on an individual.\" This reminds me of the field of reinforcement learning (RL), which is specifically designed for learning policy rewards. Could there be potential benefits in using RL to learn incremental causal effects?\n2. Could you provide a concrete example to illustrate the difference between \"time to treatment initiation\" and \"continuous time to initiating treatment\"? These terms appear several times in the paper, but their distinction remains unclear to me. Since this paper focuses on the continuous version, clarifying this difference could help motivate the choice.\n3. Line 340: typo 25$ should be 25%." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper addresses a gap in causal inference by extending incremental causal effects to continuous time-to-treatment initiation, an area that has not been extensively studied before.\n2. 
The authors provide consistency and asymptotic normality theorems for their estimand, adding theoretical rigor to their approach.\n3. The paper is well-written and easy to follow. I particularly like the related work section which offers a comprehensive background that is beneficial for readers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel methodology for estimating incremental causal effects in continuous-time settings, with a specific focus on time-to-treatment initiation. This approach intervenes on the intensity (hazard function) of treatment initiation without relying on the positivity assumption. By shifting the hazard function through a multiplicative factor theta, the authors develop an identification strategy using inverse probability weighting. Theoretical justification, along with both synthetic and real-world experiments, is provided." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A major strength claimed by the authors is that the new estimand avoids the positivity assumption. However, as noted in related work (line 125), \"Kennedy (2019) proposed an incremental intervention that fully resolved the positivity issue.\" Since this paper also focuses on incremental causal effects, this aspect of the contribution may lack novelty. \n2. In the synthetic experiment, only a single feature L is used, following a simple uniform distribution. The experiment would be more robust if it included multiple features and more common distributions, such as Gaussian. Additionally, the experiment would benefit from comparisons with other baseline methods, as the paper currently presents only the performance of their model without comparative analysis.\n3. In Section 4.2, the authors mention that the decreasing trend in the average number of tenders aligns with findings from a 2002 paper. 
While it is always impossible to know the true causal evidence in real-world cases, a quantitative comparison of the estimated causal effects with results from other studies would strengthen the paper beyond trend consistency alone." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel causal estimand for studies with continuous time to initialize treatment." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024incremental,\ntitle={Incremental Causal Effect for Time to Treatment Initialization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0mtz0pet1z},\nnote={under review}\n}" }, "abstract": { "value": "We consider time to treatment initialization. This can commonly occur in preventive medicine, such as disease screening and vaccination; it can also occur with non-fatal health conditions such as HIV infection without the onset of AIDS; or in tech industry where items wait to be reviewed manually for spam or abusive contents, etc. While traditional causal inference focused on `when to treat' and its effects, including their possible dependence on subject characteristics, we consider the incremental causal effect when the intensity of time to treatment initialization is intervened upon. We provide identification of the incremental causal effect without the commonly required positivity assumption, as well as an estimation framework using inverse probability weighting. We illustrate our approach via simulation, and apply it to a rheumatoid arthritis study to evaluate the incremental effect of time to start methotrexate on joint pain." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Causal Inference", "Positivity", "Incremental intervention", "Incremental Causal Effect", "Inverse probability weighting" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4f6d7f10ed23132fbad3e28d741ec761b906b202.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Incremental Causal Effect for Time to Treatment Initialization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0n4bS0R5MM
VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control
main
Active
video generation;3d;diffusion
generative models
5;6;6;8;8
5;4;5;3;4
2;3;3;3;3
2;3;2;3;2
2;3;3;3;3
6.6
4.2
2.8
2.4
2.8
-0.801784
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Have there been additional relevant experiments regarding camera trajectories, such as comparisons of control quality for generated trajectories of varying complexity?\n\nAt this time, I have no further questions." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Camera control during the video generation process is a significant issue. As more foundational models adopt transformer architectures, exploring control mechanisms for these models becomes crucial. This paper is the first to investigate how to better utilize camera trajectory parameters for transformer-based video generation models, using SnapVideo as the foundational model. The design is well thought out, and the evaluation is rigorous. The strengths of the paper are as follows:\n\n- Unlike the spatiotemporal decoupling generation of U-Net structures, transformer-based video generation considers spatiotemporal video tokens globally, which means it cannot directly leverage the advantages of spatiotemporal decoupling. This paper overcomes this limitation by being the first to explore control specifically for spatiotemporal transformers. 
This shift in foundational model structure is critical and provides a solid engineering foundation for future work.\n \n- The authors use Plücker embeddings to convert camera intrinsic and extrinsic parameters into pixel-level controls, which match the shape of video tokens. This information is then introduced through read cross-attention layers. While this approach is a straightforward combination of existing methods, it has been validated as effective for transformer-based video generation models, providing valuable experimental insights.\n \n- The paper includes comprehensive evaluations and ablation studies, conducting both qualitative and quantitative experiments regarding video content quality and camera control, with well-defined criteria. The evaluation of baseline models is fair, making the transition to the new foundational model structure more convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a camera control method for transformer-based video generation models that enhances control while ensuring visual quality. The proposed approach aligns the video perspective with predefined camera trajectories, improving controllability. The authors claim this is the first study to employ ControlNet-like guidance for global spatiotemporal transformer video generation models, in contrast to the more commonly used U-Net architecture. Moreover, the evaluation demonstrates that both the video quality and adherence to the input camera trajectories are state-of-the-art." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While camera control is the central problem addressed in this paper, the camera trajectories used primarily come from the RealEstate10K dataset, which, as observed in the visual results, mostly follow smooth, straight lines. 
There is a lack of consideration and experimentation with trajectories of varying difficulty, such as those involving significant directional changes. This raises some questions regarding the trajectory settings.\n \n- There have been several prior works in the 3D multi-view generation field that focus on similar camera control issues, such as the referenced *Cat3D*, which also employed Plücker embeddings and attention calculations. The distinction between spatiotemporal decoupling and other network characteristics is a design feature intrinsic to the architecture. In exploring DiT-based generation, there have also been multiple studies investigating spatiotemporal decoupling, such as *Latte: Latent Diffusion Transformer for Video Generation*. Therefore, the novelty of this work lies more in applying existing designs to spatiotemporal transformers rather than presenting a technological innovation. However, the state-of-the-art results under the new configuration indeed serve as an important engineering reference for future directions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Though the authors are transparent on the fine-tuning data (RealEstate10K), the pre-training data for this proposed framework is unknown, potentially containing copyrighted data. It may also be contaminated with the test sets of the RealEstate10K data or MSR-VTT data, making the reported results in Tab. 2 concerning.\n\nTo ensure reproducibility as strongly recommended in the ICLR author guide, the authors are encouraged to adapt the proposed framework to publicly available pre-trained models." 
}, "flag_for_ethics_review": { "value": [ "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Can the authors please comment on the above-mentioned weaknesses?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed controlnet design outperforms other model variants designed by the authors. The evaluations are thoroughly conducted for the design choices. Detailed ablations are provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The method proposes a controlnet-like architecture for a private video diffusion model by including plucker coordinates as camera control." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed framework overfits on the trajectories that are seen during training. Though the authors provide quantitative comparisons in Tab. 8, no visual comparisons are provided. \n- Though the performance is impressive, the technical contribution is limited in the proposed framework. Training a ControlNet for diffusion transformer is not new, as shown in [1]. Using Plucker coordinates for camera control is not new, as shown in CameraCtrl (He et al., 2024a).\n\n[1] Chen J, Wu Y, Luo S, et al. Pixart-{\\delta}: Fast and controllable image generation with latent consistency models[J]. arXiv preprint arXiv:2401.05252, 2024." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Overall I think this paper provides an effective solution to amend video diffusion transformers with camera control. See weaknesses for questions and discussion." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- paper is easy to follow\n- the proposed design including the Plücker embedding is reasonable and effective\n- comprehensive experiments are conducted and presented in the main manuscript and appendix\n- supplemental materials contain video samples to demonstrate the effectiveness" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides a method for adding the control of camera movements to video diffusion transformers. The core idea is to represent camera conditions as pixel-level ray condition with Plucker embeddings. A ControlNet inspired module is used to process the camera condition. The model is finetuned on RealEstate10k and is compared to two similar methods both qualitatively and quantitatively." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed method has been evaluated only on one video diffusion transformer, which raises some concerns on whether its performance can generalize to other pretrained video diffusion transformers.\n- I'm curious about the distribution of camera movements evaluated in the experiments, in terms of its diversity and similarity to natural camera movements.\n- The novelty is slightly limited, as the task is not new, and ControlNet-like module as well as Plücker embedding have been explored and used before.\n\nminor:\n- CameraCtrl was appeared on arXiv in April 2024, which should not be considered as a concurrent work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Nothing crucial." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents a nice idea on to enable spatio-temporal transformer-based video diffusion models with camera control. It is the first to do so by using a ControlNet-like block in combination with Plucker coordinates, and presents a nice contribution. I value the ablation studies in the paper and the reasoning as to how they arrived at this particular architecture. 
The paper is quite well-written overall. It's easy enough to follow and appreciate the contribution." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a first method to condition spatio-temporal transformer-based video diffusion models on camera poses (previous methods focused on pre-trained U-Net-based diffusion models). To this end, they propose a ControlNet-like block that conditions the model on camera embeddings that are based on Plucker coordinates. The paper evaluates the choices made and demonstrates good results, both qualitatively and quantitatively." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I'm positive overall, there are a few weaknesses. \n\nA criticism that could be made is that this paper is a bit of an A+B paper, where A is the ControlNet block and B is the Plucker coordinates. I don't think that's a useful criticism, because at the end of the day it's a sensible thing to do and the authors demonstrate that you need to do a few things to make it work (Table 2).\n\nThe paper reads well overall; however, I'm not a fan of Figure 3. It's hard to interpret what goes where, and how this relates to the formulas. I'd suggest clearly delineating which are the video patch tokens vs. the Plucker patch tokens, maybe adding the variables from the formulas to the appropriate boxes, and overall structuring the figure so that the separate blocks are clearly separate.\n\nFinally, in terms of results, I would have liked to see more examples of the same scenes with different camera control (there are only three examples). Furthermore, most examples use input camera trajectories from scenes that are completely unrelated to the target scene. It'd be nice to see some that are related -- it makes it easier to judge if the generated camera path is good or not." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Line 272, R_1, t_1 can be interpreted as the original extrinsic or the extrinsic after normalization. It would be better to use R'_1, and t'_1 to represent the extrinsic after normalization;\n\n2. I am not quite clear about the baseline implementation details. From my understanding, for MotionCtrl, one additional learned token is concatenated to the video patch tokens, while for CameraCtrl, the camera encoder produces additional latent tokens. Is that correct?\n\n3. While previous works sacrifice motion dynamics when training on RealEstate10K (mostly static scenes), the video in the supplementary material exhibits better and larger motions. What would be the possible reasons for the difference?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors made an important step forward in camera control for the Transformer-based video diffusion model, which is unexplored by previous research;\n2. The visual quality and motion dynamics of the generated videos are excellent;\n3. The methodology is clearly explained and easy to understand." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel approach to incorporating camera movement control into Transformer-based video diffusion models. While camera control has been extensively studied in UNet-based video diffusion models, this area remains largely uncharted for Transformer architectures. The author introduces an additional ControlNet-like network to inject tokenized Plücker embeddings, demonstrating that this design enhances both visual quality and controllability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main components of the method have been validated by previous works, for instance, the Plucker embedding as camera representation (CameraCtrl), and training on RealEstate10K(MotionCtrl, CameraCtrl);\n\n2. As mentioned in 1., I would say the specified designs for Transformer architecture are the major contribution of the paper. However, SnapVideo's architecture is not a typical DiT (it has the \"read\" operation, and the attention is not performed on the actual tokens). It's not clear how to extend the proposed method for a standard video DiT and how the performance would be.\n\n3. For the MotionCtrl baseline, the visual quality degrades when the base model is fine-tuned. Would it be better to freeze the backbone?\n\n4. Would be better to also provide the trainable parameter scale of the two baselines and the proposed method." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "3D camera control for video diffusion transformers" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024vdd,\ntitle={{VD}3D: Taming Large Video Diffusion Transformers for 3D Camera Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0n4bS0R5MM},\nnote={under review}\n}" }, "abstract": { "value": "Modern text-to-video synthesis models demonstrate coherent, photorealistic generation of complex videos from a text description. However, most existing models lack fine-grained control over camera movement, which is critical for downstream applications related to content creation, visual effects, and 3D vision. Recently, new methods demonstrate the ability to generate videos with controllable camera poses---these techniques leverage pre-trained U-Net-based diffusion models that explicitly disentangle spatial and temporal generation. Still, no existing approach enables camera control for new, transformer-based video diffusion models that process spatial and temporal information jointly. Here, we propose to tame video transformers for 3D camera control using a ControlNet-like conditioning mechanism that incorporates spatiotemporal camera embeddings based on Plucker coordinates. The approach demonstrates state-of-the-art performance for controllable video generation after fine-tuning on the RealEstate10K dataset. To the best of our knowledge, our work is the first to enable camera control for transformer-based video diffusion models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "video generation", "3d", "diffusion" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ad8e388d6ab59a223b9672e022e136508579b04b.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ca5191215b0292e5d7d20c377a11b2ac79cee1dd.zip" }, "title": { "value": "VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0nJEgNpb4l
PEAR: Primitive Enabled Adaptive Relabeling for Boosting Hierarchical Reinforcement Learning
main
Active
Hierarchical reinforcement learning;Learning from demonstrations
reinforcement learning
5;5;5;8
4;4;3;4
3;2;3;3
2;2;2;3
2;3;2;4
5.75
3.75
2.75
2.25
2.75
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How do the authors justify the reliance on expert demonstrations in the first phase of PEAR compared to HRL methods that function without such requirements?\n- Can the authors provide additional comparisons with recent HRL approaches to strengthen the positioning of PEAR within the current landscape?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Innovative Approach:** The use of adaptive relabeling to generate subgoals tailored to the capabilities of the lower primitive is a significant contribution that addresses the non-stationarity issue in HRL.\n2. **Theoretical Justification:** The authors provide theoretical analysis that bounds the sub-optimality of their approach, lending credibility to their claims.\n3. **Comprehensive Experiments:** Extensive experiments across multiple challenging tasks demonstrate the practical efficacy of PEAR, showing improved performance and sample efficiency over existing methods.\n4. **Real-World Application:** The validation of PEAR in real-world tasks enhances the relevance and applicability of the research." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel approach called Primitive Enabled Adaptive Relabeling (PEAR) aimed at enhancing Hierarchical Reinforcement Learning (HRL) for complex long-horizon tasks. The authors propose a two-phase methodology where the first phase involves adaptive relabeling of expert demonstrations to generate subgoals, followed by joint optimization of HRL agents through Reinforcement Learning (RL) and Imitation Learning (IL). The results indicate that PEAR outperforms various baselines in both synthetic and real-world robotic tasks, achieving up to 80% success rates in challenging environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Expert Demonstrations Requirement:** The first phase of PEAR relies on expert demonstrations to generate subgoals, which raises concerns about the fairness of comparison with other HRL methods that do not require such demonstrations. This could affect the generalizability of the findings.\n2. **Lack of Recent Comparisons:** The paper does not include comparisons with several hierarchical reinforcement learning methods published in the last three years. This omission limits the contextual relevance of the results and could misrepresent the state of the art, such as [1], [2].\n\n[1] Kim J, Seo Y, Shin J. Landmark-guided subgoal generation in hierarchical reinforcement learning[J]. Advances in neural information processing systems, 2021, 34: 28336-28349.\n\n[2] Wang, Vivienne Huiling, et al. \"State-conditioned adversarial subgoal generation.\" Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 8. 2023.\n\n3. **Citation Issues:** Many citations are sourced from CoRR instead of their formal conference versions. This oversight should be corrected to ensure academic integrity and accurate referencing. 
For example, the last two references should be cited as follows (Note that there are many other citation errors beyond these two):\n\nWulfmeier, Markus, et al. \"Data-efficient hindsight off-policy option learning.\" International Conference on Machine Learning. PMLR, 2021.\n\nZhang, Tianren, et al. \"Generating adjacency-constrained subgoals in hierarchical reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 21579-21590." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why is the Q_{Thresh} set to 0? If the low-level reward is -1 for not achieving the goal and 0 for achieving the goal, the Q value should be negative for any correct action that puts the agent on a path to achieve the goal. \n2. Is the high level policy only trained with a dataset containing expert trajectories? Or is it also trained on its own interaction data? If it is trained on its own interaction data, is there any relabeling done on that?\n3. I did not understand the “margin” component of the objective. Can you provide another explanation of this component?\n4. Why does the paper characterize the number of demonstrations as a “handful” of demonstrations, when it uses 100 for most tasks?\n5. 
Have you experimented on any domains with image observations?\n\nI am willing to raise my score if the authors can (i) provide some principled reasons why PEAR should outperform HAC+demonstrations and hierarchical behavior cloning and (ii) provide some answers to the above questions." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The writing was clear and easy to understand.\n- The authors included several ablation experiments that provided some important insights.\n- The paper included some real world robotic experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce an algorithm, Primitive Enabled Adaptive Relabeling (PEAR), to address the issue of non-stationary transition and reward functions when implementing HRL. Like Relay Policy Learning (RPL) (Gupta 2019), PEAR encourages the high level policy to output feasible subgoals using imitation learning. The main difference from RPL is that instead of imitating the actions from a dataset that occur at fixed intervals, PEAR uses a heuristic to select the latest subgoal that the low level policy can achieve. The authors show in a variety of experiments that PEAR can outperform several baselines.\n\nGupta et al. Relay Policy Learning. 2019" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concern with this approach is that the contribution seems to be marginal as I do not see a compelling reason why PEAR would consistently perform better than (a) HAC (Levy 2019) with a replay buffer augmented with demonstration data and (b) hierarchical behavior cloning, in which pure imitation learning is applied to both high- and low-level policies. 
\n\nHAC already addresses the problem of nonstationarity through relabeling and subgoal testing. The problem with HAC is that it does not have a built-in mechanism to explore, but this can be remedied with the demonstration data that is provided to PEAR. An advantage of HAC + demonstration data over PEAR, which would use a pure RL objective, is that if the demonstration data is suboptimal, it would not have an imitation learning regularization term forcing the agent to output suboptimal subgoals. HAC has also demonstrated it can learn more than two levels of hierarchy. The results of HER+BC, which I understood to be HER with a replay buffer augmented with demonstration data, were often ultimately able to match the performance of PEAR (see Figure 14), making it more likely that HAC, which is close to a hierarchical version of HER, should be able to match PEAR. \n\nIn addition, it seems that a pure hierarchical imitation learning approach, in which both levels are trained with supervised learning, should also work, at least potentially with more data. The baseline BC may not have worked well because the tasks were too long horizon, but the addition of a high level policy trained with imitation learning should help.\n\nLevy et al. Hierarchical Actor-Critic. 2017" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Have you tested on suboptimal demonstrations? 
Are there any interesting findings/results which may point to future avenues for research or failings of the method which should be investigated/improved upon?\n\nWere you able to find notable reasons for why the MSE-regularized learning objective would occasionally outperform the IRL-regularized version? Is there any relationship to task difficulty, data diversity, etc?\n\nIn “Algorithm 2: PEAR,” line 8, shouldn’t the lower-level policy’s IL regularization be done with $D_g$? Since we are providing state $s^f$ from the goal dataset $D_g$ and subgoal supervision $s^e_g$ to the goal-conditioned low-level policy, then the policy predicts action $a$, and we regularize this to be close to the dataset action $a^f$ (either with MSE or the IRL objective)?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors carefully outline their construction of the PEAR algorithm, justifying the usage of each component (goal relabeling, the general algorithm, the joint optimization framework, etc) thoroughly and clearly. The sub optimality analysis provides further credence to their method. The method is novel, and notably outperforms previous HRL works (while, in some cases, removing the need for e.g. hand-made action primitives).\n\nI agree with the author view that the significance of the work should be gauged less by its immediate improvement over other LfD methods, and more by its conceptual groundwork. In this regard, this paper is well-written, the findings are well-presented, and the extensive ablations provide further insight into key aspects of the method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "PEAR combines its key feature, adaptive goal relabeling, with IL regularization (either MSE or IRL), and other tricks (e.g. 
margin classification objective) to beat several prior HRL methods on a standard array of tasks using expert demonstrations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As authors note, the method is currently reliant on expert demonstrations. However, many benchmarks exist which include other kinds of demonstrations, including human teleop. While the method may not perform well on these demonstrations just yet (as it is listed as an aim of future work), providing results on suboptimal demonstrations would help demonstrate concretely the strong and weak points of authors' method, and potentially provide insights on why it fails in these settings.\n\nFurthermore, it seems inaccurate to state that PEAR uses only “a handful” of demonstrations, when Fig. 13 shows that generally 50-70+ demonstrations are needed to solve the provided tasks (with the exception of Franka Kitchen, which provides fewer demos)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Many existing works [1,2,3] also use IL loss from data to regularize high-level policy learning. What differentiates the proposed regularization method from these approaches?\n2. Why can we set the threshold of Q to 0 (Section 4.1) in all experiments? 
I believe this hyperparameter should vary, depending on the reward function specific to each task.\n\n[1] Pertsch, et al. \"Accelerating reinforcement learning with learned skill priors.\" Conference on robot learning. PMLR, 2021.\n[2] Shi, et al. \"Skill-based model-based reinforcement learning.\" arXiv preprint arXiv:2207.07560 (2022).\n[3] Yuan, et al. \"Pre-training goal-based models for sample-efficient reinforcement learning.\" The Twelfth International Conference on Learning Representations. 2024." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Using a few expert demonstrations to improve goal-based HRL is promising. The proposed adaptive relabeling method for IL regularization is straightforward, well-motivated, and yields good results.\n2. The paper includes some theoretical analysis.\n3. The experiments cover a variety of robotic tasks, including real-world test." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method to improve HRL, utilizing a few expert demonstrations. The key insight is that subgoals selected from the dataset can effectively guide exploration for the policy. An adaptive relabeling method is proposed to select the proper subsequent subgoal based on the Q value of the low-level policy. The relabeled data provides an imitation-based regularization for the high-level policy, encouraging it to output reachable, high-quality subgoals for the low-level policy. Experiments in diverse simulation robotic tasks demonstrate the effectiveness of the method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Inconsistent definition: Section 3 states that the expert data contains only states. 
But in Section 4.2, the low-level regularization term uses actions from the expert data.\n2. Addressing the non-stationarity issue in HRL is a main claim of the paper. However, the proposed method does not resolve this issue. The high-level policy still faces non-stationarity, as the transitions in its replay buffer, which are determined by the low-level policy, continue to change throughout the training process." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We effectively leverage expert demonstrations using our adaptive relabeling based approach to deal with non-stationarity in the context of hierarchical reinforcement learning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024pear,\ntitle={{PEAR}: Primitive Enabled Adaptive Relabeling for Boosting Hierarchical Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0nJEgNpb4l},\nnote={under review}\n}" }, "abstract": { "value": "Hierarchical reinforcement learning (HRL) has the potential to solve complex long horizon tasks using temporal abstraction and increased exploration. However, hierarchical agents are difficult to train due to inherent non-stationarity. We present primitive enabled adaptive relabeling (PEAR), a two-phase approach where we first perform adaptive relabeling on a few expert demonstrations to generate efficient subgoal supervision, and then jointly optimize HRL agents by employing reinforcement learning (RL) and imitation learning (IL). We perform theoretical analysis to bound the sub-optimality of our approach and derive a joint optimization framework using RL and IL. Since PEAR utilizes only a handful of expert demonstrations and considers minimal limiting assumptions on the task structure, it can be easily integrated with typical off-policy RL algorithms to produce a practical HRL approach. 
We perform extensive experiments on challenging environments and show that PEAR is able to outperform various hierarchical and non-hierarchical baselines and achieve up to 80% success rates in complex sparse robotic control tasks where other baselines typically fail to show significant progress. We also perform ablations to thoroughly analyze the importance of our various design choices. Finally, we perform real world robotic experiments on complex tasks and demonstrate that PEAR consistently outperforms the baselines." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/031a14e8a5e3b4b8ba5c6811c2665675d6663735.zip" }, "title": { "value": "PEAR: Primitive Enabled Adaptive Relabeling for Boosting Hierarchical Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0nJt9aVGtl
WaveDiffusion: Exploring Full Waveform Inversion via Joint Diffusion in the Latent Space
main
Active
Full waveform inversion;Diffusion model;Partial differential equation
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;6;6
5;5;3;4
3;3;3;3
2;1;3;3
2;3;4;3
4.5
4.25
3
2.25
3
-0.904534
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tHow acoustic data and velocity models were preprocessed before training?\n2.\tThe authors trained the model for 1000 epochs. How long was it in terms CPU/GPU time (depending on the dataset)?\n3.\tThe discussion in the manuscript covers only generation and inversion of 2D spatial data. While 3D models/data are of much higher interest. Could the proposed algorithm be used in the 3D case? What will the implication on computational complexity?\n4.\tAn important test for an inversion code is to check that symmetric data with respect to some plane produces a symmetric velocity model. Would the presented generation model obey this principle?\n5.\tIt is not clear from the experiments how the presented algorithm compares to baselines. Section 4.2.3 Comparison with Inversionnet does not give the answer on the obvious question: which of the two algorithms is better." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Generative AI is transforming different industries in our days and its use for data inversion looks like a promising research direction. Both theoretical and experimental parts are well-present and easy-to-follow. 
An important original feature of the work is joint generation of acoustic data and velocity models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript introduces a new approach to invert acoustic wave equation data based on a joint generative process. Although there were earlier papers on the use of generative models for data inversion, the presented approach looks fairly original. The authors study the famous geophysical problem known as full waveform inversion (FWI). The approach was tested on 2D spatial data from public dataset OpenFWI." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tIt is not clear (at least none of the experiments show this) how to use the presented algorithm to invert actual data. It is shown how to generate acoustic data and velocities. But what is typically expected by the reader is the answer on what to do when we are given with some specific seismic data.\n2.\tSection 4.2.3 Comparison with Inversionnet is not sufficiently complete and convincing. See Questions below.\n3.\tThe geophysical terminology is mixed in the manuscript. Notice the used wave equation models $acoustic$ data. This is a significant simplification of seismic phenomena. In other words, the terms $acoustic$ and $seismic$ are not interchangeable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written and the diffusion approach in the latent space is an interesting extention to dual autoencoder approaches. With convincing results to support this research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new framework for Full Waveform Inversion (FWI) that uses a joint diffusion process in a shared latent space. This approach merges the bottlenecks of two separate autoencoders (one for seismic data and one for velocity maps) into a unified latent space." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My major problem with this manuscript is that the main approach to generating two joint autoencoders is not novel. A similar approach including similar experiments has been proposed and published the approach on dual autoencoder before this submission (https://arxiv.org/pdf/2305.13314) and another publication on dual autoencoder can be found at (https://arxiv.org/pdf/2405.13220). These contributions are neither acknowledged nor cited. The remaining novelty is the diffusion process within the latent spaces which is by itself an interesting idea and should have been stated as the contribution of this manuscript." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Could you comment on the comparison with the existing conditional generative models? Why didn't you compare to any of the existing methods at least in 4.2.4 section?\n2. Could you comment on the choice of the reconstruction method? I think it would be beneficial to add at least one additional data-driven solver. It would be interesting to see how the reconstruction methods work with data generated by different generative models.\n3. The generative model was trained on the same OpenFWI dataset on which InversionNet was later evaluated. What is the amount of data your generative model should be trained with and how does it compare to the size of a dataset reconstruction methods (e.g., InversionNet) should be trained with? If the size of a dataset for reconstruction methods is satisfying what is the rationale of doing this? I think you should address the limitations of such a setup.\n4. In continuation to the previous question, how realistic is the Gen+1\\% case? In this case, you trained your generative model on the same data distribution as in the 1\\% of the original dataset. If a real dataset is small, wouldn't it be more realistic to train your generative model with real data that differ from the distribution in the small dataset? 
Maybe a more realistic case would be to train the generative model on the two subsets and add 1\% of the third subset of OpenFWI. Could you comment on this? What are the implications of the existing setup for real-world applications of the method?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The proposed paper is well-organized and the idea is clearly presented.\n- The paper offers a new perspective on the FWI generation problem by simultaneously generating two modalities -- seismic data and velocity maps -- from the shared latent space. This is a novel idea in contrast to the existing related work which treats these two modalities separately.\n- Treating seismic data and velocity maps separately limits the ability to generate physically consistent seismic-velocity pairs. In contrast, jointly generating these modalities makes them approximately consistent with the governing PDE that describes the relationship between them.\n- The extensive experiments confirm the soundness of the proposed method and show that the jointly generated seismic-velocity pairs can be a useful supplement to real training data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Full waveform inversion (FWI) is a seismic imaging technique that traditionally reconstructs the subsurface velocity model by iteratively comparing observed and predicted seismic data. More recently, machine learning-based approaches solve FWI by treating it as an image-to-image translation problem. Furthermore, generative diffusion models mainly treated FWI as a conditional generation problem where the velocity map is generated from given seismic data. 
Namely, the paper considers whether the two modalities -- seismic data and velocity map -- can be generated simultaneously. Two key steps are proposed: first, a dual autoencoder encodes the two modalities in a shared latent space that provides a coarse approximation of the wave equation solution. Second, a diffusion process in the latent space refines the coarse latent representations which are later decoded into seismic data and velocity maps. In contrast to seismic-velocity pairs generated by the conditional models which often lack physical consistency, the jointly generated pairs approximately satisfy the governing PDE without any additional constraint. The paper's main goal is to offer a new perspective by extending FWI from a conditional generation problem to a joint generation problem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think there are three main problems in the experiments:\n- The method wasn't compared to any existing conditional generative methods. There is even a section 4.2.4. that compares separate vs. joint diffusion but there the separate diffusion was the same model as for the joint diffusion but with a single branch kept active and the latent space no longer shared. I think it would be useful to see how the proposed method compares to the existing methods (e.g., [1]) both in terms of the diversity of the generated data and the performance of the reconstruction methods when trained on the generated data.\n- The results might also differ based on a different reconstruction method other than InversionNet (e.g., [2] and/or [3]). I think it would be beneficial to add at least one additional data-driven solver.\n- Some of the experiments in the results section do not seem to be realistic. (see more in the questions section)\n\n---\n\n[1] F. Wang, X. Huang, and T. A. Alkhalifah. 
\"A prior regularized full waveform inversion using generative diffusion models.\" IEEE Transactions on Geoscience and Remote Sensing, 61:1-11, 2023.\n\n[2] P. Jin, X. Zhang, Y. Chen, S. Huang, Z. Liu, and Y. Lin. \"Unsupervised learning of full-waveform inversion: Connecting CNN and partial differential equation in a loop.\" ICLR, 2022.\n\n[3] Z. Zhang, Y. Wu, Z. Zhou, and Y. Lin. \"VelocityGAN: Subsurface velocity image estimation using conditional adversarial networks.\" In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 2019, pp. 705-714." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The interesting parts of the paper are actually hiding towards the end.\n\n1. How do you actually do coarse to fine?\n\n2. Given some data $d$ how to you use diffusion to find an appropriate model\n\nI would recommend re-writing the paper with section 3.3 in mind. Since training a dual AE is not very innovative and using diffusion of the latent space is not very innovating, the innovation is exactly what you do in 3.3. You could easily develop it to a full paper." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The idea of using a joint feature space is good and then using a diffusion model on this space is also a good idea. The results are interesting and it seems that the approach works for the models in the data base." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper deals with the problem of full waveform inversion.\nThere are two mechanisms that the paper proposes.\n1. The paper uses the same latent space for both the model and the data\n2. They train a diffusion model in the latent space. Such a diffusion model can therefore generate a plethora of models and their data.\n\nResults look reasonable even though the models that are being trained on are very simple." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unfortunately, the idea of using an AE with common feature spaces for data and model is not new. See https://paperswithcode.com/paper/paired-autoencoders-for-inverse-problems\nThis is the main problem that the paper have. I understand that in this fast moving field some papers are missed but in this case, the work that was already done makes much of the paper not relevant.\nI would recommend the authors to withdraw the paper, concentrate of the diffusion aspect of the paper and resubmit to a different venue." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024wavediffusion,\ntitle={WaveDiffusion: Exploring Full Waveform Inversion via Joint Diffusion in the Latent Space},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0nJt9aVGtl},\nnote={under review}\n}" }, "abstract": { "value": "Full Waveform Inversion (FWI) is a vital technique for reconstructing high-resolution subsurface velocity maps from seismic waveform data, governed by partial differential equations (PDEs) that model wave propagation. Traditional machine learning approaches typically map seismic data to velocity maps by encoding seismic waveforms into latent embeddings and decoding them into velocity maps. In this paper, we introduce a novel framework that reframes FWI as a joint diffusion process in a shared latent space, bridging seismic waveform data and velocity maps. Our approach has two key components: first, we merge the bottlenecks of two separate autoencoders—one for seismic data and one for velocity maps—into a unified latent space using vector quantization to establish a shared codebook. Second, we train a diffusion model in this latent space, enabling the simultaneous generation of seismic and velocity map pairs by sampling and denoising the latent representations, followed by decoding each modality with its respective decoder. Remarkably, our jointly generated seismic-velocity pairs approximately satisfy the governing PDE without any additional constraint, offering a new geometric interpretation of FWI. The diffusion process learns to score the latent space according to its deviation from the PDE, with higher scores representing smaller deviations from the true solutions. By following this diffusion process, the model traces a path from random initialization to a valid solution of the governing PDE. 
Our experiments on the OpenFWI dataset demonstrate that the generated seismic and velocity map pairs not only exhibit high fidelity and diversity but also adhere to the physical constraints imposed by the governing PDE." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Full waveform inversion", "Diffusion model", "Partial differential equation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a44fecfb5bfbd9910709a7d01117361e01d25f91.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/3e82064be9cab49e3c7147b7990831e6956051c8.pdf" }, "title": { "value": "WaveDiffusion: Exploring Full Waveform Inversion via Joint Diffusion in the Latent Space" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0no1Wp2R2j
Going Beyond Feature Similarity: Effective Dataset distillation based on Class-aware Conditional Mutual Information
main
Active
dataset distillation;conditional mutual information
other topics in machine learning (i.e., none of the above)
3;6;6;6
4;3;4;4
2;3;3;3
2;3;3;3
3;2;3;3
5.25
3.75
2.75
2.75
2.75
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tThe paper is well-organized and clearly presents the methodology, results, and analyses. Figures and tables are effectively used to convey improvements and insights. However, further explanation of certain key terms, such as \"empirical CMI,\" might enhance accessibility for readers unfamiliar with the topic.\n2.\tThe ablation studies conducted to assess the influence of the weighting parameter on the CMI constraint are informative. Still, a broader exploration of other hyperparameters affecting CMI estimation, such as the dimensionality of feature space and network depth, could reveal potential optimizations.\n3.\tThe potential of CMI for real-world applications, such as federated learning or privacy-preserving tasks, is not discussed. Given the emphasis on dataset distillation's applications in these areas, an exploration of how CMI might support these domains would align well with the broader goals of dataset distillation research." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe idea of using CMI in dataset distillation to address the inherent class-aware complexity issue is interesting.\n2.\tThe experiments are conducted based on multiple datasets and various model architectures, providing solid evidence for the method's effectiveness. \n3.\tThe proposed method CMI is a versatile, \"plug-and-play\" regularization component that can be applied to numerous dataset distillation methods, such as DSA, MTT, and IDC. This flexibility allows the approach to generalize across different scenarios and highlights its robustness.\n4.\tBy controlling the complexity of the synthetic dataset, the CMI-enhanced loss achieves faster convergence and reduces the number of required training iterations, which is particularly beneficial for large-scale datasets and resource-intensive models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new approach for dataset distillation by introducing a class-aware conditional mutual information (CMI) metric to address challenges in creating compact, representative synthetic datasets. Traditional dataset distillation methods often compress feature similarity without considering class-specific complexity, making it hard for models to generalize across different classes. This work leverages CMI as a regularization constraint, optimizes synthetic datasets and improves training efficiency as well as model performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tWhile the paper demonstrates the CMI constraint’s benefits clearly, this method also introduces additional computation overhead, especially when dealing with high-resolution datasets. 
Although the authors briefly mention several strategies for mitigating this cost (e.g., reducing the frequency of CMI calculations), a more thorough discussion on balancing cost and performance might strengthen the practical feasibility.\n2.\tAlthough empirical evidence is strong, the theoretical basis for CMI as a regularization term could be expanded. Specifically, further details on how CMI inherently captures class complexity or why it is preferable to alternative complexity measures would provide deeper insight.\n3.\tWhile the experiments on Tiny-ImageNet and ImageNet-1K are promising, it remains unclear how the proposed method scales with even larger datasets or more complex models, such as those used in real-world applications with hundreds of classes. Additional experiments in such contexts would further show the robustness of this method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please kindly refer to the above weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe proposed CMI method is a relatively simple yet effective approach that is plug-and-play in nature. 
It has demonstrated its effectiveness across multiple baseline methods.\n2.\tThe motivation behind the method proposed in the paper is solid and is supported by a certain theoretical foundation.\n3.\tThe experiments in the paper are comprehensive, conducted across various scales of datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Conditional Mutual Information (CMI) method as a plug-and-play loss function to enhance the performance of dataset distillation methods. Experiments conducted on multiple baseline methods demonstrate the effectiveness of the CMI loss." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThere are now newer and more powerful methods available, such as \"Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching\" (ICLR 2024). The authors could consider experimenting with their proposed method on these methods.\n2.\tThe description of the method in the paper could be clearer, particularly regarding the explanation of the formula symbols, to better emphasize the key points of the approach. Currently, it appears somewhat ambiguous.\n3.\tIn my view, using mutual information or KL divergence is not a particularly novel approach, as it has been employed in many works across various fields." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Can you discuss any potential limitations of your proposed method and suggest directions for future work to address these limitations?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The strengths of this paper lie in its comprehensive experimentation across diverse datasets and network architectures, which effectively demonstrates the versatility and robustness of the proposed method. Furthermore, the method's ability to be integrated as a plug-and-play module into existing dataset distillation techniques, regardless of their optimization objectives, showcases its innovation and flexibility, making it a significant contribution to the field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel regularization method for dataset distillation (DD) by minimizing both the distillation loss and Conditional Mutual Information (CMI) of synthetic datasets. It uses an efficient CMI estimation method to measure class-aware properties and combines CMI with existing DD techniques. Experiments show that the proposed CMI-enhanced loss significantly outperforms state-of-the-art methods, improving performance by up to 5.5%. The method can be used as a plug-and-play module for all existing DD methods with diverse optimization objectives." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper lacks a clear discussion of the limitations of the proposed method. 
Furthermore, the authors should consider using more intuitive explanations, visual aids, and pseudocode to help readers better understand the technical details of the method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The logic is clear.\n- The experiments are comprehensive.\n- The review of related works is thorough.\n- The proposed method is theoretically sound." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a plug-and-play method, termed CMI, designed to enhance existing DD techniques by minimizing conditional mutual information. By applying CMI, the distilled data is concentrated more effectively around the center of each class, thereby improving generalizability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The core ideas, methodology, and formulations in this paper draw substantially from the approach proposed in [1].\n- If \\hat{Z} contains excessive confusing or uninformative information related to S , then H(S | \\hat{Z}, Y) will not be reduced; rather, it could remain the same or even increase. 
This is because conditional entropy reflects the remaining uncertainty in S after observing both \hat{Z} and Y . When \hat{Z} is noisy or irrelevant for predicting S , it does not help in reducing this uncertainty.\n- Line 213 states that “minimizing the class-aware CMI reduces the uncertainty brought to \hat{Z} conditioned on S ,” which should be “minimizing the class-aware CMI reduces the uncertainty brought to S conditioned on \hat{Z}”.\n- The authors’ derivation of Equation 6 lacks an explicit explanation, making it challenging to fully understand the transition from previous formulations.\n- Works like [2] and [3], which also target improvements in dataset distillation, are not adequately considered. \n- Equation 3 requires summing over all synthetic instances within class y; it is unclear how the authors adapt this approach to instance-based distillation methods like SRe2L. \n\n[1] Bayes conditional distribution estimation for knowledge distillation based on conditional mutual information\n\n[2] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching\n\n[3] Prioritize Alignment in Dataset Distillation" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "introducing conditional mutual information to enhance the performance and the efficiency of dataset distillation" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024going,\ntitle={Going Beyond Feature Similarity: Effective Dataset distillation based on Class-aware Conditional Mutual Information},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0no1Wp2R2j},\nnote={under review}\n}" }, "abstract": { "value": "Dataset distillation (DD) aims to minimize the time and memory consumption needed for training deep neural networks on large datasets, by creating a smaller synthetic dataset that has similar performance to that of the full real dataset. 
However, current dataset distillation methods often result in synthetic datasets that are excessively difficult for networks to learn from, due to the compression of a substantial amount of information from the original data through metrics measuring feature similarity, e.g., distribution matching (DM). In this work, we introduce conditional mutual information (CMI) to assess the class-aware complexity of a dataset and propose a novel method by minimizing CMI. Specifically, we minimize the distillation loss while simultaneously constraining the class-aware complexity of the synthetic dataset by minimizing its empirical CMI from the feature space of pre-trained networks. Through a thorough set of experiments, we show that our method can serve as a general regularization method for existing DD methods and improve their performance and training efficiency." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "dataset distillation", "conditional mutual information" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f084a9da4e57ad80c07755d6ecfa5454e72659bb.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Going Beyond Feature Similarity: Effective Dataset distillation based on Class-aware Conditional Mutual Information" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0nxocR2qx4
ROPO: Robust Preference Optimization for Large Language Models
main
Active
preference optimization;large language models;noise tolerance
foundation or frontier models, including LLMs
5;6;6
4;4;4
2;3;3
2;3;3
3;3;2
5.666667
4
2.666667
2.666667
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1) Only one type of practical noise is considered in the paper, specifically, the assumption that annotators inherently favor outputs from larger models over those from smaller ones. What are other types of practical noise? \n\n(2) The authors mention ROPO is an iterative alignment approach. How does the iterative process take place? It is unclear based on the methodology descriptions in the paper. The authors may provide a detailed algorithm sketch to describe the iterative process." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper presents a well-motivated study on addressing annotator noise in preference alignment, an issue that is critical for developing reliable policy models.\n\n2. The paper provides a thorough and sensible theoretical analysis of DPO's limitations in discriminating between noisy and clean samples. It also demonstrates how the addition of a regularization loss helps mitigate these issues." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies preference alignment in the presence of poorly-annotated preference pairs. 
The authors propose a robust preference optimization (ROPO) framework with two key considerations: (1) a noise-robust loss function that suppresses the gradients of samples that the policy model is uncertain about; (2) a robustness-guided rejection sampling technique designed to balance the filtering of noisy samples with the preservation of important information from queries that might otherwise be discarded.\n\nIn the experiments, the authors demonstrate that the policy model aligned with ROPO shows the least drop in performance (win rate against a reference model as judged by GPT-4) with an increasing proportion of injected noise in the training data. The injected noise includes both artificial noise, such as flipping the preference labels of training pairs, and practical noise, where responses from a larger model are blindly assumed to be preferred over those from a smaller model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited test datasets. Performance evaluation is only conducted on AlpacaEval and the test split of Reddit TL;DR, with a lack of comprehensive results on multiple instruction-following / alignment benchmarks, such as Wildbench, Arena-Hard, MT-Bench, etc.\n\n2. The paper considers using loss values to identify model-uncertain samples in the robustness-guided rejection sampling procedure as a major contribution. Yet, there have already been several related works, like [1].\n\n[1] Secrets of RLHF in Large Language Models Part II: Reward Modeling. \n\n3. Lack of human evaluation. The analysis is based on GPT-4, which can be biased in its evaluation." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could you provide a clear definition of noise in the original data and compare the characteristics of noisy data with clean data? Estimating the noise rate in the dataset would add valuable context and make the approach more impactful.\n2. Why choose $\\frac{4 \\alpha}{(1+\\alpha)^2}$ to normalize the ROPO loss? Does this yield any specific advantages over other functions?\n3. Besides ROPO's regularization terms, could alternative regularization strategies be applied, and how would they impact performance?\n4. Could the rejection sampling introduce its own form of bias, especially if it favours certain types of responses?\n5. Given ROPO’s iterative nature, what is the computational cost relative to simpler, non-iterative methods, especially for very large LLMs?\n6. Does the model’s performance depend on specific types or levels of noise, and how would it handle different real-world noise distributions?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. An iterative training approach that optimizes LLM performance while filtering out noisy samples.\n2. Experimental results demonstrate improvements over DPO.\n3. 
The use of rejection sampling effectively compensates for information lost during the noise filtering step." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the RObust Preference Optimization (ROPO) framework, a method designed to improve preference alignment in large language models (LLMs) by addressing the challenges posed by noisy preference data. ROPO employs a noise-tolerant loss function and an iterative process that integrates noise filtering during training. Additionally, ROPO includes a robustness-guided rejection sampling technique to retain valuable data while filtering noise. Experiments show that ROPO outperforms existing methods under various noisy conditions, offering a scalable and effective approach to aligning LLMs with human preferences without the need for external models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper addresses the impact of noisy data, it lacks a clear definition or characterization of what constitutes noisy data and how it is identified.\n2. In the loss function, the primary contribution is the addition of a regularization term, which is not significantly different from the original DPO approach, aside from a scaling coefficient applied to the DPO loss.\n3. The selection of $\\alpha$ is highly variable, making it difficult to determine an optimal value." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you provide a more detailed overall description of the ROPO framework to clarify how the components (noisy sample filtering, rejection sampling stages, and noise tolerance training) are integrated?\n\n2. Can you include details the iterative process of the ROPO method?\n\n3. Do different tasks require extensive hyperparameter tuning, and if so, how does this affect the practical value of the ROPO method?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The author demonstrated through extensive derivations that methods such as DPO are not noise-tolerant and have difficulty distinguishing between noisy and clean samples. Additionally, the gradient weighting strategy of DPO amplifies the impact of noise. The author derived a loss as a regularizer through a conservative gradient weighting strategy to prevent the model from overfitting to noisy samples and to identify noisy samples.\n\n2. The author not only proved the effectiveness of ROPO on artificial noise but also validated that ROPO can still outperform DPO and other baselines in more practical noisy scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines the unavoidable presence of noise in preference learning and its significant impact on the performance of Large LLMs. Previous research has only slightly reduced the negative effects of noise, which persists during the training phase. Additionally, efforts to filter out noisy samples often lead to increased computational costs. 
To address these challenges, the paper introduces the ROPO framework, which combines noise tolerance and the filtering of noisy samples. It also incorporates the technique of rejection sampling to further enhance performance. Specifically, the authors mathematically derive a loss function designed to suppress the gradients of samples with high uncertainty. This approach prevents the model from overfitting to noisy samples while simultaneously identifying them. The effectiveness of the ROPO framework is demonstrated across three datasets in both practical and artificially noisy scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the author presented the framework of ROPO in Figure 1, the paper still lacks an overall description of ROPO, making it difficult to understand how the components of ROPO—noisy sample filtering, rejection sampling stages, and noise tolerance training—are integrated and how the method works iteratively. The author could perhaps add some overall descriptions of the framework.\n\n2. ROPO inevitably introduces too many hyperparameters, such as the trade-off hyperparameter alpha and the sample filtering ratio, which seem to require experimental determination. Along with the hyperparameter beta from DPO, does this make the ROPO algorithm more complex? For example, would different tasks require exploring different combinations of hyperparameters, thereby weakening its practical value?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose an iterative alignment framework that mitigates the impact of preference noise by effectively identifying and filtering noisy samples." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024ropo,\ntitle={{ROPO}: Robust Preference Optimization for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0nxocR2qx4},\nnote={under review}\n}" }, "abstract": { "value": "Preference alignment is pivotal for empowering large language models (LLMs) to generate helpful and harmless responses. However, the performance of preference alignment is highly sensitive to the prevalent noise in the preference data. Recent efforts for this problem either marginally alleviate the impact of noise without the ability to actually reduce its presence, or rely on costly teacher LLMs prone to reward misgeneralization. To address these challenges, we propose the **RO**bust **P**reference **O**ptimization (**ROPO**) framework, a novel iterative alignment approach that integrates *noise-tolerance* and *filtering of noisy samples* without the aid of external models. Specifically, ROPO first formulates the training process with adaptive noise reduction as an optimization problem, which can be efficiently solved in an iterative paradigm. Then, to enhance this iterative solving process with noise-tolerance and noise-identification capabilities, we derive a robust loss that suppresses the gradients from samples with high uncertainty. We demonstrate both empirically and theoretically that the derived loss is key to the noise-tolerance and effective filtering of noisy samples. Furthermore, inspired by our derived loss, we propose a robustness-guided rejection sampling technique to compensate for the potential important information in discarded queries. 
Experiments on three widely-used datasets of dialogue and post-summarization demonstrate that ROPO significantly outperforms existing preference alignment methods in the practical noise setting and under artificial random symmetric noise, with its advantage increasing as the noise rate increases." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "preference optimization", "large language models", "noise tolerance" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3a889583ca878242b37732165f98aa71fb4ea235.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ROPO: Robust Preference Optimization for Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0oWGVvC6oq
On Bits and Bandits: Quantifying the Regret-Information Trade-off
main
Active
Online learning;Information theory;Bayesian regret;Bandits
reinforcement learning
5;6;6;8
3;3;3;3
3;3;3;3
3;2;2;3
1;3;3;3
6.25
3
3
2.5
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The problem formulated in this paper seems interesting, and it is interesting to see how information affects learning in general. \nThe paper also accompanies its theoretical results with experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies regret minimization when extra information about the prior is revealed. \nIn particular, the authors consider contextual bandit problems, where at each round, nature reveals some context and the algorithm needs to select actions based on the context. The authors consider a Bayesian setup, where there is a Bayesian prior on the context/reward. The external source reveals extra information about the prior. Under this formulation, the paper studies how external information affects learning and performance.\n\nThe paper proves both upper and lower bounds that depend on the amount of information an agent accumulates. The theoretical results demonstrate that information, measured in bits, can be directly traded off against regret, measured in reward. 
The paper also validates its findings through experiments with both traditional multi-armed bandits and a practical application involving large language models for question-answering tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The technique is not the strong part of this paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can the authors comment on how the proposed regret bounds might extend to adversarial or non-Bayesian settings? Are there particular adjustments or challenges anticipated in these contexts?\n- Could the authors comment on extensions to settings where the information depends on prior actions?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents an interesting information-theoretic approach to quantifying the regret-information trade-off\n- The theoretical approach is rigorous, with clear definitions and proofs\n- The paper is well-organized\n- The paper holds high significance for fields involving sequential decision-making (online learning in particular)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the trade-off between information acquisition and regret minimization in sequential decision-making. It introduces a framework to quantify this trade-off, drawing on information-theoretic methods. The authors present novel Bayesian regret lower bounds that depend on the information accumulated by the agent, and they show how these quantify the relationship between external information and regret. \nFor brownie points, they show an application of their theory to question answering with LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The assumption regarding information gathering being independent of task history could limit applicability in some environments" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "I don’t have further questions beyond what I’ve written above in “Weaknesses”." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I think this work has many of the ingredients of a strong conceptual paper. The authors identify a conceptual phenomenon which spans many mathematical models, formalize that phenomenon, and develop a method which can analyze this phenomenon simultaneously in all of those models.\n\nAlthough the LLM experiment initially felt out of place to me, I actually think it provides a nice complement to the theoretical results (although the theoretical results certainly remain the primary contribution).\n\nOverall, I think the ceiling for this paper is quite high." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies a general sequential decision-making framework from a Bayesian perspective. Within this framework, it is intuitive that the more information the agent accumulates, the lower the resulting regret. The goal of the paper is to formalize that intuition. The paper does the following:\n1. Develops new information-theoretic methods for analyzing sequential decision-making algorithms\n2. Uses those methods to recover existing lower bounds for a range of sequential decision-making problems, such as standard multi-armed bandits, tabular RL, Lipschitz bandits, etc (Table 1).\n3. Obtains lower and upper regret bounds which depend explicitly on the number of bits of information accumulated.\n4. Runs a question-answering LLM experiment inspired by the above results." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have serious concerns about the presentation. Although the conceptual idea behind the paper is intuitive, it took me a while to make sense of the technical content of the paper. I think there are two issues:\n1. Confusing writing and non-standard terminology. \n2. Lack of explanation of the technical statements.\nI have provided a non-exhaustive list of examples below.\n\nAlthough I am not an expert in information-theoretic methods, I am quite familiar with bandits, RL, and Bayesian regret, so more casual readers may struggle even more than me. If the paper purports to elucidate the intuitive tradeoff between information and regret, but the technical results are not accessible to readers, then I believe the impact of the paper will be limited.\n\nI also think the LLM experiments could be improved by including baselines of always querying/never querying the large LLM. Table 2 suggests that with the query cost of 0.1, always querying the large LLM might actually be the optimal policy. To me, this suggests that a larger query cost is needed and calls into question the significance of the evaluation.\n\nOverall, although I think the paper has many merits, I lean towards rejection so that these issues can be addressed, hopefully resulting in a strong final paper.\n\n_Writing issues_\n1. I found it a bit hard to make sense of Section 1.1 (“Contributions”) without at least informally defining the model. It would also be useful to link to the theorems/sections corresponding to each of the results.\n2. Some of the terminology and notation is a bit confusing. Normally $\\pi \\in \\Pi$ denotes a policy, but here it denotes a “decision” (basically an action). Instead, $\\phi \\in \\Delta(\\Pi)$ is called a policy, which seems like it should just be called a randomized decision/action. 
Furthermore, $p_t: \\mathcal{C} \\to \\Delta(\\Pi)$ is _also_ called a policy, which is more in line with the normal usage of “policy”. And $\\pi^*$ is also a function from $\\mathcal{C} \\to \\Delta(\\Pi)$, which gives it a different type than $\\pi$. I have also never before seen the term “epsilon-local set” used to describe epsilon-balls. I would suggest better aligning terminology and notation with the literature.\n3. In Example 2.1, is there a reason that you use Bernoulli rewards instead of general rewards? Does your model not cover contextual MAB with general rewards?\n4. Lines 197 - 215: I assume the rho separation assumption is for policies in $\\Phi$, not for policies in $\\Delta(\\Pi)$? If it is supposed to be $\\Delta(\\Pi)$, that seems like a very strong assumption about the structure of the decision space.\n\n_Lack of interpretation/explanation_\n1. I understand that Yang and Barron also make Assumption 3.1, but it seems pretty unintuitive to me, and I would have appreciated some explanation.\n2. Theorem 3.4, especially (9), is a bit hard to make sense of. Could you provide an interpretation for this expression?\n\n_Minor issues_\n1. Line 41: resource allocation is much broader than the specific routing problem you describe. Consider something like “One example of a resource allocation problem is route a process…” The flow in this section also feels a bit weird since you never bring up resource allocation again in the paper. Consider omitting either the resource allocation or online game example and using a single running example?\n2. Since Section 4 also includes upper bounds, should the title be “Information-theoretical regret upper and lower bounds?”\n3. 
Table 2 caption: the “Appendix ??” reference is broken" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- (3) The first column in Table 2 resembles a budgeted setting. Can there be an autonomous scenario? For instance, let the learning agent decide the proportion of queries." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The submission points out that using Fano, one can introduce the mutual information into the regret bound. The connection between the mutual information and the accumulated knowledge bits (R) provides a means to analyze the effect of the knowledge bits (R) on the regret bound.\n- Moreover, Prop. 4.5 provides an entropy-dependent Bayesian regret lower bound, which is listed in the last entry of Table 1.\n- The advantage of accumulating information in bits is experimentally justified (Figure 2).\n- A bits-based query policy illustrates the advantage of quantifying knowledge in bits and searching for a query that will bring an abundant increase in the knowledge accumulation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The submission studies the information lower bounds of bandits. 
By relating the mutual information to KL divergence and entropy, the submission rephrases regret bounds in terms of information bits. The new form of the bounds allows a learning agent to acquire and accumulate additional knowledge through active queries (as opposed to passive observations in the previous setting). The main results are summarized in Table 1, consisting of Theorem 3.4, Proposition 4.1 and Proposition 4.5. In the experiments, the advantage of information accumulation is verified in a simulation (Figure 1). Then, a query strategy is proposed and tested in MCQA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- (1) Combining Prop. 4.1 and 4.3, the submission provides a range of bits required to achieve a given level of regret. Note that the range has a $\\sqrt{\\log K}$ gap.\n- (2) The proof sketches of the main results (e.g., Proposition 3.2, Theorem 3.4, Proposition 4.1, and the others) are plain. It seems that packing and covering are standard techniques in analysis. Could you please elaborate on the technical challenges and the corresponding contributions in the sketches?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024on,\ntitle={On Bits and Bandits: Quantifying the Regret-Information Trade-off},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0oWGVvC6oq},\nnote={under review}\n}" }, "abstract": { "value": "In many sequential decision problems, an agent performs a repeated task. He then suffers regret and obtains information that he may use in the following rounds. However, sometimes the agent may also obtain information and avoid suffering regret by querying external sources. We study the trade-off between the information an agent accumulates and the regret it suffers. 
We invoke information-theoretic methods for obtaining regret lower bounds, which also allow us to easily re-derive several known lower bounds. We introduce the first Bayesian regret lower bounds that depend on the information an agent accumulates. We also prove regret upper bounds using the amount of information the agent accumulates. These bounds show that information, measured in bits, can be traded off for regret, measured in reward. Finally, we demonstrate the utility of these bounds in improving the performance of a question-answering task with large language models, allowing us to obtain valuable insights." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Online learning", "Information theory", "Bayesian regret", "Bandits" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/23770a57f7efcf848f69bf6d75256aed6983e8da.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "On Bits and Bandits: Quantifying the Regret-Information Trade-off" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0ov0dMQ3mN
CO-MOT: Boosting End-to-end Transformer-based Multi-Object Tracking via Coopetition Label Assignment and Shadow Sets
main
Active
End-to-End Tracking;Transformer;Multi-object Tracking
applications to computer vision, audio, language, and other modalities
3;5;5;6
4;4;5;5
3;2;3;3
2;2;2;3
2;1;2;3
4.75
4.5
2.75
2.25
2
0.688247
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The main experiments are still concentrated on small-scale pedestrian tracking datasets. As mentioned on weakness, for other scenarios, we may face different difficulties. Are there any plans to test the model also on large-scale MOT datasets such as TAO [3]?\n\n[3] Dave, Achal, et al. \"Tao: A large-scale benchmark for tracking any object.\" Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16. Springer International Publishing, 2020." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written, with a clear and logical structure that makes it very easy to follow.\n\n2. The observation on the disproportional assignment of track and detection queries is insightful, highlighting an important yet often overlooked issue in transformer-based MOT. This analysis provides valuable context for the community.\n\n3. The proposed coopetition label assignment strategy is simple and effective. The paper also demonstrates its effectiveness on multiple Transformer-based MOT frameworks, including TrackFormer and MOTR.\n\n4. 
The experiments are thorough, covering multiple benchmarks, including larger-scale autonomous driving scenes such as BDD100K, and demonstrating the method’s robustness and practical impact." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles end-to-end Transformer-based multiple-object tracking. Previous methods, such as TrackFormer and MOTR, face issues with imbalanced distribution in detection and tracking label assignments, where most objects are assigned to track queries, leaving only a few “newborns” for detection queries. This joint training approach results in weaker detection performance compared to tracking-by-detection methods. To resolve this, the paper proposes a coopetition label assignment strategy to re-balance assignments between track and detection queries. Additionally, it introduces a shadow set that changes the original one-to-one mapping in DETR to a one-to-set mapping, further enhancing tracking performance. Results on various benchmarks demonstrate the effectiveness of this method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. To solve the issue of disproportional assignment of track and detection queries, there are also other simpler alternatives. A straightforward option would be to train detection queries jointly on image detection datasets alongside video tracking datasets. For example, detection queries could be trained exclusively on image datasets, treating every object as a new object. An ablation study comparing the proposed method to this simple joint-training alternative would be appreciated.\n\n2. The paper uses the first 5 decoders to train with all queries, while the last one trains separately on detection and tracking queries. An ablation study could clarify whether a different arrangement, such as using the first decoder for track queries and the last five for all queries, would impact performance.
An ablation study regarding this would be helpful for readers to understand the optimal configuration.\n\n3. The applicability of the coopetition label assignment strategy is mostly limited to cases where there is more video data than image data for training, leading to an imbalance in track and detection query assignments. However, in many practical settings, the opposite is true—large-scale [1] and open-vocabulary MOT tasks [2] often have substantially more image detection data than video tracking data. In these cases, common practice in MOT is to use joint training with both image and tracking data, which provides sufficient supervision for detection queries. This is contrary to the paper’s analysis, and it would be beneficial for the authors to also at least discuss these more common scenarios.\n\n[1] Li, Siyuan, et al. \"Matching Anything by Segmenting Anything.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[2] Li, Siyuan, et al. \"Ovtrack: Open-vocabulary multiple object tracking.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The analysis of the drawbacks of existing e2e-trackers is very interesting; it reveals the negative impact of tracking queries on detection queries.\n\n2. The proposed COLA strategy allows tracked objects to be reassigned to detection queries in decoders, resulting in a significant improvement in tracking performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the limitations of existing e2e-MOT methods, particularly the unbalanced training issue caused by the label assignment strategy. It introduces a Coopetition Label Assignment (COLA) strategy and a Shadow Set concept. Through extensive experiments on multiple datasets, it demonstrates superior performance compared to state-of-the-art methods while being more efficient." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. From the public records of OpenReview, it can be seen that this paper was submitted to ICLR2024. The reviewers and AC pointed out many weaknesses last year, while the authors have made almost no improvements in the latest version. \n\n2. Many SOTA trackers developed this year have been overlooked by the authors, such as DiffMOT, MambaTrack, TrackSSM, et al. These new methods have made many improvements, and it would be best for the authors to provide a comparison with the latest methods."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I have listed my concerns and questions in the Weakness part and hope for a response from the authors." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper's strengths lie in its originality, quality, clarity, and significance. It introduces a novel coopetition-based label assignment (COLA) and shadow sets for one-to-set matching, enhancing the robustness of e2e-MOT without requiring additional detectors. The evaluation across multiple datasets, including ablation studies, demonstrates the effectiveness of CO-MOT, establishing its advantages over state-of-the-art models. The paper is well-structured, with clear explanations and helpful visualizations, although further clarification on certain technical aspects could enhance understanding. Overall, CO-MOT significantly improves the efficiency and performance of transformer-based multi-object tracking, making it a valuable contribution to the field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an innovative end-to-end Multi-Object Tracking (MOT) framework that aims to enhance transformer-based MOT models. The authors introduce two key contributions: 1.
Coopetition Label Assignment (COLA) revises label assignment by allowing detection queries to utilize tracked objects during training in intermediate decoders. This approach boosts feature augmentation for tracking objects with diverse appearances and alleviates the issue of tracking termination. 2. The Shadow Set strategy aims to address training imbalance in one-to-one matching: CO-MOT introduces \"shadow sets,\" which add slight disturbances to each query, thus allowing one-to-set matching. This enhances the discriminative training process and the model's generalization. The proposed method outperforms existing e2e-MOT models on benchmarks like DanceTrack and BDD100K with improved tracking metrics such as HOTA and TETA, demonstrating higher efficiency and inference speed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper presents valuable contributions, several weaknesses could be addressed to strengthen its impact and clarity:\n1. The authors should provide a detailed discussion of the differences between COLA and TALA in Section 2.4, as well as their design in the loss function, to facilitate reader understanding. \n\n2. In the experiments section, the authors need to include comparisons with more methods on the MOT20 and BDD100K datasets.\n\n3. Since the authors analyze the impact of tracking queries on detection performance in transformer-based trackers, if this point serves as one of the motivations, they should compare whether the proposed framework shows improvement in mAP in the experiments.\n\n4. The authors should also analyze the effects of different values of $\\lambda$ and $\\Phi$ in Section 2.5 on the experimental outcomes."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Could the authors provide a detailed analysis of the cost brought by shadow sets?\n- Could the authors provide an evaluation and discussion on MOT20?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The motivation of the paper is interesting and clearly demonstrated by the experiments.\n\n- The introduction of COLA and shadow sets does mitigate the biased label assignment issue in e2e-MOT. The proposed approach provides a balanced training process, leading to improved performance.\n\n- The authors conduct experiments on three datasets, demonstrating the effectiveness of CO-MOT across different scenarios. The results are robust and consistent, showing improvements over the baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces CO-MOT, a novel method aimed at improving end-to-end Transformer-based multi-object tracking (e2e-MOT) through a new coopetition label assignment strategy (COLA) and the introduction of shadow sets. The authors address the issue of unbalanced training in existing e2e-MOT methods, where detection queries often lack positive samples, particularly for newborn objects."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The introduction of shadow sets and the coopetition label assignment strategy may increase the computational complexity and training time. The authors should provide a detailed analysis of the computational overhead and discuss potential optimizations. Notably, Fig.4 in the manuscript only presents the FLOPs, which are not the actual training and inference time. Intuitively, more object queries would bring larger computation costs. Why do shadow sets not incur such costs?\n\n\n- Although the proposed method demonstrates strong performance on the tested datasets, it would be advantageous to evaluate CO-MOT on MOT20. The authors assert that the proposed approach enhances detection recall. Thus, the more densely populated nature of MOT20 provides a more suitable context for assessing the effectiveness of the model.\n\n- The authors should investigate the sensitivity of the proposed method to hyperparameters, such as the number of shadow sets and the parameters of the coopetition label assignment strategy. Understanding how these hyperparameters affect performance would provide valuable insights for practical implementation.\n\n- The writing should be improved. For instance, in Fig.3 and Fig.4, the axis titles overlap with the axes. The readability of Figures 3 and 4 could be improved by adjusting the axis labels to avoid overlap. This would enhance the overall presentation of the results."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024comot,\ntitle={{CO}-{MOT}: Boosting End-to-end Transformer-based Multi-Object Tracking via Coopetition Label Assignment and Shadow Sets},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0ov0dMQ3mN},\nnote={under review}\n}" }, "abstract": { "value": "Existing end-to-end Multi-Object Tracking (e2e-MOT) methods have not surpassed non-end-to-end tracking-by-detection methods. One potential reason is its label assignment strategy during training that consistently binds the tracked objects with tracking queries and then assigns the few newborns to detection queries. With one-to-one bipartite matching, such an assignment will yield an unbalanced training, \\textit{i.e.}, scarce positive samples for detection queries, especially for an enclosed scene, as the majority of the newborns come on stage at the beginning of videos. Thus, e2e-MOT will be easier to yield a tracking terminal without renewal or re-initialization, compared to other tracking-by-detection methods. To alleviate this problem, we present Co-MOT, a simple and effective method to facilitate e2e-MOT by a novel coopetition label assignment with a shadow concept. Specifically, we add tracked objects to the matching targets for detection queries when performing the label assignment for training the intermediate decoders. For query initialization, we expand each query by a set of shadow counterparts with limited disturbance to itself. With extensive ablations, Co-MOT achieves superior performance without extra costs, \\textit{e.g.}, 69.4\\% HOTA on DanceTrack and 52.8\\% TETA on BDD100K. Impressively, Co-MOT only requires 38\\% FLOPs of MOTRv2 to attain a similar performance, resulting in the 1.4$\\times$ faster inference speed. Codes are attached for re-implementation." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "End-to-End Tracking", "Transformer", "Multi-object Tracking" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bebda35d8fd280c7cad9a4999ab38345068dff34.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/571ae3f7bedd099b42dd764516d8d53fe963d8c5.zip" }, "title": { "value": "CO-MOT: Boosting End-to-end Transformer-based Multi-Object Tracking via Coopetition Label Assignment and Shadow Sets" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0owAtTCOlU
GRIC: General Representation and Informative Content for Enhanced Out-of-Distribution Detection
main
Active
Out-of-Distribution Detection
other topics in machine learning (i.e., none of the above)
3;3;3;5
5;4;4;4
2;2;2;3
2;2;2;3
3;2;2;2
3.5
4.25
2.25
2.25
2.25
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**1. Presentation**\n\nThe paper is generally well-presented and easy to follow. It begins with a clear hypothesis that using a generalized feature segment from the full feature space can improve ID/OOD sample distinction, followed by a systematic explanation of the proposed method for extracting this general feature space.\n\n**2. Algorithm**\n\nThe algorithm is straightforward and effective, yielding significant performance gains on both small- and large-scale datasets. Ablation studies demonstrate that both the proposed general subspace extraction and hierarchical prompting contribute substantially to the performance improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method called GRIC (General Representation and Informative Content), designed to improve zero-shot and few-shot learning by leveraging representations from large-scale pre-trained models. GRIC integrates domain-specific knowledge into a unified embedding space that allows the model to transfer knowledge effectively across tasks and domains.
The major contributions are the introduction of general ID features for OOD detection with hierarchical prompting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1. Formatting Issues** \n- **Text Accessibility**: The text is not selectable or OCR-scanned, which complicates review and readability.\n- **Font and Equation Sizing**: Equations appear in a very small font, raising concerns about compliance with the `.sty` file specifications; table and figure fonts are also difficult to read.\n- **Inconsistent Spacing**: Vertical spacing is uneven throughout, affecting readability. Additionally, Section 3.2 would be more appropriately placed in the related work section to improve structure.\n\n**2. Missing Experiments and Analysis** \nWhile the paper presents a solid set of experiments across multiple datasets, further analysis would strengthen the justification of the approach:\n - **Single-Modality Vision Models**: The paper should demonstrate the effectiveness of general feature extraction in **vision-only models**, without hierarchical prompting, to show that the method generalizes beyond multi-modal settings.\n - **Integration with Other OOD Scoring Methods**: It would be valuable to evaluate GRIC with alternative OOD scoring metrics, such as **energy-based scores** and **feature-based scores**, to understand its compatibility with established scoring methods beyond MSP.\n - **ID Accuracy**: Given that real-world deployment typically involves handling both ID and OOD data, the paper should report ID accuracy to confirm that GRIC performs reliably on ID data without regression.\n\n**3. Additional Ablation Studies** \n- **PCA Transformed Feature Space**: Examine the effectiveness of using PCA-transformed features (from \\( R^{s \\times r} \\) to \\( R^{s \\times k} \\), where \\( k \\) is the number of principal components) for OOD detection. 
\n- **Principal Component Masking**: Evaluate whether masking high-variance principal components in the PCA-transformed space, while using the remaining components, can improve OOD detection by focusing on features less affected by dominant ID patterns.\n- **Full Feature Matrix for PCA**: Justify why the paper does not use the full feature matrix across all samples per class to compute PCA, as this could potentially improve the robustness of general feature extraction.\n- **Hyperparameter Sensitivity**: Include a sensitivity analysis on the threshold in Equation 3, as this parameter may significantly influence detection performance.\n\nI would reconsider the scoring based on the authors' response." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I wonder about the motivation of this method, namely the claim that class-specific information is unnecessary. To validate this statement, I would like to know the result of GRIC without informative prompts in Table 6 to clarify whether the information being removed is indeed ID class-specific, not a noisy signal.\n\nAlso, I would like to know the result of hard OOD detection.\n\nFor more details, please refer to the Weakness section."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- GRIC surpasses the two baseline methods, MCM and GL-MCM." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new enhancement method called GRIC for CLIP-based OOD detection. GRIC extracts general ID representations rather than class-specific features and introduces LLM-based informative prompts for OOD detection. Experimental results show the proposed GRIC outperforms existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The novelty is limited. DICE [1] has a similar concept of dropping unnecessary dimensions and shows its effectiveness for OOD detection.\n\n- The motivation in this paper’s method—that class-specific information is unnecessary—raises some questions. In DICE, the motivation is to exclude signals that introduce noise. Rather than removing information specific to the ID class, I consider that this method actually excludes noise signals. Including the ID accuracy of GRIC without informative prompts in Table 6 would help clarify whether the information being removed is indeed ID class-specific.\n\n- A recent challenge in OOD detection is accurately identifying \"OOD images that are semantically similar to ID.\" In this problem setting, known as Hard OOD detection, certain classes within a dataset (e.g., ImageNet) are treated as ID, while other classes in the same dataset are treated as OOD. Therefore, I believe class-specific information is necessary rather than relying on the general representation of the dataset. I would like to see results on the effectiveness of this method when experimenting on Hard OOD detection benchmarks [2, 3].\n\n- The approach is defined as a zero-shot method in L518.
However, since it utilizes ID images for PCA processing, I consider this method to be a few-shot learning method, not a zero-shot one. The definition of zero-shot is that ID images are not used in preprocessing, regardless of whether training is involved [4].\n\n- The code has not been shared, raising concerns about the reproducibility of the method.\n\n[1] Sun+, DICE: Leveraging Sparsification for Out-of-Distribution Detection, ECCV2022. \n\n[2] Li+, Learning Transferable Negative Prompts for Out-of-Distribution Detection, CVPR2024. \n\n[3] Jung+, Enhancing Near OOD Detection in Prompt Learning: Maximum Gains, Minimal Costs, arXiv2024. \n\n[4] Miyai+, Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey, arXiv2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The figures and the algorithm appear to be screenshots and are too unclear.\n2. Many typos are in the paper and need to be revised. For example, 'fPR95' in line 412 is misspelled. When a citation is used as the subject, parentheses should not be added. Additionally, lines 493 and 494 overlap due to insufficient line spacing." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.
The concept of general representation of ID data is novel to CLIP-based OOD detection.\n2. The method is well designed with various modules.\n3. The extensive experiments show the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Out-of-distribution (OOD) detection is essential for enhancing the robustness of machine learning models in open-world environments by identifying inputs from unknown classes. While vision-language models like CLIP facilitate zero-shot OOD detection without the need for labels or training on in-distribution (ID) data, existing methods are constrained by their reliance on closed-set text-based labels and complete image feature representations, limiting CLIP's generalization capabilities. This work introduces GRIC, a novel approach that enhances zero-shot multi-modal OOD detection by focusing on general ID representations instead of class-specific features and utilizing large language models (LLMs) to enrich the understanding of ID data and simulate potential OOD scenarios without requiring actual OOD samples. GRIC demonstrates significant effectiveness, achieving a notable reduction in the false positive rate at 95% recall (FPR95) and outperforming state-of-the-art methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. One major issue with this paper is that it claims to be in a zero-shot OOD detection setting, but it should actually be classified as a few-shot setting. This is because the calculation of PCA requires the use of ID data, whereas in a zero-shot setting, ID images should be mixed with OOD images to form the test set, making them unavailable. The entire setting of the paper is flawed and needs to be revised.\n\n2.
There are more state-of-the-art (SOTA) methods for zero-shot OOD detection that have not been compared, such as NegLabel [1], which demonstrates superior performance, and EOE [2], which also utilizes large language models (LLMs) for CLIP-based OOD detection.\n\n3. The results in Table 1 are not representative, as the baseline MCM has already achieved a score of 99%, indicating that the OOD issue in this benchmark has been effectively addressed.\n\n4. There are many more adjustable benchmarks that have not been explored, such as hard OOD detection, robustness to domain shift, and transferring the method to other CLIP-like models (ALIGN, AltCLIP, GroupViT).\n\n[1] Jiang, X., Liu, F., Fang, Z., Chen, H., Liu, T., Zheng, F., & Han, B. Negative label guided ood detection with pretrained vision-language models. ICLR, 2024.\n[2] Cao, C., Zhong, Z., Zhou, Z., Liu, Y., Liu, T., & Han, B. Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection. In Forty-first International Conference on Machine Learning." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weakness" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. 
The paper is well-crafted and clearly presented, with an engaging motivation and good performance results.\n2. Extensive experiments demonstrate the effectiveness of the proposed method.\n3. The supplementary material provides useful experiments and details." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces GRIC, a novel approach for zero-shot multi-modal OOD detection aimed at enhancing the robustness of machine learning models in open-world environments. Unlike existing methods that rely on closed-set text-based labels and complete image features, GRIC leverages general ID representations and LLMs to improve OOD detection. GRIC's approach rests on two main insights: (1) using general ID representations instead of class-specific features, and (2) enriching the model’s understanding with LLMs to simulate potential OOD scenarios. This method is straightforward yet effective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors claim that \"GRIC reduces the FPR95 by up to 19%, significantly surpassing SOTA methods.\" However, this statement is inaccurate. For instance, NegLabel [1], published a year ago, achieved an FPR95 of 25.40% on the ImageNet-1k benchmark, while the proposed method achieves 20.32%. Thus, the actual improvement is, at most, 5%.\n\n2. I understand that it may be overkill to ask the authors to compare their methods with [2]. However, since [2] also utilizes superclasses for constructing prompts and achieves even higher performance (17.51% evaluated by FPR95), I consider it valuable for authors to add a discussion about the similarities and differences between their proposed method and [2]. 
If possible, [1] and [2] should be mentioned in the related work part and added to Table 2 to provide a more comprehensive comparison, which will not harm the unique contribution of this work.\n\n[1] Negative Label Guided OOD Detection with Pretrained Vision-Language Models. ICLR, 2024.\n[2] Conjugated Semantic Pool Improves OOD Detection with Pre-trained Vision-Language Models. NeurIPS, 2024.\n\n3. If possible, the authors are recommended to provide more visualization results for deeper analysis.\n\n4. There are multiple typos. It is recommended that the authors conduct a thorough review of the writing. For example, Line 110: G(x;Yin). L278: FOr this sake.\n\n5. The paper has severe formatting weaknesses." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024gric,\ntitle={{GRIC}: General Representation and Informative Content for Enhanced Out-of-Distribution Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0owAtTCOlU},\nnote={under review}\n}" }, "abstract": { "value": "Out-of-distribution (OOD) detection is crucial for ensuring the robustness of machine learning models in open-world scenarios by identifying inputs from unknown classes. Vision-language models like CLIP have enabled zero-shot OOD detection without requiring labels or training on in-distribution (ID) data. However, current approaches are limited by their dependence on \\textit{closed-set text-based labels} and \\textit{full image feature representations}, constraining CLIP’s capacity to generalize across diverse labels. 
In this work, we propose GRIC, a novel method that improves zero-shot multi-modal OOD detection by leveraging two key insights: (1) OOD detection is driven by general ID representations rather than class-specific features, and (2) large language models (LLMs) can enrich the model’s understanding of ID data and simulate potential OOD scenarios without actual OOD samples. GRIC is simple yet highly effective, reducing the false positive rate at $95\\%$ recall (FPR95) by up to $19\\%$, significantly surpassing state-of-the-art methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Out-of-Distribution Detection" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/da7fe8eabcb15a602e2af7f3cf4929bc15d78477.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/2e3388a4051f888cfb6ea0b363d5268748edd9b1.pdf" }, "title": { "value": "GRIC: General Representation and Informative Content for Enhanced Out-of-Distribution Detection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0owyEm6FAk
Attack on LLMs: LoRA Once, Backdoor Everywhere in the Share-and-Play Ecosystem
main
Active
LoRA;PEFT;LLM Safety;Backdoor;Backdoor Attack
alignment, fairness, safety, privacy, and societal considerations
3;5;5;5;5;5
5;4;2;3;4;4
1;2;3;3;2;3
1;2;2;3;3;2
2;3;4;3;2;3
4.666667
3.666667
2.333333
2.166667
2.833333
-0.632456
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "There is a potential security risk considering that this work proposes a recipe for embedding back-door attacks in LoRA adapters. The authors do explain that the aim is to alert the community of this new security risk." }, "flag_for_ethics_review": { "value": [ "Yes, Potentially harmful insights, methodologies and applications" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could you please provide more details of a practical scenario of this attack?\n- What are the implications of this attack on other PEFT techniques?\n- How would the use of multiple LoRA adapters, mentioned in L068, affect the attack?\n- How do you aggregate the multiple Task Performance evaluation metrics mentioned in L247 into one, in Table 1.\n- Considering that the aim of this work is to inform the community of this risk, are you also planning to release the source code of your experiments?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Well written paper, no obvious spelling or grammatical issues. Well structured and motivated.\n- Good effort introducing and motivating the problem\n- Detailed Background, and Related Work section that helps with understanding the topic\n- Fair thread model and overall assumptions. 
I agree that it is possible to embed a backdoor into LoRA adapters.\n- Methodology, results and discussions are sound." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work investigates the security risks associated with the convenient share-and-play ecosystem of LoRA when fine-tuning large language models (LLMs). The authors highlight a security risk, LoRA-as-an-Attack, where attackers can encode stealthy but adversarial behavior into a LoRA adapter, potentially influencing user preferences through bias and misinformation, focusing on advertisement- and sentiment-based attacks. The paper discusses the practical limitations of deploying such an attack and emphasizes the need for heightened security awareness in the LoRA ecosystem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- My main complaint is about the contribution of this work. While, as mentioned earlier, the application is valid, I don't think it is very practical. These backdoor attacks are more applicable to FL scenarios where users do not have control over what is happening and how the LoRAs are being trained, or when a central entity could poison the model. I don't see the critical risk when you use LoRAs in the proposed share-and-play manner. If a user downloads an adapter, I would expect them to download it from a trustworthy entity. I guess the trust is the same as trusting a big open-source model (e.g., Llama).\n- I would have expected a more thorough analysis, with different types of PEFT techniques. How does this apply to QLoRA, for instance?\n- It was not clear to me how the authors combined the evaluation metrics into one, presented as Task Performance.\n- The background section was detailed. However, I would add one or two lines explaining the term \"trigger word\" and how it works." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Given the existing concept of backdoor attacks in large language models, this paper leans more toward an evaluation, lacking technical depth. Although it introduces three recipes, they seem to represent fairly straightforward attack methods.\n\n2. Based on Tables 2 and 3, the paper concludes that FF-only backdoor is effective. However, I have some questions about this conclusion. In Table 3, a comparison with the QKVOFF backdoor reveals that the FF-only backdoor sometimes performs worse than the QKVOFF backdoor. Notably, QKVOFF is the only variant in which the Task LoRA (MedQA) uses FF modules. This means that, in other cases, the Task LoRA’s FF modules remain unchanged, having no impact on the FF module in the FF-only backdoor. Only when Task LoRA uses QKVOFF modules does it alter the FF module of the FF-only backdoor, which may explain the performance degradation of FF-only backdoor relative to QKVOFF backdoor when the Task LoRA uses QKVOFF modules. Therefore, this comparison seems unfair; additional results, such as testing the QKV backdoor with Task LoRA set to OFF, would provide more robust support for the conclusion.\n\n3. I find the paper's training-free recipe impractical. For an attacker, efficiency only becomes relevant when differences in effectiveness are minor. 
Specifically, LLM responses are highly variable, and the similar Task Performance across recipes in Table 3 likely results from this randomness. Thus, Backdoor Performance is crucial. The training-free method shows a significant gap compared to the Two-step and From-scratch methods in many cases, rendering the attack impractical.\n\n4. The writing in the paper requires further refinement. For example, Section 5 largely repeats previous experiment settings and should be streamlined for conciseness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper introduces a new security risk, LoRA-as-an-attack, which is straightforward to implement and therefore poses a realistic threat.\n\n2. The paper’s threat model sets out three goals and discusses potential trade-offs based on these goals, offering design insights for future attacks.\n\n3. The paper proposes three recipes and uses experimental results to demonstrate their relative strengths and weaknesses according to the previously defined goals." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel security risk called \"LoRA-as-an-attack,\" where an attacker uses a community-shared, backdoor-infected LoRA to compromise base models when users integrate it into their systems. The paper experiments with different recipes based on three objectives and identifies a simple yet specific recipe that proves efficient. Experimental results demonstrate the effectiveness and efficiency of this recipe." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper might lack some technical depth.\n\n2. The conclusion that \"FF-only\" is highly effective could be problematic.\n\n3. The writing in the paper requires further refinement." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The author writes the broad impact and potential ethical concerns in just one short paragraph without enough explanation of how to prevent misuse of such technology and how to defend against it." }, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns", "Yes, Potentially harmful insights, methodologies and applications" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weaknesses above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "+ LLM security is a timely topic" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies implementing backdoors in LoRA adapters to poison the LLM ecosystem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Limited novelty\n- Lack of broad impact discussion and ethical efforts\n- No baseline comparison\n- Lack of ablation study\n\nFirst of all, I can hardly find any new insights in this paper. For the technique, finetuning-based backdoor injection for LLMs was first introduced in [1]. This paper just replaces the instruction finetuning with LoRA, without studying how the backdoor can be made more stealthy or effective in the LoRA setting, e.g., by prioritizing selected modules. 
That would be a new insight for backdoors with LoRA, but I did not find that part. For the attack objective, advertisement and political bias have also already been explored in previous works [1,2]. Thus, the threat objective itself is not novel. As for the authors' claim that it could pose risks to LLM ecosystems such as Hugging Face, I partially agree that a \"stealthy backdoor in an LLM\" is a risk to the LLM ecosystem, but this is already known, and I cannot see why LoRA should be taken more care of than other LLM artifacts on Hugging Face that could also be injected with backdoors. For example, foundation models, quantized models, finetuned LLMs, as well as conversation datasets all have huge download counts while also being vulnerable to intended (and unintended) backdoors. The LoRA backdoor is just a very small part of this. Thus, it is no surprise that it could be injected with a backdoor, and it offers almost no new insight to me.\n\nSecond, the authors write the broad impact and potential ethical concerns in just one short paragraph without any meaningful discussion. Since it is an attack paper and the authors mention sensitive attack scenarios such as influencing voting, the broad impact and ethical concerns must be addressed, e.g., responsible disclosure, IRB, controlled release, and potential defenses.\n\nLastly, the backdoor is already a well-studied field in machine learning. As a research paper, it needs to compare with baselines. For example, since the attack success rate in Section 5 is far from 100%, would full-parameter tuning or virtual prompt injection have higher attack success rates with similar overhead? The lack of baseline comparison makes the experiments less convincing. 
Moreover, there's no ablation study of how the LoRA configuration and dataset size influence the backdoor performance.\n\n[1] On the Exploitability of Instruction Tuning\n\n[2] Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Are there any effective defense mechanisms against the attack method proposed in the paper?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is pioneering in highlighting the security risks of LoRA within a “share-and-play” ecosystem, demonstrating a forward-looking perspective.\n2. The proposed training-free merging method maintains high task performance while enabling widespread backdoor dissemination at minimal cost.\n3. The paper conducts extensive experimental evaluations across various LoRA target modules, providing broad coverage that validates the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the risk of backdoor attacks on large language models (LLMs) within a “share-and-play” ecosystem using LoRA (Low-Rank Adaptation) technology. 
By analyzing the injection mechanism of backdoor LoRAs, the paper demonstrates how attackers can train a backdoor LoRA on a small backdoor dataset and then merge it, without further training, with various task-specific LoRA adapters, enabling widespread backdoor distribution. The core idea presented, “LoRA Once, Backdoor Everywhere,” emphasizes that LoRA’s modular characteristics may introduce security vulnerabilities in certain scenarios. The paper also evaluates three different backdoor injection methods and conducts detailed experiments on module configurations for backdoor LoRA, ultimately finding that applying the backdoor LoRA solely to the feed-forward (FF) module strikes the optimal balance between performance and attack effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper’s argument for stealth is somewhat limited, as it only uses minimal changes in downstream task performance as evidence of stealth. It lacks more specific stealth metrics, such as trigger rarity and detection difficulty, which would provide a more comprehensive evaluation of the backdoor’s effectiveness.\n2. The experiments on trigger word diversity are somewhat limited, as only two trigger words were used for validation. It lacks comparative experiments across various trigger words to assess the method’s effectiveness, limiting a comprehensive evaluation of its generalizability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1.\tFF-only is offered as the sweet spot for backdoor Lora and used in the following experiments. However, as shown in Table 2, FF-only performs well on one type of trigger. Please clarify the selection criteria for FF-only.\n2.\tHow are “task performance” and “backdoor performance” measured and calculated?\n3.\tWhy is efficiency prioritized in the attack?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe paper focuses on a practical challenge: the risk that community-shared LoRA models may carry backdoors, which can propagate across various models. This is an interesting perspective.\n2.\tThe attack methodology is validated across multiple applications, including Commonsense Reasoning, Natural Language Inference, and MedQA.\n3.\tThe threat model is clearly stated.\n4.\tThe paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the vulnerability of community-shared LoRA adapters to backdoor attacks. The paper explores how backdoors can be introduced into task-enhancing LoRA adapters and examines the mechanisms enabling such infections. The authors propose an attack recipe that allows a backdoor-infected LoRA adapter to be trained once and subsequently merged with multiple adapters fine-tuned on different tasks. Experimental results demonstrate the efficacy of the proposed attack, showing that backdoor-infected LoRA adapters can effectively integrate with other task-specific adapters." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe novelty and the rationale behind the proposed method need to be further clarified. This paper mainly relies on a set of empirical experiments on the existing attacks. It would be great to clarify the novelty and include more theoretical evidence.\n2.\tIt is unclear why efficiency is the first priority of the attacks. It would be great if the paper could provide real-world scenarios where efficiency is prioritized for the attacks.\n3.\tThe attack performance in the experiments seems limited. Even with the best recipe, applying LoRA on FF, the attack performance only reaches around 50. What are the potential solutions to improve the performance?\n4.\tIt would be great to clearly link the proposed recipe in Section 4.4 with the experimental results in Tables 3-6." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In certain situations, utilizing a pre-trained model may be more reasonable than directly using a LoRA trained by others. Additionally, most researchers prefer to train their own LoRA models. Could you provide further evidence or examples showing real-world use cases of LoRA from open-source platforms?\n2. Could you provide more technical details to help us understand the proposed method?\n3. 
Could you provide more discussion comparing the LoRA-based backdoor attack with existing backdoor attacks?\n4. Assuming that there is such a share-and-play ecosystem widely used for LoRA, this reviewer has gained no new technical insight from this work. What potential directions could follow-up studies in this area take?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is the first to exploit LoRA as an attack by injecting a backdoor trigger.\n2. This paper is well-written and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper shows that LoRA can be used as an attack method by injecting a backdoor into it and then uploading it to a share-and-play ecosystem. It introduces a training-free method for easily creating malicious LoRA modules with minimal attack cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of novelty. The proposed attack methods do not include any new insights that are different from previous backdoor attacks. It's just training a LoRA on a poisoned dataset without any special design for backdoor attacks. The contribution is incremental.\n2. The motivation is unclear. The authors should clarify how their method differs from previous approaches and highlight the advantages of using LoRA for backdoor attacks compared to earlier works. Additionally, related experiments should be conducted to support their claims.\n3. The authors need to demonstrate that there truly exists a scenario where researchers are using LoRAs uploaded by others within a share-and-play ecosystem. If the LoRA is poisoned, the user can just use another LoRA. 
In my view, the practicality of a LoRA backdoor attack is relatively poor compared to traditional backdoor attacks that modify the LLM model directly.\n4. The authors didn’t present detailed formulas or concrete algorithms for the proposed method, for example: “Training-free Merging” and “Two-step Finetuning”. It is unclear how the attacks are performed in detail.\n5. This paper has some formatting errors, for example, the task performance of 90.60 in the last row of Table 3 not being fully bolded." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "The LoRA share-and-play ecosystem is convenient but exposes users to maliciously tampered modules. We demonstrate that such tampering can be distributed at scale with minimal effort, highlighting the need for urgent community awareness and action." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024attack,\ntitle={Attack on {LLM}s: Lo{RA} Once, Backdoor Everywhere in the Share-and-Play Ecosystem},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0owyEm6FAk},\nnote={under review}\n}" }, "abstract": { "value": "Finetuning large language models (LLMs) with LoRA has gained significant popularity due to its simplicity and effectiveness. Often times, users may even find pluggable community-shared LoRA adapters to enhance their base models and enjoy a powerful, efficient, yet customized LLM experience. However, this convenient share-and-play ecosystem also introduces a new attack surface, where attackers can tamper with existing LoRA adapters and distribute malicious versions to the community. \nDespite the high-risk potential, no prior work has explored LoRA's attack surface under the share-and-play context. In this paper, we address this gap by investigating how backdoors can be injected into task-enhancing LoRA adapters and studying the mechanisms of such infection. 
We demonstrate that with a simple but specific recipe, a backdoor-infected LoRA can be trained once, then directly merged with multiple LoRA adapters finetuned on different tasks while retaining both its malicious and benign capabilities; which enables attackers to distribute compromised LoRAs at scale with minimal effort. Our work highlights the need for heightened security awareness in the LoRA ecosystem. Warning: the paper contains potentially offensive content generated by models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LoRA", "PEFT", "LLM Safety", "Backdoor", "Backdoor Attack" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9136f25a45a52c31af13eaed6bf93124c06b6bbc.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Attack on LLMs: LoRA Once, Backdoor Everywhere in the Share-and-Play Ecosystem" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0pLCDJVVRD
A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language
main
Active
Emergence;Percolation;Formal languages
alignment, fairness, safety, privacy, and societal considerations
5;5;6;8
4;3;3;3
3;2;3;3
3;2;3;3
3;4;2;4
6
3.25
2.75
2.75
3.25
-0.471405
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Do you see similar phase transitions for language learning with smaller models or bigger models? In general, do architecture tweaks change the dynamics in non-trivial ways?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is extremely well written, and focuses on clearly understanding the phenomenon of emergence (albeit in the limited setting of language modeling of formal languages).\n- Explores a new setting of learning entity-type relationships, as percolation on a bipartite graph. I believe such a setting has not been explored before (though I'm not sure how it connects to emergence of skills / behaviors in transformers)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies emergent capabilities in transformers via two case studies. In the first study, they look at learning of formal languages (in particular, a language generated via a PCFG).
For this setting, they train GPT-2 sized models from scratch for:\n- left-to-right auto-regressive language modeling\n- an unscrambling task that requires the model to take a set of words and convert it into a valid string\n- a conditional generation task that requires the model to generate a sentence that has certain words in it.\n\nAs the model trains they track grammaticality (as measured by whether the model generates strings that the PCFG accepts), and if the generated strings follow type constraints. They break down learning into 3 phases, and find that these phases correspond to jumps in the downstream performance (either exact match acc for unscrambling, or loss for language modeling). \n\nIn the second study, they study concept acquisition where entities are associated with types. In particular, they model a concept matrix where row i corresponds to the ith entity, and column j corresponds to the jth type, and the ij entry in the matrix is the probability with which these are seen together. They then define a concept propagation matrix, and use connectedness properties of this propagation matrix to define phase changes. They find that analytic values of these connectedness properties correlate with whether the transformer learns specific concepts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Phases of learning: I’m not convinced by the learning dynamics story here. Just because the model can generate accurate sentences does not mean that it has acquired grammar. Understanding whether the model has acquired grammar has been studied previously in NLP: a better method to do this would be to create minimal pairs with one grammatical and one ungrammatical sentence, and check if the model assigns a higher prob to the grammatical sentence. Of course, the design of the minimal pair needs to be well thought-out, to eliminate shortcuts.
Here is an example of a minimal pair that checks if a model can produce the correct number for a verb:\n\nS1: The man who likes apples is here\n\nS2: The man who likes apples are here\n\nNot clear what is the point of the percolation model: This seems less about emergence of structure in the model, and more about how at a specific data setting, generalization can happen. I’m not sure what the analogy is between learning type constraints (which is a function of training time), and graph percolation (which is a function of the data properties |E| and |K|). But if the authors can clarify this, I'm happy to increase my score.\n\n\n\nNot clear what are new findings in this paper: \n- Many of the conclusions from this paper are also in Murty et al. 2024, who also train transformer language models on formal languages, and find emergence of the correct learning rule, with extended training. They also find that such emergence happens alongside the emergence of tree-structures in transformers.\n\n- Similarly, Chen et al. also have a very similar setting but with masked language models, and show that grammar is abruptly acquired, and such grammar acquisition has a causal relationship with downstream performance. \n- There’s also other work by Allen-Zhu et al., who train transformers on formal languages, and find evidence of learnability under some constraints." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Is the specific task (free generation/unscrambling/conditional generation) specified to the model somehow, e.g. with a special token?\n\nFor the unscrambling task, is the solution necessarily unique? If not, what's the justification for using exact match/average probability of valid tokens?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The phenomenon of emergence of model abilities with scale, and how suddenly this can occur, is of both scientific and societal importance, together with related questions about the transition from memorization to generalization. The paper studies these using a toy setup that is both similar enough to realistic setups to be interesting, but simple enough to be able to isolate and study both of these phenomena. The theoretical explanation using bond percolation is insightful and deserving of follow-up work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the emergence of abilities over the course of training in a transformer language model trained on a formal language. The authors identify distinct phases where different abilities emerge. They also study the point at which one of the abilities transitions from memorization to generalization, and show that this point empirically follows a scaling law that matches the theoretical scaling of bond percolation in a bipartite graph." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper makes claims about \"structures learned by the model\" (in Definition 1 and Section 5.1 Phase 1), but I do not think that these are really justified by the evidence in the main body of the paper, which only looks at performance metrics. There is some analysis of attention maps in Appendix F.6. However, the main evidence given there seems to be that there is increased sparsity at partially-trained checkpoints compared to initialization, and other qualitative claims that are hard to read off from the plots. It would be easier to tell if these were quantified, but my impression is that this evidence is rather weak. I also think that if this evidence were stronger, it should be in the main body of the paper, since it would be necessary to justify this prominent claim.\n\nThat being said, I think there is enough interesting material in the paper without looking at model internals, so my suggestion would be to remove or significantly de-emphasize these claims/this aspect of the paper.\n\nMore broadly, I found some of the opening discussion and the definition given in Section 2 a little unnecessary; it took up space that would have been better devoted to explaining the experimental setup and results more clearly, and perhaps covering more results that only made it into appendices. In my opinion it would have been enough to give the high-level motivation, instead of couching it in terms of a new definition that doesn't really add much (especially if the claim about structure in the model is removed).\n\nI also found that at times the presentation got too bogged down in formal details (e.g. Definition 2), and would have preferred to have seen a more accessible, plain-language explanation of things and simple examples, with formal details relegated to appendices for reference if necessary. At other times I found the exposition too rambling (e.g.
Section 5.1 Phase 3), and it would have been easier to follow if the main points had been separated out and made concisely (e.g. using bullet points / short headings).\n\nMore minor points:\n- In definition 1 (if you are keeping it), \"nonlinear\" could be confusing (e.g. quadratics are non-linear but still change gradually). Maybe you mean \"discontinuous\"? Or I would perhaps argue that the relevant thing is how gradual the change is (steepness of slope, even if it is locally linear).\n- In definition 2, I would have found it a bit clearer to say that S is a non-terminal symbol, and just say you start from S, instead of treating it as a special case and saying you first map S to other non-terminal symbols – like the definition in Appendix C. (Also, the definition in Appendix C looks messed up, you seem to be swapping between N and NT / Sigma and T, unless I am misunderstanding something.)\n- I found definition 3 hard to follow. E.g. \"Entities have unique identifiers associated with them to help define subjects and objects in a sentence\" - do you mean e.g. \"John\" will have certain attributes like \"tall\", \"brown-eyed\" etc.? Consider using plainer language and an example.\n- Line 227 \"Humans\" vs line 228 \"humans\" - inconsistent capitalization could cause confusion (I assume these are the same thing).\n- Line 260: For the indicator variable, maybe consider \\mathbbm{1} (from package bbm) instead of \\delta (though this is maybe just personal preference)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weakness part:\n\n1. Please justify the relationship with previous papers, and the reason why we can still believe the current LLMs have emergence. If the definition is different, please justify the reason why the new definition is equal or a proper approximation of previous ones.\n\n2. Please justify the use of formal languages, and what will happen if we do not train on well-typed formal languages. \n\n3. Please provide more physics intuition for the current emergence model. For example, during the freezing of water, the Gibbs free energy of water molecules changes, thereby affecting the intermolecular distance and, on a macroscopic level, the volume. We consider this process to be a phase transition." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Generally, this paper builds a bridge between LLMs and complex systems in physics. The paper uses phase transitions from complex systems theory to analyze the emergence of LLMs. This paper has the following strengths:\n\n1. This paper provides a clear definition of emergence, which is slightly different from previous papers, but it is more formal and general. Also, this definition helps further research the measurement of emergence.\n\n2. This paper trained the LLM on formal languages, which are generated from a strict grammar with type checking. It aligns with current research.\n\n3. The paper’s findings on emergence and phase transitions are potentially generalizable to other neural network models, not just Transformers trained on formal languages."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the phenomenon of \"emergence\" in neural networks, where a model suddenly acquires certain capabilities after reaching a critical data or computational threshold. The authors propose a new definition of emergence in neural networks, linking it to the acquisition of general structures within the data that drive sudden performance improvements in specific tasks. The authors experiment with Transformers trained on a context-sensitive formal language and observe that once the underlying grammar and context-sensitivity structures are learned, performance on various tasks improves dramatically. This phase transition in model learning is likened to percolation on a bipartite graph, where learning dynamics mirror phase changes. Their results suggest that emergence can be theoretically predicted by understanding the structure of the data-generating process, offering insights for regulating and anticipating the behavior of AI models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A previous paper [1] has already claimed that emergent abilities are a mirage. The paper does not clearly address contradictions with previous work: why does the phenomenon of emergence still occur in this study?\n\n2. The selection of formal languages, though very popular in recent research, is limiting: models not trained on formal languages still show good performance, so the observation is not convincing for such situations. \n\n3. In graph theory, diminishing marginal effects are quite common; however, there is no clear evidence linking this to the percolation model proposed in this paper. Many graph-theoretic functions exhibit properties such as submodularity, which is one of the reasons behind these phenomena.
The final emergence modeling presented in this paper is not entirely intuitive.\n\n[1] Schaeffer R, Miranda B, Koyejo S. Are emergent abilities of large language models a mirage? Advances in Neural Information Processing Systems, 36, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "There was a discussion about order parameters early on in the introduction, but this was then ignored until the last paragraph of the conclusion. Can you clarify how your definition of order parameters is different from/related to \"progress measures\" that others have proposed to study phase transitions (e.g. [3,4])?\n\n___\n[3] Barak, et al. Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit. 2022. (https://openreview.net/forum?id=8XWP2ewX-im)\n\n[4] Nanda, et al. Progress measures for grokking via mechanistic interpretability. 2023. (https://openreview.net/forum?id=9XFSbDPmdW)" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written and a pleasure to read. The paper seems to be also placed well in the context of previous and current related work on emergence.
Emergence is an interesting topic for the community, and this paper provides a nice background and definition for studying it in terms of training data. And, while the setting studied is simple, the findings are well supported by their experiments, and the appendix has well-detailed additional evidence." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies emergence of structure and capabilities of a small transformer throughout training on a formal language dataset.\nThey identify certain phase transitions correlate with the emergence of capabilities to do specific tasks.\nThey then propose a formulation to predict phase transitions where emergent capabilities, and find that it aligns well with the formal language toy setting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are other aspects of emergence that are not investigated here that need further study. This paper studies emergence over training data scaling, but they mention other axes (e.g. compute or parameter size) that I feel are also important to make more general claims regarding emergence. While the results in this paper are reasonable for the chosen setting, it is unclear whether they will hold in other settings and data choices.\n\nI also wanted to point out a few (recent) papers there are missing from related work, but seemed relevant. The first is Singh, et al.'s [1] work that studies phase transitions of learning subcircuits for in-context learning tasks. The second is Tigges, et al. [2]'s work, which studies how known circuits evolve in the Pythia suite of models over the course of training.\n___\n[1] Singh, et al. What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation. 2024. (https://proceedings.mlr.press/v235/singh24c.html).\n\n[2] Tigges, et al. LLM Circuit Analyses Are Consistent Across Training and Scale. 2024. 
(https://openreview.net/forum?id=1WeLXvaNJP)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0pLCDJVVRD},\nnote={under review}\n}" }, "abstract": { "value": "Increase in data, size, or compute can lead to sudden learning of specific capabilities by a neural network---a phenomenon often called \"emergence\". Beyond scientific understanding, establishing the causal factors underlying such emergent capabilities is crucial to enable risk regulation frameworks for AI. In this work, we seek inspiration from study of emergent properties in other fields and propose a phenomenological definition for the concept in the context of neural networks. Our definition implicates the acquisition of general structures underlying the data-generating process as a cause of sudden performance growth for specific, narrower tasks. We empirically investigate this definition by proposing an experimental system grounded in a context-sensitive formal language and find that Transformers trained to perform tasks on top of strings from this language indeed exhibit emergent capabilities. Specifically, we show that once the language's underlying grammar and context-sensitivity inducing structures are learned by the model, performance on narrower tasks suddenly begins to improve. We then analogize our network's learning dynamics with the process of percolation on a bipartite graph, establishing a formal phase transition model that predicts the shift in the point of emergence observed in our experiments when changing the data structure. 
Overall, our experimental and theoretical frameworks yield a step towards better defining, characterizing, and predicting emergence in neural networks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Emergence", "Percolation", "Formal languages" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/955c6ea77fad990dc5324c5030c52fd2b5942c1a.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/79e246fa60de5af044de2bb79403f89fa2eaaa21.zip" }, "title": { "value": "A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0pbxX2jatP
Measuring Free-Form Decision-Making Inconsistency of Language Models in Military Crisis Simulations
main
Active
Language Models;AI Safety;Natural Language Processing;Inconsistency;Transparency;Military
alignment, fairness, safety, privacy, and societal considerations
3;5;5
5;4;4
3;2;3
2;2;2
4;3;4
4.333333
4.333333
2.666667
2
3.666667
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Why was the wargame scenario chosen as the primary setting for examining inconsistency in high-stakes decision-making? Do you believe the inconsistency findings would generalize to other types of critical scenarios, or are they specific to the military context?\n2. In Section 5, why was a temperature of 1.0 chosen as the default setting?\n3. Could this research potentially inform new methods to improve LLM consistency in decision-making contexts?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The overall presentation is very clear and intuitive\n2. The experimental setups are rigorous - experiments are run with multiple LLMs, variations in prompt scenarios, and statistical controls to examine the consistency across different levels of temperature settings and prompt structures.\n3. Besides quantitative measures, the paper provides qualitative examples that provide valuable insights\n4. The prompts are fully provided for ease of reproduction" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the inconsistency of large language models (LLMs) in terms of ablations like sentence order and semantics when applied in war games.
The authors focus on measuring free-form response variability using BERTScore to quantify inconsistencies. The findings indicate that the LLMs exhibit considerable inconsistency in their responses, which raises concerns about their reliability in high-stakes decision-making contexts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper focuses on a very specific hypothetical military scenario. It’s also uncertain whether the observed inconsistency is unique to the wargame setup or would generalize to other critical decision-making applications. This might limit the generalizability to other high-stakes applications.\n2. The paper’s main innovation centers on using BERTScore as an inconsistency measure, which may not offer significant novelty in approach. \n3. The study also did not sufficiently compare this approach with other potential inconsistency measurements. \n4. The choice of a default temperature setting of 1.0 in Section 5 may not be appropriate, as it introduces significant response inconsistency by design.\n5. The comparisons are limited to a few closed-source LLMs.\n6. While the study demonstrates that LLMs can produce inconsistent responses, it would be more impactful if it included strategies for reducing such variability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- I appreciate the paper's main goal of investigating LLM inconsistencies in high-stakes military scenarios\n- I liked that the authors performed a validation of the inconsistency score using synthetic data with TruthfulQA, which also allows readers to calibrate on what score values mean.\n- The main experiments were well executed.\n- I liked the investigations into the prompt variations.\n- I also appreciated the disclaimer at the end of the introduction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an investigation of the consistency of LLM responses in a military strategy scenario. Authors invert the BERTScore measure of semantic similarity to compute an inconsistency score, which they validate in a synthetic free-form QA setting based on TruthfulQA. In the experiments, the paper shows that LLMs' answers are generally quite inconsistent in both types of generations for their scenario (initial, continuations). They also explore the effect of temperature on inconsistency, as well as the effect of prompt variations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper's main weakness is the conceptual problem definition.\n - I feel like in realistic high-stakes settings, the temperature should probably be set to 0, which is similar to greedy decoding. Authors should probably focus their experiments on that temperature, though I'm expecting very low inconsistency.
More importantly, authors should make a clearer argument for why t=0 should not be used in these high-stakes settings, or why t>0 should be studied.\n - Concretely, I feel like section 6 was most relevant to the realistic way that LMs should be used. I wish authors had expanded on such experiments, possibly exploring various types of rephrasing and exploring exactly how the inconsistencies were affected.\n- Second, I feel like the paper's results might be limited in terms of generalizability due to there being only one wargame scenario considered. I understand that this is a relevant vignette, but it would be very interesting to have at least 3-5 high-stakes scenarios (either all wargame or some other high-stakes domains like healthcare). This would ensure that the results aren't an artefact of the topics or domain chosen in the one scenario.\n- Third, I felt like the paper could use more in-depth analyses w.r.t. how model responses are inconsistent. The measure at hand is quite coarse-grained, and might not be able to capture more nuanced consistent/inconsistent outputs (e.g., LLM outputs offering two alternatives, one of which is similar between two outputs). Given the specific and high-stakes nature of the scenario, it'd be really useful to have more insights into how the outputs differ, as currently, the coarse-grained measure yields very little information about how inconsistencies should be mitigated (other than lowering the temperature).\n- Missing citations: http://arxiv.org/abs/2203.12258, https://arxiv.org/abs/2311.00059, https://arxiv.org/abs/2310.11324\n- L204: Authors mention a bi-directional entailment clustering method, but without more details, it seems very confusing why the authors mentioned that... I would remove that sentence or specify why they needed to test that method and why they didn't include the results in the final main text." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Besides inconsistency, is the proposed evaluation framework able to highlight other types of errors or discrepancies?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- the paper is well written and the methodology is easy to follow\n- with the increasing use of LLMs in different areas, the focus on high-stakes decision-making contexts is timely and of importance" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to examine how reliable LLMs are in high-stakes decision-making situations. For this, the authors conduct crisis simulations with free-form responses and use BERTScore to measure the inconsistencies across 5 LLMs. Across the experiments conducted, this study shows that there are inconsistencies even after fixing parameters like conflict anonymization and temperature, and shows that prompt variations can lead to greater inconsistency than temperature adjustments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The framework for measuring inconsistency uses only BERTScore.
This potentially limits the evaluation setting to the discrepancies found through semantic similarity, missing other forms of inconsistency.\n- There is no human evaluation or correlation with human judgement of the inconsistency score. \n- I understand that the objective of this study is to probe LLMs for inconsistency in this particular context. While this study underlines a problem, it does not suggest possible mitigations for the issue of inconsistency." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Using a metric that we test, we quantitatively measure free-form response inconsistency of LMs in a military setting and find they are prone to giving semantically inconsistent responses." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024measuring,\ntitle={Measuring Free-Form Decision-Making Inconsistency of Language Models in Military Crisis Simulations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0pbxX2jatP},\nnote={under review}\n}" }, "abstract": { "value": "There is an increasing interest in using language models (LMs) for automated decision-making, with multiple countries actively testing LMs to aid in military crisis decision-making. To scrutinize relying on LM decision-making in high-stakes settings, we examine the inconsistency of responses in a crisis simulation (\"wargame\"), similar to reported tests conducted by the US military. Prior work illustrated escalatory tendencies and varying levels of aggression among LMs but was constrained to simulations with pre-defined actions. This was due to the challenges associated with quantitatively measuring semantic differences and evaluating natural language decision-making without relying on pre-defined actions. In this work, we query LMs for free-form responses and use a metric based on BERTScore to measure response inconsistency quantitatively.
Leveraging the benefits of BERTScore, we show that the inconsistency metric is robust to linguistic variations that preserve semantic meaning in a question-answering setting across text lengths. We show that all five tested LMs exhibit levels of inconsistency that indicate semantic differences, even when adjusting the wargame setting, anonymizing involved conflict countries, or adjusting the sampling temperature parameter $T$. Further qualitative evaluation shows that models recommend courses of action that share few to no similarities. We also study the impact of different prompt sensitivity variations on inconsistency at temperature $T = 0$. We find that inconsistency due to semantically equivalent prompt variations can exceed response inconsistency from temperature sampling for most studied models across different levels of ablations. Given the high-stakes nature of military deployment, we recommend further consideration be taken before using LMs to inform military decisions or other cases of high-stakes decision-making." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Language Models", "AI Safety", "Natural Language Processing", "Inconsistency", "Transparency", "Military" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6e3ef0665eddc98dc4a413bf663f334258c43302.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/0338e87c59550af2608ad1ac9c0387b47b6b3781.zip" }, "title": { "value": "Measuring Free-Form Decision-Making Inconsistency of Language Models in Military Crisis Simulations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0py3h7pops
Will the Inclusion of Generated Data Amplify Bias Across Generations in Future Image Classification Models?
main
Active
model bias;image classification
interpretability and explainable AI
3;5;6;6
4;3;3;4
2;3;3;3
2;2;2;2
2;2;2;3
5
3.5
2.75
2
2.25
-0.408248
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you provide more detail on the human study in section 3.2, and the r% used?\n1. What is the impact of image quality from the expert-guided filtering in section 3.2? \n1. How well does the findings generalize to other dataset? For example, section 4.2 showed dataset bias does not amplify model bias. Do authors expect that to hold for other datasets, too?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper addresses a highly relevant topic by examining the implications of generated data on bias, which is essential for advancing our understanding of the gaps between generated and real data.\n- The iterative pipeline for incorporating generated data closely resembles real-world applications. 
By using datasets of varying complexity and models with different capacities, the study effectively explores different aspects of the problem, enhancing the generalizability of the findings.\n- The study provides noteworthy observations with good experimental support, such as the low correlation between dataset bias and resulting bias effects, the higher susceptibility of pre-trained models to integration bias, and insights into how different factors affect bias across datasets and models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the bias implications of incorporating generated data from generative models into the training of downstream tasks. The authors propose an iterative pipeline that repeatedly uses new generators to create additional images that enhance training. Their analysis spans three image datasets of varying complexity, exploring both performance improvements and bias impacts on downstream classification tasks. Bias is examined across different subgroups, covering both single and multiple bias interactions. Through this setup, they observe mixed trends in performance and bias effects across datasets of different complexities and model capacities. The paper concludes with a high-level discussion on potential root causes behind these varied results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper presents mixed findings across datasets and models but does not provide in-depth explanations for these variations. While section 5 includes some discussion on the root causes of observed behaviors, this analysis remains at a high level and is not well-supported or directly connected to the experiments in earlier sections.
The analysis would be more convincing with clearer connections to the results, reinforcing the paper’s claims with evidence from the experiments.\n- In Table 1, FID fluctuates for Color-MNIST and CIFAR10 after several rounds of data generation, while it increases substantially for HARD-ImageNet starting from the second iteration. This trend suggests a marked difference in data quality for HARD-ImageNet compared to the other datasets. However, the subsequent experiments focus primarily on how generated data impacts downstream performance and bias without addressing how this observed FID trend might influence these results. A discussion of how data quality (assessed by FID) could affect interpretations across the three datasets would enhance the clarity of the findings.\n- Some methodology details are lacking, making it challenging to fully understand and replicate the study. For example, in section 3.2, there is limited information on the design of the human study, the impact of expert-guided filtering on image quality, and the specific r% used.\n- The paper would benefit from some recommendations on the usage of generated data in generative models or downstream tasks, drawing on the insights from the experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1. How much synthetic data is added, i.e., what is the ratio p? Is there a rationale behind the choice of p?\n\nQ2.
Are there any experiments or preliminary results on tasks with a larger number of classes?\n\nQ3. How is CLIP used in filtering? Specifically, is it based on the similarity score between label texts and images?\n\nQ4. Would different losses lead to varying results?\n\nQ5. Any insights on how the conclusions could generalize to other tasks or other modalities?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1. The scenario proposed by the authors is highly relevant, as synthetic data is increasingly shared online and integrated into various domains. More studies on how synthetic data will affect model training across generations will be beneficial for the research community.\n\nS2. The experiments on the proposed dataset are extensive, e.g., w/ or w/o biased initialization, different base models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "With the growing prevalence of generative models, the authors raised the concern regarding model bias, particularly in the context of generation and retraining cycles. The authors then developed a simulation environment and conducted extensive experiments on image classification datasets to reveal how fairness metrics evolve across successive generations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. Lack of experiments on the choice of generative models. Various generative models can differ in behavior; the choice of model likely impacts sample quality and influences the outcomes of subsequent studies.\n\nW2. The motivation of the paper concerns future image classification and the role synthetic data plays in it.
With foundation models playing a dominant role, integrating settings such as synthetic data for transfer learning in classification would strengthen the paper, going beyond the current base case, which may lack scalability.\n\nW3. The experiments mainly target model bias within a self-consuming loop in the image classification domain. However, the conclusions/observations are not significant." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The authors reported the FID score for the augmented data from multiple generations in Table 1. It seems that the FID scores for unbiased colorized MNIST show a decreasing trend; biased colorized MNIST is more or less the same; CIFAR-20/100 decreases first and then increases; Hard ImageNet shows a sudden increase from 50.9 to 186.4. Can the authors explain the inconsistent changes here? Are there any implications from the observations? Also, it would be better if the authors could visualize the generated images from different generations to visually see the changes across generations.\n\nThe authors mentioned, “We manually partition the original dataset into multiple subgroups, where subgroups within the same class share similar semantics.” Can the authors explain more clearly how they defined and constructed the subgroups for each dataset?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is studying a new problem. In the era of generative AI, more generated content is available on the Internet. Training on the generated data may influence model performance. In this sense, the paper is studying a valid and important problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper attempts to analyze whether the inclusion of generated data alleviates the model bias. In the paper, the authors repeatedly train generative models on the generated augmented data and study its impact on multiple fairness metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper mostly conducts experiments on three datasets. Among the three datasets, Colourized-MNIST and CIFAR20/100 are very small datasets in terms of the resolutions and the number of data and classes compared to the existing image data. Moreover, the largest model used in the paper is ResNet50, which is a relatively small network when compared to SOTA models like ViT. This raises the concern of whether the observations from the experiments are still valid in realistic scenarios with large datasets and models. \n\nSome experimental results are hard to understand.\n- baseline performance. To my understanding, CIFAR20/100 has a smaller number of classes and should be an easier dataset to classify when compared to CIFAR100. However, the baseline performance of ResNet50 on CIFAR100 is around 80%; on CIFAR20/100 is around 50%. \n- The trends are inconsistent between models. For example, in Fig. 
6(a), ResNet50 and LeNet show an opposite trend in Equal Opportunity and Disparate Impact but the same trend in average accuracy and Maximal Disparity; VGG-19 remains unchanged for the tested metrics.\n- The trends are inconsistent between datasets. For example, compare the ResNet50 baseline between Fig. 5(a) on CIFAR-20/100 and Fig. 6(a) on CIFAR-100. Even though the datasets are similar, the ResNet50 baseline shows a totally different trend for the Equal Opportunity, Disparate Impact, and Maximal Disparity metrics. \n- Similar discrepancies are noted in other sections of the paper. The experimental findings do not appear to explain how the generated data affects model bias in general.\n\nThe conclusion is weak. Based on the observations, the authors conjectured that the model bias is affected by multiple factors, including the datasets, models, and data quality across generations. However, the authors did not provide clear experimental evidence or a solid theory explaining how these factors influence the model bias." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weaknesses section for what to improve."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors investigated an important question on a selected dataset. The future work can be extended to other hierarchical sub-groups and more datasets.\nLimited benchmark results are apparent and convincing" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Great:\nThe authors investigated an important question on a selected dataset. The future work can be extended to other hierarchical sub-groups and more datasets.\nLimited benchmark results are apparent and convincing.\n\nMissing:\nIt will be helpful to accept the claim with more extensive experiments and analysis.\nAccording to me, citations to the original work/s are missing.\nThe final section can be elaborated to cover the important findings that support the main problem.\n\nWithout additions:\nIt's a strong acceptance for a poster.\nWeak acceptance for the main track.\n\nExplanation:\n1. The experiments: If possible, I would like to see the results from other standard datasets or more subgroups for the dataset used.\n2. Citations: Some original works in the introduction section and later are missing citations. If the authors think they have cited all the necessary work, please put that in a rebuttal.\n3. Conclusion and future work: I think it can be improved or extended to connect with the problem and story." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It will be helpful to accept the claim with more extensive experiments and analysis.\nAccording to me, citations to the original work/s are missing.\nThe final section can be elaborated to cover the important findings that support the main problem." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024will,\ntitle={Will the Inclusion of Generated Data Amplify Bias Across Generations in Future Image Classification Models?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0py3h7pops},\nnote={under review}\n}" }, "abstract": { "value": "As the demand for high-quality training data escalates, researchers have increasingly turned to generative models to create synthetic data, addressing data scarcity and enabling continuous model improvement. However, reliance on self-generated data introduces a critical question: \\textit{Will this practice amplify bias in future models?} While most research has focused on overall performance, the impact on model bias, particularly subgroup bias, remains underexplored. In this work, we investigate the effects of the generated data on image classification tasks, with a specific focus on bias. We develop a practical simulation environment that integrates a self-consuming loop, where the generative model and classification model are trained synergistically. Hundreds of experiments are conducted on Colorized MNIST, CIFAR-20/100, and Hard ImageNet datasets to reveal changes in fairness metrics across generations. In addition, we provide a conjecture to explain the bias dynamics when training models on continuously augmented datasets across generations. Our findings contribute to the ongoing debate on the implications of synthetic data for fairness in real-world applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "model bias", "image classification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b09ff94db42aa7411aabbee903d0684559f207be.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Will the Inclusion of Generated Data Amplify Bias Across Generations in Future Image Classification Models?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0qexTTfnmH
ME-LORA: MEMORY-EFFICIENT BAYESIAN LOW- RANK ADAPTATION FOR LARGE LANGUAGE MODELS
main
Active
Large Language Models;Low-rank adaptation;Bayesian estimation;Fine-tune
foundation or frontier models, including LLMs
3;3;3;6
4;3;4;3
3;2;2;3
2;2;2;3
2;2;3;3
3.75
3.5
2.5
2.25
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see above for major concerns. Some minor comments and questions:\n\n- In Section 3, it would be better to recap BLOB's LoRA framework, then discuss the changes introduced by ME-LoRA (i.e., what is currently Section 3.1).\n\n- For the citations, please take a look at the ICLR template; citations should be in paranthesis, e.g.,\n\"the citation should be in parenthesis using \\verb|\\citep{}| (as in ``Deep learning shows promise to make progress\ntowards AI~\\citep{Bengio+chapter2007}.'').\" However, this is not the case for most such citations in the paper, e.g.:\n\"the posterior distribution of the model parameters is inferred rather than relying on point estimates Bishop &\nNasrabadi (2006); Wang & Yeung (2020)\"\n\n- Please define the vec(\\cdot) operator on line 181. Also on line 181, \\Sigma | A| should again be A^T \\Sigma A.\n\n- \"Direct computation of the KL Divergence between the prior and posterior distributions of W is nontrivial. 
Direct computation of the KL Divergence between the prior and posterior distributions of W\nis non-trivial.\"\n\n- \"3.2 EFFICIENT COMPUTATION OF FULL-WEIGHT KL DIVERGENCE\" <- This is Theorem 3.2 from the BLOB paper (the title is exactly the same as the title used therein).\n\n- Lines 215-230: \"we adopt a strategy analogous to BLoB, where\" <- This is called the reparameterization trick\n\n- \"However, with our proposed method, using such a simplistic prior variance can still lead to overfitting.\" <- Why? Are there experiments to demonstrate this? From the theoretical design of the KL-divergence in BLOB, it also hard to justify directly adding noise to the standard deviation \\sigma_p (how could this arise given the Gaussian prior set up of the KL-divergence?)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed method is straight forward and the computational savings, compared to BLoB, are immediately obvious. It is also commendable that the authors have undertaken the task of reproducing the methods and experiments from the original BLoB paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a small variant to the recent LoRA variant Bayesian Low-Rank Adaptation by Backpropagation (BLoB). BLoB adapts a variational Bayesian setting wherein the LoRA parameters of the A matrix are parameterized by Gaussian priors. Subsequently, BLoB makes the evaluation of the variational objective (i.e., the likelihood regularized by a KL-divergence term) efficient in practice by deriving the KL-divergence under assumed Gaussian priors, as well as incorporating flipout into LoRA for efficient sampling.\n\nThe introduced method, called ME-LoRA, near-directly adapts the BLoB framework. 
The main technical difference is the use of a full matrix C of rank r (the lower dimension), which acts as an intermediate matrix that is multiplied between the LoRA B and A matrices, i.e., W = W_0 + BCA. While BLoB includes one Gaussian per element of the A matrix (leading to two learnable matrices of size r x n representing the Gaussians' means and variances), ME-LoRA instead utilizes two learnable matrices of size r x r. The authors attempt to reproduce both the BLoB framework and the experimental setup from the BLoB paper, with ME-LoRA performing favorably on accuracy and negative likelihood tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed changes to BLoB are small contributions, which limit the potential impact of the work. There are also several important concerns, in particular:\n- In BLOB, for A \\in R ^ {r \\cross n}, each element of A is assumed to be an independent Gaussian, which is why the joint density is a product of the Gaussians. However, in your setup (line 176):\n> C ∈ Rr×r is Gaussian with mean M and standard deviation Ω, denoted as q(C) ∼ N (M, Ω),\n\nwhich would mean the distribution on line 177, i.e., q(Q) \\sim N(MA, \\Sigma | A|), is incorrect. For Q=CA, it should be\nq(Q) \\sim N(MA, A^T \\Sigma A). Why is there this discrepancy, and what does this mean for the results?\n- Most importantly, there are concerns regarding the degree to which the authors were able to faithfully reproduce both BLoB and the experiments of the BLoB paper. Firstly, the results in Table 2 are significantly different from those in the BLoB paper (in fact, BLoB is no longer state of the art on the majority of tasks). Secondly, key ingredients of the BLoB paper did not work under reimplementation, as noted on lines 305-301:\n> We re-implemented LAP and applied it to the MAP checkpoints. For BLoB, since no open-source\ncode was available, we replicated the approach based on the description in the paper.
To ensure a\nfair comparison, we made appropriate parameter adjustments. BLoB was only sampled once during\neach training, validation, and testing stage. The Flipout sampling technique and KL regularization\nfrom the original BLoB paper were not used in our replication, as they did not perform well. Instead,\nwe applied the KL regularization method from Me-LoRA.\n\nAs previously noted, it is commendable that the authors sought to reimplement the results from the BLoB paper, although the correctness of the reimplementation is a major concern. With the release of the BLoB code, I would hope the authors could better reproduce the experimental set up from that paper and better incorporate their method into the official BloB source (with Flipout and KL regularization).\nOfficial BLoB source (I understand this was just uploaded recently, I hope this aids the authors in their future efforts):\nhttps://github.com/Wang-ML-Lab/bayesian-peft" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The core idea of this paper is well-presented with a clear comparison against the original Bayesian LoRA framework.\n\n2. 
Comprehensive experiments demonstrate that Me-LoRA achieves a balance between effectiveness and efficiency when compared to state-of-the-art methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores Bayesian Low-Rank Adaptation (LoRA), a method known to reduce overconfidence in inference when data is limited. The authors introduce a memory-efficient variant, Me-LoRA, by performing sampling on a small-scale intermediate matrix rather than the low-rank matrix directly. Experimental results with LLaMA2 models demonstrate that Me-LoRA maintains both effectiveness and efficiency compared to the original Bayesian LoRA framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation behind the research problem is not clearly presented. The submission lacks an explanation of when overconfidence occurs in LLM inference, why this issue is critical, and how such overconfidence impacts the model's responses. These questions should be properly addressed.\n\n2. Essentially, Me-LoRA is an efficient variant of BLoB and is supposed to replicate BLoB's performance with reduced resource demands. However, it appears to fall short of BLoB in terms of ECE, a key metric for assessing overconfidence. \n\n3. The computational cost comparison in Table 6 is confusing. The backbone model requires at least 13GB (LLaMA2-7B) or 21GB (LLaMA2-13B) GPU memory, yet the memory usage reported in Table 6 is significantly lower than that of the backbone model. Additionally, the rank used in the efficiency comparison is missing.\n\n4. This submission seems to be incomplete in both content and presentation. Further revisions are recommended to enhance its clarity and comprehensiveness."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How much do we care about the relatively modest reductions in memory usage in Table 1, as compared to the very large memory cost of the model itself?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written, and has extensive experiments demonstrating that their method performs reasonably well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a somewhat more efficient approach for Bayesian LoRA in LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One weakness is in the odd presentation of Bayesian LoRA in LLMs. It would be far more preferable to present a historical overview, saying something like \n\n>Yang et al. 2023 [or other earlier work] introduced the notion of doing Bayesian inference over the low-rank adapters for fine-tuning LLMs. This had numerous advantages ... . However, Laplace inference, as used there had disadvantages ... . These disadvantages motivated the introduction of BLoB, which uses VI. 
We build on BLoB ...\n\nMe-LoRA only does Bayesian inference over C, and does MAP over A and B, which will likely reduce the benefits you might see from a fully Bayesian approach, and make it resemble more closely a non-Bayesian approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- It should be clarified that A, and B are not variational parameters, they do not fit into the ELBO defined in Eq. (3). They are often referred to as \"model hyper-parameter\".\n\n- Section 3.1 demonstrates the induced covariance matrix over the full-weight matrices, but does not go deeper into:\n1. Is that covariance matrix diagonal, low-rank, or of any structure; \n2. Why shall we care about this quantity? does the covariance matrix help us, e.g. better understand the landscape of LoRA fine-tuning, etc?\n\n- Has the author considered using Monte Carlo estimation to estimate the KL divergence rather than using closed form solution?\n\n- When performing VI, do the authors additionally utilize regularization such as weight decay or L2 regularization?\n\n- Why the prior is set on the full model weights rather than just on the low-rank components (Eq. 5)?\n\n- Why is ensemble worse than MAP in terms of accuracy? 
Does this mean ensembling could potentially harm performance?\n\n- Isn't a weight decay of 1e2 too large?\n\n- Why is the proposed method suboptimal in ECE?\n\n- How does the proposed method compare with Rank-1 BNN [1]?\n\n- How many Monte Carlo samples are used for Bayesian model averaging?\n\n- Why are the numbers in the first column of Table 3 different from the numbers in Table 2?\n\n- Why do we need the random noise epsilon on the prior? The results in Table 4 seem to be mixed.\n\n- What benefits can we get from this approach if we are in an open-ended generation setting? A huge body of LLM applications are open-ended generation tasks such as translation, summarization, etc.\n\n[1] Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The proposed method is simple and technically sound.\n\n- The closed-form computation of the KL divergence is carefully derived.\n\n- The experiments are conducted on a wide range of tasks, although mostly multiple-choice QA problems, to demonstrate the effectiveness of the method.\n\n- A reasonable amount of experimental detail is provided for reproducibility (though code is not provided with the submission)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper builds upon a recent work, BLoB, which applies black-box variational inference to LoRA during fine-tuning. 
This paper proposes a new method, Me-LoRA, which improves upon BLoB in terms of parameter count. In particular, instead of placing a variational distribution directly over LoRA's A and B components, which requires 2 * (r * n + r * m) parameters in total, it introduces a new component C of shape r by r and performs VI on C instead of directly on A and B. This reduces the total number of parameters to r * n + r * m + 2 * r * r, a significant reduction when r << n, which is the case for most LLMs. The paper then shows that the proposed approach performs comparably to or slightly better than BLoB, with smaller parameter overhead.\n\nOverall, I find the method technically sound and the experiments fairly convincing; the proposed modification to the existing approach is well-motivated and could be useful. However, the writing needs significant improvement (see weaknesses)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It would be nice if the references could be colored;\n\n- End of line 111 is missing a parenthesis?\n\n- The description of Eq. 3 is incorrect: it is the KL divergence between the variational posterior and the prior, not the posterior. Also, it's more common to call it the ELBO rather than the free energy in the Bayesian deep learning literature.\n\n- What is \\theta in Eq. (3)?\n\n- The definition of q(C) at line 178 is confusing: if M is a matrix, then q(C) should be a matrix normal; how could it have an r by r matrix as the covariance?\n\n- The math notation is also inconsistent: Eq. (4) has W in \\mathcal{N}, but q(C) does not.\n\n- Having a randomized prior is extremely weird and non-standard (Sec. 3.3, Eq. 8); it's also not stated what U(0, 1) is, which I guess is a uniform distribution.\n\n- It would be nice if the parameter count section could be summarized into a table for easier comparison. 
\n\n- Line 267, ' ..saved model checkpoints every 100 steps,...' it's nice to have experiment details presented but I don't think this piece of information is necessary.\n\n- Line 398, the authors mentioned Flipout, but did not provide reference nor any explanation.\n\n- Table 4 and 5 being put at the bottom of related work section is weird." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Memory-efficient Low-Rank Adaptation introduces a low-dimensional square matrix between the two low-rank matrices in LoRA and performs Bayesian modeling on this low-dimensional matrix." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024melora,\ntitle={{ME}-{LORA}: {MEMORY}-{EFFICIENT} {BAYESIAN} {LOW}- {RANK} {ADAPTATION} {FOR} {LARGE} {LANGUAGE} {MODELS}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0qexTTfnmH},\nnote={under review}\n}" }, "abstract": { "value": "Bayesian Low-Rank Adaptation (LoRA) has shown excellent performance in reducing the overconfidence of inference by large language models as it can accurately quantify the inference uncertainty. However, the general Bayesian LoRA technique requires huge memory as it fine-tunes three low-rank matrices with large size: two matrices have size of $n\\times r$ and the other has size of $r\\times m$, where $r$ denotes rank, and $n, m$ denote the size of input and output, respectively. The large amount of memory required by this technique precludes its practical applications especially for the cases with long input or output. Here, we propose a memory efficient Bayesian LoRA technique (called Me-LoRA) that needs only two low-rank matrices plus two small matrices with size of only $r\\times r$. 
The key idea of our approach is that we introduce a small matrix (with size $r\\times r$) to describe the variance estimates required by Bayesian LoRA, which is calculated through sampling two other small matrices. Compared with the general Bayesian LoRA technique, our approach reduces the memory requirement by nearly $\\frac{1}{3}$ as the rank $r$ is generally very small. Experimental results using both LLaMA-7B and LLaMA-13B models on representative data sets suggest that our approach achieves the same performance as the original Bayesian LoRA techniques and outperforms the existing approaches. In summary, the memory-efficient Bayesian LoRA presented in this study circumvents the challenge of high memory requirement and thus \npaves a new way to the practical applications of Bayesian LoRA in cases with larger input and output sizes." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Low-rank adaptation", "Bayesian estimation", "Fine-tune" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b2f149ec62a68d354760ca1fd47732a819ac8036.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ME-LORA: MEMORY-EFFICIENT BAYESIAN LOW- RANK ADAPTATION FOR LARGE LANGUAGE MODELS" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0qfIhtel8N
Liquid Dino: A Multi-Task Neural Network towards Autonomous Driving
main
Active
Autonomous Driving;Multi-task Learning;Advanced Driver-Assistance Systems (ADAS);Deep Learning
applications to computer vision, audio, language, and other modalities
3;3;3
5;4;4
2;2;2
2;2;2
2;3;2
3
4.333333
2
2
2.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Not applicable" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Justification on the novelty of the paper\n\nExperimental evaluation of another one or two datasets listed in the AIDE paper (Yang et al., ICCV 2023).\n\nIn the Introduction (Section 1), Could you clarify how the principles of Liquid Neural Networks are integrated into the model architecture? Specifically, how do these principles influence the adaptability and efficiency of the model in processing temporal data?\n\nIn Experiments (Section 4), how does the model handle varying temporal dynamics across the four tasks, especially in high-stakes contexts such as emotion and behaviour recognition? Are there specific adaptations for managing differences in the timing and frequency of events in each classification task?\n\nGiven the complexity of the multi-task model, are there specific measures or techniques implemented to make the decision-making process interpretable? For safety-critical applications like ADAS, understanding how the model arrives at its predictions is essential for building trust and accountability.\n\nWhat future enhancements do you envision for Liquid Dino? Specifically, are there plans to scale the architecture for additional tasks within autonomous driving, such as prediction and planning? How would the current model adapt to these expansions?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The \"Liquid Dino\" approach is well thought out, particularly in its use of multi-task learning for autonomous driving. The model integrates Convolutional Neural Networks (CNNs), DINOv2 (self-supervised learning), and Closed-Form Continuous-Time Neural Networks (CFC) to handle spatial, temporal, and unlabeled data. This hybrid architecture incorporates each component's unique strengths, creating a robust model well-suited for the complex, multi-modal requirements of autonomous driving.\n\nLiquid Dino tackles four tasks simultaneously: Emotion Recognition, Driver Behavior Recognition, Scene-Centric Context Recognition, and Vehicle-Based Context Recognition. This reflects the diverse demands of real-world driving. Validated on the AIDE dataset, a multi-view, multi-modal dataset that captures rich contextual data under realistic driving conditions, the model demonstrates strong generalization potential. This comprehensive dataset strengthens the relevance and impact of Liquid Dino’s experimental results.\n\nAchieving an overall accuracy of 83.79% and excelling particularly in Traffic Context Recognition (95.03%) and Vehicle Condition Recognition (84.76%), Liquid Dino outperforms existing models despite having an increased inference time of 8 milliseconds per frame. Its frame-by-frame processing, which avoids the need for sequence-based inputs, enables immediate, continuous monitoring, ideal for high-stakes ADAS applications.\n\nThe model's capacity to integrate driver behaviour, emotion, and environmental context enhances driver safety and experience, supporting the development of more responsive ADAS and autonomous vehicles." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents \"Liquid Dino,\" a multi-task neural network designed to improve the accuracy of advanced driver-assistance systems (ADAS) by classifying various driver states and contextual driving scenarios. The model addresses four classification tasks: Emotion Recognition, Driver Behaviour Recognition, Scene-Centric Context Recognition, and Vehicle-Based Context Recognition by using the visual data captured through multiple cameras both inside and outside the vehicle. AIDE dataset, a multi-view, multi-modal dataset specifically crafted for autonomous driving research is used to evaluate Liquid Dino against various state-of-the-art models.\nThe model consists of three components: Convolutional Neural Networks (CNNs) for spatial feature extraction, DINOv2 for self-supervised learning, and Closed-form Continuous-Time Neural Networks (CFC) for temporal processing. The architecture is designed to handle diverse data while maintaining high efficiency, with an overall average accuracy of 83.79%, outperforming other models, especially in Traffic Context Recognition (95.03%) and Vehicle Condition Recognition (84.76%).\n\nThe presented approach integrates existing methods, thus limiting its novelty. Moreover, it is evaluated using one dataset so justification/conclusion is questionable for a conference like ICLR. However, it promises that the model performs well within the real-time requirements of automotive systems, making it a promising approach for real-world ADAS applications. Additionally, the model's ability to capture driver emotions, behaviours, and contextual driving environments enhances road safety and driver experience, providing valuable contributions to the development of more reliable and responsive autonomous driving systems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness is the novelty. 
The approach is an integration of existing methods and reads like a technical report. \n\nThe second weakness is that evaluating the proposed approach on a single dataset does not justify its suitability. There are many datasets available for driver behaviour analysis; the list of datasets can be found in the AIDE paper (Yang et al., ICCV 2023). At least an evaluation on another one or two datasets would have made this paper stronger. \n\nIn the Introduction (Section 1), the term \"Liquid\" is introduced, suggesting the use of Liquid Neural Networks (LNNs). However, the methodology primarily combines CNNs, DINOv2, and CFCs, with minimal discussion of the specific role or implementation of LNNs. For a more accurate portrayal of the architecture’s functionality, further elaboration on LNN integration would be beneficial. This could be addressed by referencing the study \"Liquid Neural Networks: A Novel Approach to Dynamic Information Processing\" (https://ieeexplore.ieee.org/document/10466162), which explores LNNs' capabilities in handling dynamic data.\n\nIn the Experiments (Section 4), the model's performance is evaluated across tasks that have varying temporal dynamics. However, there is limited discussion of how the model adapts to these differences across tasks, which is crucial in applications where time-dependent accuracy is essential. Explaining the confusion matrix scores (Figure 3) for each task in detail would increase the explainability of the model.\n\nThe Results (Section 5) provide inference times, which indicate the model’s performance in real-time scenarios, but omit details on the computational resources required during training and deployment. This information is critical for understanding the model's feasibility in resource-constrained environments. \n\nIn the Discussion (Section 6), the model's performance is evaluated using the AIDE dataset. 
However, the paper lacks exploration into how well the model generalizes to other environmental conditions, such as different weather patterns or diverse geographic settings. Testing across various datasets or environments could address potential generalization issues.\n\nThe diagram (Figure 2) lacks detail in DINOv2 and CFC modules, with unlabelled arrows between them and CNN, making data transformations unclear. The CNN module omits critical parameters (e.g., kernel size, stride), while multi-view inputs are positioned too far from DINOv2. Missing data dimensions hinder understanding of transformations, and inconsistent module details (e.g., CFC as a single block) disrupt coherence. The absence of loss function indicators and missing representation of any feature fusion or attention mechanisms further limit completeness.\n\nLastly, in Future Work (Section 8), potential model enhancements are mentioned, but there is a lack of specific details on scalability. Addressing how the architecture can expand to include additional tasks would clarify its future applicability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Here are the questions based on the provided weaknesses:\n\n1. In the introduction, the authors mention the need for an advanced architecture to address complex requirements of modern driver monitoring systems. 
Could the authors elaborate on how Liquid Dino specifically overcomes these challenges to strengthen this motivation?\n\n2. To verify the generalizability of the model, would the authors consider extending the approach to other video-based driver distraction behavior recognition datasets, such as Drive&Act (Martin et al., 2019) and DAD (Kopuklu et al., 2021)?\n\n3. Given that Liquid Dino’s performance does not show a marked improvement over the DINOv2 baseline, could the authors clarify the specific contributions of the proposed method that account for any observed gains?\n\n4. Could the authors add a separate section detailing the implementation to provide clearer insights into the architecture, training settings, and parameters used?\n\n5. The proposed approach appears to derive its performance improvement largely from additional feature learning layers following the DINOv2 framework. Could the authors clarify any novel aspects of Liquid Dino beyond adding layers?\n\n6. Could the authors provide qualitative examples of failure cases to illustrate the limitations of Liquid Dino, as this would help clarify areas where the approach may need improvement?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is easy to understand and follow.\n\n2. The task focused by the authors are very important to the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose Liquid DINO to achieve more advanced driver assistant multi task learning. The proposed approach shows better performance on the Places365 dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
In the introduction section, the authors mention that we need a more advanced architecture to deal with the complex demands of modern driver monitoring systems. This motivation is not very convincing. The authors should explain why Liquid Dino can effectively overcome these complex challenges.\n\n\n\n2. The approach is only verified on one dataset, so the generalizability of the model is doubtful. Could the authors extend this approach to video-based distracted driver behavior recognition datasets, e.g., Drive&Act and DAD?\n\na. Martin, M., Roitberg, A., Haurilet, M., Horne, M., Reiß, S., Voit, M., & Stiefelhagen, R. (2019). Drive&act: A multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2801-2810).\n\nb. Kopuklu, O., Zheng, J., Xu, H., & Rigoll, G. (2021). Driver anomaly detection: A dataset and contrastive learning approach. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 91-100).\n\n\n3. The performance of Liquid Dino is not very promising compared with the DINOv2 baseline, so the contribution of the proposed method is doubtful.\n\n4. Lack of implementation details. The authors are encouraged to add a separate section introducing the implementation details.\n\n5. The proposed approach is not novel enough. It seems that the performance gain mainly comes from adding more layers for feature learning after the DINOv2 framework.\n\n6. Lack of failure case analysis. The authors are encouraged to add some qualitative samples of failure cases to illustrate the limitations of the proposed approach." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The related work section needs to be expanded to include relevant studies. In the third paragraph, fusion techniques are discussed, but this seems irrelevant as no data fusion is performed in this study.\n\n2. In Figure 1, the driver’s face is partially obscured by wires, and the eyes are not visible. How can meaningful features be learned with such images?\n3. Why are all images combined into a single frame? Wouldn’t using weight-sharing in the encoder allow for a better representation of learning from the images?\n\n4. What role do the external camera images play in the classification task? Does using three external cameras improve the model’s performance, or would a single forward-facing camera suffice?\n\n5. The methodology section does not present a cohesive description of the framework. The parts are divided into unrelated sections, and the motivation for using the CFC module is unclear.\n\n6. What is the rationale for including a CNN backbone after DiNOv2?\n\n7. Table 1 is not discussed in the text, and its purpose is unclear.\n\n8. The authors only report accuracy as the evaluation metric. F1 score and AUC should be included to provide a more comprehensive assessment of the framework's performance.\n9. There is no ablation study to support their design choices. \n10. 
It would be good for the reader if you provided layer wise details of your framework." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "They are trying to solve an important problem using just images." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors develop a method named Liquid DiNO, which uses images to classify emotion recognition, driver behavior recognition, scene-centric context recognition, and vehicle-based context recognition. The framework consists of three parts: the first is DiNOv2, the second includes a CNN backbone, and the third is a CFC module. They experiment on a single dataset containing images from three external cameras and one internal camera. The results are presented in terms of accuracy, with the authors claiming that their proposed method performs well." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors in this work attempt to classify specific driver behaviors using a complex framework. However, the paper requires substantial improvement to be considered for acceptance. Below are some of the main weaknesses:\n\n1. The related work section needs to be expanded to include relevant studies. In the third paragraph, fusion techniques are discussed, but this seems irrelevant as no data fusion is performed in this study.\n\n2. In Figure 1, the driver’s face is partially obscured by wires, and the eyes are not visible. How can meaningful features be learned with such images?\n3. Why are all images combined into a single frame? Wouldn’t using weight-sharing in the encoder allow for a better representation of learning from the images?\n\n4. What role do the external camera images play in the classification task? 
Does using three external cameras improve the model’s performance, or would a single forward-facing camera suffice?\n\n5. The methodology section does not present a cohesive description of the framework. The parts are divided into unrelated sections, and the motivation for using the CFC module is unclear.\n\n6. What is the rationale for including a CNN backbone after DiNOv2?\n\n7. Table 1 is not discussed in the text, and its purpose is unclear.\n\n8. The authors only report accuracy as the evaluation metric. F1 score and AUC should be included to provide a more comprehensive assessment of the framework's performance.\n9. There is no ablation study to support their design choices." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Liquid Dino is a novel multi-task neural network for autonomous driving , achieving superior classification accuracy in driver behavior and contextual recognition tasks." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024liquid,\ntitle={Liquid Dino: A Multi-Task Neural Network towards Autonomous Driving},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0qfIhtel8N},\nnote={under review}\n}" }, "abstract": { "value": "In the realm of advanced driver-assistance systems (ADAS) and autonomous driving, the accurate classification of driver emotions, behaviors and contextual environments is critical for enhancing vehicle safety and user experience. This study investigates the performance of various neural network architectures across four distinct classification tasks: Emotion Recognition, Driver Behavior Recognition, Scene-Centric Context Recognition and Vehicle-Based Context Recognition, all of which incorporate visual information captured through cameras. 
By utilizing camera-based data, we aim to evaluate how different neural architectures handle visual inputs in these diverse contexts, thereby exploring the robustness and generalization of each model to different real-world scenarios. We compare the performance of several state-of-the-art models and introduce a novel contribution that significantly improve classification accuracies in all areas. Our results demonstrate that the proposed Liquid Dino architecture achieves an overall average accuracy of 83.79\\%, outperforming other models in recognizing driver emotions, behaviors and contextual scenarios. These enhancements underscore the potential of our proposed methods in contributing to the development of more reliable and responsive ADAS." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Autonomous Driving", "Multi-task Learning", "Advanced Driver-Assistance Systems (ADAS)", "Deep Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f24c72cdba9846b545a131249f362d0b4bac3aff.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
}, "summary": null, "supplementary_material": null, "title": { "value": "Liquid Dino: A Multi-Task Neural Network towards Autonomous Driving" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0qrTH5AZVt
ConLUX: Concept-Based Local Unified Explanations
main
Active
local model-agnostic explanations;post-hoc XAI;concept-based XAI
interpretability and explainable AI
3;5;6
4;3;4
3;2;3
2;2;2
2;2;3
4.666667
3.666667
2.666667
2
2.333333
-0.188982
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "For each data point, the concept used can be different. Given that it comes from an LLM, there can even be stochasticity over it (it may also be worth mentioning the LLM temperature setting for this, if not already mentioned). Even otherwise, two very similar reviews can have different concepts chosen for explanation, with no theoretical guarantee on the LLM choosing similar predicates. This lack of control makes the explanation less robust. In theory, changing a token (may be an adversarially crafted one in practice) can change the entire explanation, as the choice of predicates is left to the LLM. One cat may be identified by its ears and a similar one by its eyes. Can you propose an experiment to study this robustness?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed framework is interesting and can be useful if validated rigorously. The paper explores two different modalities. The framework is applied across multiple established methods (LIME, Kernel SHAP, Anchor, and LORE)." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to use foundation models to discover concepts to augment methods like LIME to provide concept-based local explanations. The method is evaluated on sentiment analysis and image classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Concept discovery using an LLM is the key aspect of the proposed framework. This calls for a human evaluation to answer the question: are the concepts discovered by the foundation models indeed aligned with a human-understandable representation? Currently, it sounds like it is assumed that the prompt will take care of this. \n\n\nHow faithful is the backtracking from the perturbed concept space to the original space? Is this also done by the LLM?\n\nMethods like LIME create a local approximation of the original function to a human-understandable form and explain the decision. This is the reliable yet explainable part of the method. By using an LLM for discovering concepts (given each task and the sample), it becomes difficult even locally to explain the predicates. The decision-making is not fully explained, i.e., the choice of predicates cannot be explained. For instance, in case a poor predicate is chosen by an LLM, the user may be confused by the explanation. I think the proposed method, by using foundation models for concept discovery, makes the framework less reliably explainable.\n\nTo prove the robustness of the method, it would be useful to try intervention-based causal metrics, especially since the concept discovery is done by foundation models. C-insertion and deletion are often used to evaluate concept importance and fidelity in concept-based explanations.\n\nRobustness to the prompt: How robust is the framework to the construction of the prompt? 
Also, given the literature around prompt engineering, the framework could explore what might be the optimal prompting strategy for discovering human-interpretable concepts. \n\n\nThe paper proposes a new framework for explanation where concept discovery is done by foundation models. For the text modality, the experiments are restricted to sentiment analysis, and for the image modality, 1000 images from the ImageNet dataset are used for classification. The method calls for more experiments for validation. However, this is not the sole reason for the decision.\n\n\nThere is no baseline comparison. Though the method is a new paradigm, it could be compared with concept bottleneck methods or predicates selected by other methods. There could also be an ablation study on different LLM model combinations. Additionally, the authors could discuss concept bottleneck-based methods. What do you think about a method where task-specific concept bottlenecks are chosen by the LLM? This is not a weakness or a mandatory experiment for the text.\n\n\nMinor comments:\n\nL44: Please provide a citation, as not all visual explanation methods using attribution mapping follow this principle.\nL80: \"Moreover, we observe that ... across different tasks.\" Given the restricted number of tasks evaluated, it might be a good idea to tone down this claim.\nTeaser figure: I think this can be made more comparable if the attribution method is shown with certain thresholding. When shown post-thresholding, the viz can clearly demonstrate the benefit of the proposed method, even allowing for certain comparative evaluations later.\nL93: This is a tradeoff between reliability of explanation to prediction and understandability. If the model is indeed making a decision based on the token, pivoting the explanation on different concepts can change it. \nFigure 2: Comparison of LIME and ConLUX-augmented LIME; the ConLUX highlights the kid! Is this a good explanation? 
Of course, SAM performs a good segmentation of the image, providing an object-level explanation, but the attribution seems incorrect to me. Or did I miss something?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Don’t have much in terms of questions on the methodology itself, but a few conceptual issues stood out, as mentioned in the weaknesses.\n- It’d be interesting to see if the authors could run additional experiments with different scales of perturbation to make sure these fidelity gains are actually about better explanations, rather than just larger shifts." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- ConLUX shows promise in being adaptable to multiple existing explanation methods (LIME, SHAP, Anchors, LORE), broadening its application across varied model types.\n- ConLUX aims to make model behaviors more intuitive and accessible for end-users, addressing a limitation in feature-based explanations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- The paper introduces ConLUX, a framework designed to enhance model-agnostic explanation methods by transforming traditional feature-level explanations into concept-level ones. 
\n- The authors argue that mainstream model-agnostic explanation techniques often provide explanations based on low-level features that don't align well with model decision processes or user understanding, so ConLUX elevates explanations to the concept level.\n- The framework applies ConLUX to different explanation methods (LIME, Anchors, LORE, and Kernel SHAP) across text and image models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Major**\n- One key weakness of ConLUX, I felt, is that it shifts to concept-level perturbations, which are broader and may disrupt the local fidelity of explanations. Unlike small feature-level adjustments (e.g., word or pixel changes in LIME), concept-level changes can alter the input more drastically, potentially leading to explanations that do not accurately reflect the model’s behavior around the specific input instance. To investigate whether these concept-level perturbations maintain local fidelity, I suggest running controlled experiments comparing concept-level and feature-level perturbations. For example, the authors could measure fidelity loss or gain across a gradient of perturbation scales, allowing for a comparison between the fidelity of feature-level and concept-level explanations.\n- The paper relies on pre-trained models to extract high-level concepts but does not fully explore whether these concepts are consistently relevant across diverse domains. Variability in concept quality could impact the explanation's reliability. I recommend testing concept quality across datasets from different domains and introducing a metric or using existing ones like TCAV[1] to measure concept relevance and coherence within each domain. \n- Since ConLUX relies on pre-trained models to extract high-level concepts, it inherits any biases present in these models. 
This reliance could skew the explanations based on the biases embedded in the pre-trained models, which might limit the fairness and reliability of the generated explanations.\n- Observed improvement in fidelity metrics, such as AOPC, coverage, and precision, may partly result from the broader concept-level perturbations rather than genuinely enhanced explanation quality. Since larger perturbations at the concept level likely introduce more drastic changes to the model output, they could artificially inflate these scores, making the explanations appear more effective than they might be with finer, feature-level adjustments. To address this, the paper could benefit from controlled experiments using varying perturbation scales, comparing small and large concept-level shifts, to ensure that the fidelity improvements genuinely reflect enhanced interpretability rather than the impact of larger perturbations. I suggest implementing a normalized fidelity metric that adjusts for the magnitude of perturbations.\n\n**Minor**\n- A few typos in page 2 \"ConLUX-agumented LIME\" should be \"ConLUX-augmented LIME\"\n\n**References**\n1. Kim, Been, et al. \"Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav).\" International conference on machine learning. PMLR, 2018." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
I am not completely convinced by the strategy of using language models for text concept predicates. While I assume it probably provides the best performance and they are generally more than good enough for text-only tasks, they could still be prone to hallucinations in terms of incorrect concept extraction or during generations for perturbation. Did you consider some other method (maybe traditional topic modelling approaches) to validate its concept detection or perturbation outputs? In this sense I like the visual predicates a lot more than textual ones.\n\n2. I wonder if you considered an ACE (Ghorbani et al.) + LIME method to compare ConLUX against a concept-based reference where you use the activation space of an external encoder to cluster superpixels instead of the original model. The concepts can be defined as clusters of superpixels, as in the original method. \n\nOverall, comparing the strengths and weaknesses, I find the method to be just sound and strong enough that I would tilt slightly towards acceptance." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem setup, what the authors want to solve, and why, is quite clear.\n2. The core idea of using a concept-friendly representation for input and combining it with black-box explanation methods is simple and its positive implications are easy to see. \n3. The experiments are reasonably strong. They cover both text and images, with multiple black-box models, with positive results in all cases. Also, via the existing black-box explainers, the method can generate different types of explanations (attribution, counterfactual etc.)\n4. While it has its own weaknesses, the proposal to build visual concept predicates is a principled way that could be readily validated by a user if the concept extraction is incorrect. 
This aspect of simplicity should reflect positively in its application" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a concept-based local explanation method, ConLUX, that is model-agnostic. The authors essentially propose modality-specific concept representations of inputs (concept predicates). These representations also readily provide a procedure to perform perturbation on the predicates and subsequently the input. Combining these two, the method is able to augment the traditional model-agnostic approaches to provide concept-based explanations. The authors provide experiments on text (sentiment prediction) and images (classification) with multiple black-box models and explanation techniques and essentially show a clear improvement in terms of various forms of fidelity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Twice the authors describe (Fel et al. 2023) as using external knowledge to learn concepts. To my understanding, this is a wrong description. They propose a unified class of methods based on dictionary learning that is completely unsupervised. \n2. Typos/Errors: \n * Rednet (line 448)\n * line 351 should not be in past tense\n * Table 3 caption does not correspond to the table content\n3. The method seems only capable of extracting coarse visual concepts. It can also only admit concepts that can be represented as a segmentation mask. For text concept predicates, there is a potential risk of issues arising from using language models for concept detection and predicate-feature mapping. \n4. I felt a lack of examples/illustrations of visual explanations and any deeper qualitative insights the authors might have." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We have proposed ConLUX, a general framework that automatically extracts high-level concepts and incorporates them into existing local model-agnostic explanation techniques." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024conlux,\ntitle={Con{LUX}: Concept-Based Local Unified Explanations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0qrTH5AZVt},\nnote={under review}\n}" }, "abstract": { "value": "With the rapid advancements of various machine learning models, there is a significant demand for model-agnostic explanation techniques, which can explain these models across different architectures.\nMainstream model-agnostic explanation techniques generate local explanations based on basic features (e.g., words for text models and (super-)pixels for image models). However, these explanations often do not align with the decision-making processes of the target models and end-users, resulting in explanations that are unfaithful and difficult for users to understand.\nOn the other hand, concept-based techniques provide explanations based on high-level features (e.g., topics for text models and objects for image models), but most are model-specific or require additional pre-defined external concept knowledge. \nTo address this limitation, we propose ConLUX, a general framework to provide concept-based local explanations for any machine learning models. 
\nOur key insight is that we can automatically extract high-level concepts from large pre-trained models, and uniformly extend existing local model-agnostic techniques to provide unified concept-based explanations.\nWe have instantiated ConLUX on four different types of explanation techniques: LIME, Kernel SHAP, Anchor, and LORE, and applied these techniques to text and image models.\nOur evaluation results demonstrate that 1) compared to the vanilla versions, ConLUX offers more faithful explanations and makes them more understandable to users, and 2) by offering multiple forms of explanations, ConLUX outperforms state-of-the-art concept-based explanation techniques specifically designed for text and image models, respectively." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "local model-agnostic explanations", "post-hoc XAI", "concept-based XAI" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/eac8b10aa792bb7246ac58b5baa1411ac467787c.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b04c7a94a9ee5762ce18597f641b07b43a8136f0.zip" }, "title": { "value": "ConLUX: Concept-Based Local Unified Explanations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0quBGOPP5V
Deep ECG-Report Interaction Framework for Cross-Modal Representation Learning
main
Active
Multi-modal Representation Learning;ECG signal;Report Generation;Zero-shot Classification
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;3;6
5;5;5;4
3;2;2;3
2;1;2;3
3;2;3;2
3.75
4.75
2.5
2
2.5
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Methodology\n\nDoes incorporating textual reports in pre-training have an element of supervision?\n\nDoes the pre-training necessitate diagnostic reports for ECGs or can it also utilize ECGs when the reports are not available?\n\nAre the reports automatically generated or written by cardiologists?\n\nThe strength of the self-supervised pretraining approach is learning general features while incorporating textual reports limits the features to the scope of the reports and thus might limit the capabilities for future tasks outside the information provided in the reports. Can the author demonstrate that the learned features are not limited by the scope and bias of the reports?\n\nDoes the alignment loss in Equation 1 accommodate the situation where multiple ECGs have similar text reports?\n\nHow does random masking differ from random dropout?\n\nIs the performance for other approaches evaluated by the author or the original work since most models require a hyperparameter optimization for best performance?\n\nGeneral comments\n\nPage 1 line 29: rephrase “clinical cardiac conditions classification”.\n\nPage 2 lines (61-72): “Specifically, the ECG signal and …. as follows.” Please rephrase.\n\nPage 2 line 78: What is meant by “which can provide clinical semantics visually”?\n\nPage 2 line 91: “temporal and spatial correlation ship of ECG signals”. 
to “temporal and spatial correlations of ECG signals”.\n\nPage 2 line 140: Details and references for the text encoder and masking/reconstruction are missing in the methodology section.\n\nPage 4 line (164-194): Please provide references if equations 1 to 5 are derived from existing literature and indicate where there are novel concepts.\n\nPage 4 line (202-207): “We introduced” to “We introduce”\n\nPage 4 line (202-206): “Considering that the textual modality … order to provide more textual features”. Please rephrase to improve clarity and avoid very long sentences.\n\nPage 4 lines (206-208): “After completing …. corresponding report text.” Please rephrase\n\nPage 4 sec 3.2: What is meant by the decoded text and ECGs? Are these the reconstructions of the feature encoding or the corresponding aligned text? If encoding then how is it combined in the mixed-modal representation if the dimensions are different?\n\nPage 7 lines (360-362): “The experimental results …. for classification”. Not clear please rephrase.\n\nFigures: Figure captions need to be improved. \n\nFigure 1: Please explain the figure adequately in the caption.\n\nTables: Table captions should include the supervised task and the metric under observation.\n\nTable 7: Please change DERL to DERI.\n\n\nReferences:\n\n[1] Che Liu, Zhongwei Wan, Cheng Ouyang, Anand Shah, Wenjia Bai, and Rossella Arcucci. Zero-shot ecg classification with multimodal learning and test-time clinical knowledge enhancement. arXiv preprint arXiv:2403.06659, 2024." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The work is a natural extension of MERL[1], with more accurate zero-shot classification and the possibility of automatic report generation. The zero-shot classification performance shows significant improvement from [1]. 
The cross-modal decoders allow the additional capability of automatic report generation utilizing GPT-2 architecture." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The novel DERI approach enhances cross-modal representation learning by incorporating clinical report generation. The work extends the MERL approach [1], integrating multiple alignments and a report generative approach with a novel latent random masking module (RME). The novelty of the approach lies in not only aligning the ECG and report features but also decoding cross-modal features. The author demonstrated a performance improvement compared to other SOTA approaches, verified through supervised tasks on unseen data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper needs rephrasing to improve clarity and readability, especially in the methodology section. The training approach is not applicable to unlabelled ECG in the usual context and necessitates the availability of accurate diagnostic reports by a cardiologist. The performance is related to the quality, distribution, and context of these reports and may not extend to novel tasks outside the scope of diagnostic reports." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Is it possible to disclose what the electrocardiogram reports used as training data in this study are specifically like? Was it verified how diverse the content of these reports is in terms of natural language?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Deep Cross-Modal Interaction: Unlike previous methods that use shallow alignment between ECG and report features, DERI implements multiple alignments and feature reconstructions, fostering deeper interactions between ECG signals and report data. This approach strengthens the representation's ability to capture the clinical context, enhancing both accuracy and relevance for diagnosis.\n- Potential for Broader Clinical Integration: DERI’s architecture, designed to integrate additional data types like electronic medical records (EMRs), positions it well for broader application in clinical settings. This flexibility could make DERI a powerful tool for multi-modal clinical analysis in the future." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The study introduces DERI (Deep ECG-Report Interaction), a framework designed for cross-modal representation learning from ECG signals and clinical reports to improve the clinical relevance of ECG representations. Traditional ECG self-supervised methods focus on single-modal generative or contrastive learning but fail to capture the deep clinical semantics. DERI addresses these limitations by integrating ECG and report data more deeply." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In contrast to other modalities such as chest X-rays and pathological diagnoses, electrocardiogram reports have been mainly produced mechanically by diagnostic equipment for many years. Therefore, this study is more likely to be learning from waveform data and its correct labels, rather than two-modal learning of waveform data and its interpretation using natural language.\nThe scope of this study may be narrower than the general interest of the ICLR main conference." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please supplement the references and baseline methods [1-4] in the experiments, and fully discuss and compare their advantages, disadvantages, and innovations in the paper.\n\nPlease explain the parts that are overly similar to MERL and highlight the points of technological innovation." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-organized and generally easy to follow. The flow from the motivation behind the DERI framework to the detailed explanation of its architecture, followed by experiments and results, is logical and well-structured. 
The diagrams, particularly those illustrating the DERI architecture and its training process, are helpful in understanding the complex cross-modal interactions.\n\nThe technical descriptions, such as the use of multiple alignments, the RME module, and the integration of language models for report generation, are well-explained." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the Deep ECG-Report Interaction (DERI) framework, a novel method for cross-modal representation learning that combines ECG with clinical reports. This paper introduces cross-modal alignment for representation learning and RME module for enhanced ECG learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper shows a lack of understanding of related work, with many previous related articles not cited or discussed. The following articles [1-4] need to be added and discussed in the paper. \n\nSeveral methods from the referenced articles need to be used as baselines and compared in the experimental section, especially for the ECG report generation part.\n\nThere are many similarities between this paper and the article \"Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement (MERL),\" with a lack of innovation.\n\nFor instance, two of the losses used in the paper are almost identical to the ones used in MERL (CLIP loss and mask loss); the paper merely describes them in a different way. The report generation method is also quite similar to many multimodal approaches, such as the BLIP method, and does not represent true innovation. Moreover, these papers have not been cited.\n\nThe downstream task system is also very similar to MERL, except for report generation. However, many report generation baselines are missing from this paper.\n\n[1] Wan, Zhongwei, et al. 
\"Electrocardiogram instruction tuning for report generation.\" arXiv preprint arXiv:2403.04945 (2024).\n\n[2] Li, Jun, et al. \"Frozen language model helps ecg zero-shot learning.\" Medical Imaging with Deep Learning. PMLR, 2024.\n\n[3] Yu, Han, Peikun Guo, and Akane Sano. \"Zero-shot ECG diagnosis with large language models and retrieval-augmented generation.\" Machine Learning for Health (ML4H). PMLR, 2023.\n\n[4] Zhao, Yubao, et al. \"ECG-Chat: A Large ECG-Language Model for Cardiac Disease Diagnosis.\" arXiv preprint arXiv:2408.08849 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Appendix Section D, Table 12, the authors include various text models for report generation, including encoder-only models like MedCPT (desgined for text retrieval task). Using encoder-only models for report generation is questionable, as it conflicts with the mainstream approach seen in works such as MEIT [1] and RGRG [2], which typically utilize encoder-decoder or decoder-only models for text generation tasks. 
Encoder-only models are not designed for generative tasks like report generation, so their inclusion deviates from standard practices.\n \n- Regarding the computation of clinical efficacy in Section 4.2, several aspects need clarification: \n(1) **How is the prompt embedding obtained from the decoder?** If the decoder is used to obtain prompt embeddings, is it based on the representation from the [EOS] token? Clarification is needed on how the embedding is extracted from a decoder-only architecture.\n(2) **How is the classification probability for categories computed using a text decoder?** Does this refer to the highest probability assigned to the category name token? Some diseases (e.g., \"myocardial infarctions\") are tokenized into multiple tokens. If this is the case, how is the classification probability determined for multi-token categories?\n(3) **Handling multiple classes in ECGs**: The PTB-XL dataset shows that ECGs can belong to multiple classes simultaneously. If the authors use only the highest probability for classification, they may be reducing the prediction to a single class, which ignores other relevant conditions. Why are additional classes not considered in the evaluation?\n(4) **Discrepancy between prompts and generated reports**: The method uses a single-class description as the prompt for classification, whereas the generated report may describe multiple conditions associated with the ECG signal. There is a clear gap between the prompt (single class) and the report (multi-class). How is this gap addressed in the evaluation process?\n\n[1] Wan, Zhongwei, et al. \"MEIT: Multi-Modal Electrocardiogram Instruction Tuning on Large Language Models for Report Generation.\" arXiv preprint arXiv:2403.04945 (2024).\n\n[2] Tanida, Tim, et al. \"Interactive and explainable region-guided radiology report generation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Align multi-level features using contrastive loss, while also performing cross-modal reconstruction by using ECG signals to reconstruct text and text to reconstruct ECG. This approach enables learning mutual information between the two modalities. The framework is evaluated across multiple datasets and methods for comprehensive assessment. However, the compared baseline methods are limited, and the evaluation metrics are ambiguous." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the Deep ECG-Report Interaction (DERI) framework to address the lack of clinical semantics in ECG representation learning. \nBy integrating ECG signals with clinical reports using multi-level alignment strategies, DERI enhances cross-modal learning. It also incorporates a language model for ECG report generation, demonstrating superior performance across multiple datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The novelty is limited. In Section 3.2, the work uses cross-modal alignment with the original contrastive loss. Section 3.3's approach for cross-modal reconstruction is similar to what was proposed in MRM [1], even though MRM was originally for the image domain. The method in this work closely resembles MRM. Moreover, MRM uses features extracted from masked inputs to reconstruct the other modality, while DERI uses features extracted directly from the original inputs, which reduces the difficulty of the reconstruction task.\n- There is a lack of baseline and comparison in the report generation task. 
In MEIT [2], a comprehensive benchmark for ECG report generation is proposed and implemented on the MIMIC-ECG and PTB-XL datasets, both of which are also used in DERI. However, the authors do not compare DERI against any baseline from MEIT and only use GPT-2 as the text decoder, which is outdated, having been released in 2019.\n- The reproducibility issue is further compounded by the authors' apparent reluctance to share their code.\n- The report generation task is not implemented on PTB-XL. Since MIMIC-ECG is used for pretraining, evaluating solely on MIMIC-ECG does not sufficiently assess generalizability and robustness, as all the data is seen during pretraining. \n- The evaluation metric for ECG report generation is lacking. The evaluation metric for clinical efficacy is ambiguous.\n\n[1] Zhou, Hong-Yu, et al. \"Advancing Radiograph Representation Learning with Masked Record Modeling.\" The Eleventh International Conference on Learning Representations.\n[2] Wan, Zhongwei, et al. \"MEIT: Multi-Modal Electrocardiogram Instruction Tuning on Large Language Models for Report Generation.\" arXiv preprint arXiv:2403.04945 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel Deep ECG-Report Interaction framework for cross-modal representation learning is proposed for ECG classification and report generation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024deep,\ntitle={Deep {ECG}-Report Interaction Framework for Cross-Modal Representation Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0quBGOPP5V},\nnote={under review}\n}" }, "abstract": { "value": "Electrocardiogram (ECG) is of great importance for the clinical diagnosis of cardiac conditions. 
Although existing self-supervised learning methods have achieved strong performance in learning representations for ECG-based cardiac condition classification, the clinical semantics cannot be effectively captured. To overcome this limitation, we propose a $\\textbf{D}$eep $\\textbf{E}$CG-$\\textbf{R}$eport $\\textbf{I}$nteraction ($\\textbf{DERI}$) framework to learn cross-modal representations that contain more clinical semantics. Specifically, we design a novel framework combining multiple alignments and feature reconstructions to learn effective cross-modal representation of the ECG-Report, which fuses the clinical semantics of the report into the learned representation. An RME-module inspired by masked modeling is proposed to improve the ECG representation learning. Furthermore, we extend ECG representation learning with a language model to report generation, which is significant for evaluating clinical semantics in the learned representations and even clinical applications. Comprehensive experiments on various datasets with various experimental settings show the superior performance of our proposed DERI." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multi-modal Representation Learning", "ECG signal", "Report Generation", "Zero-shot Classification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0b37ca3f940edcf0f1ed9aeaa380b6e0e7afb029.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e34316e80ae775c4c5158a5446116c5ae2a03107.zip" }, "title": { "value": "Deep ECG-Report Interaction Framework for Cross-Modal Representation Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0rACj8JLAL
BOOD: Boundary-based Out-Of-Distribution Data Generation
main
Active
OOD detection;Diffusion models;Training data generation
datasets and benchmarks
5;5;5;6
4;4;3;4
3;3;2;3
2;3;2;2
3;3;2;3
5.25
3.75
2.75
2.25
2.75
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.List and compare the actual memory requirements of the proposed model.\n\n2.Further comparative studies on different perturbation strategies could be added to help understand the impact of each strategy on the quality of generated data, and to validate the performance variations of the BOOD method under different hyperparameters.\n\n3.Provide additional descriptions of Figures 2, 3, and 4 in the main text for a more comprehensive evaluation." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.BOOD is the first framework capable of explicitly generating OOD data around the decision boundary, thereby providing informative functionality for shaping the decision boundary between ID and OOD data.\n\n2.The paper is easy to follow.\n\n3.Experimental results on the CIFAR-100 and IMAGENET-100 datasets show that the BOOD method significantly outperforms existing SOTA methods, achieving substantial improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new OOD data generation framework that helps the model to more clearly distinguish ID and OOD data by generating OOD samples near the decision boundary. 
Specifically, this method identifies ID boundary features by minimizing perturbation steps and generates OOD features near the boundary through gradient ascent. Experiments on CIFAR-100 and IMAGENET-100 demonstrate the effectiveness of the proposed algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.BOOD requires calculating the boundary positions of numerous features and generating images through a diffusion model, which may be computationally time-consuming.\n\n2.The hyperparameter in the paper is crucial for synthesizing high-quality OOD features; it is recommended to provide the basis for its selection.\n\n3.The adversarial perturbation strategy is an important component; it is recommended to provide a comparative analysis with other perturbation strategies to help readers gain a more comprehensive understanding of the experimental setup.\n\n4.Descriptions of the images presented are lacking in the main text." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* How necessary is it to synthesize OOD data, as opposed to finding publicly available OOD data and seeing if training with them can generalize to unseen OOD data? 
How does BOOD compare with methods that use real OOD data for augmentation, such as [1]?\n\n* The method seems to involve various different hyperparameters, including pruning rate r, max perturbation iteration K, and regularization weight beta. How are they selected? If one applies BOOD to a new ID dataset, are there guidelines or general rules of how to select them?\n\n* Given that generation with diffusion models can be computationally expensive, it would be helpful to see more in-depth analysis on computation-performance tradeoffs (e.g. performance vs. the number of images generated per class). \n\n[1] Hendrycks, Dan, Mantas Mazeika, and Thomas Dietterich. \"Deep anomaly detection with outlier exposure.\" ICLR 2019." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* This paper proposes a new approach for synthesizing out-of-distribution data by performing adversarial perturbation and generating images along the ID boundary. The method is intuitively and technically sound.\n\n* Performance-wise, the gain over existing methods is significant on CIFAR-100 as ID. The synthesized images look reasonable visually as boundary cases. \n\n* The writing and presentation of the paper are clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes BOOD, a method for synthesizing out-of-distribution images that are closer to the boundary, for enhancing OOD detection performance. It first learns a image encoder whose feature space aligns with class token embeddings, and leverage it as a cosine classifier. Then it picks the images whose features need the fewest number of perturbation steps in the gradient ascent direction to change the cosine classifier’s prediction, and generates OOD images from their perturbed features. 
It then uses the generated OOD images to regularize the training of an OOD classification model. Experimental results show that BOOD outperforms a variety of existing OOD detection approaches on CIFAR-100 and ImageNet-100 as ID data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The method seems to be bounded by the capability of the stable diffusion model. In cases where ID data are very distinct from stable diffusion's training distribution, e.g. if the ID data is SVHN or texture, or some other domains like medical imaging, etc., or where the classification is very fine-grained, it is uncertain how effective the method would be.\n\n* The performance improvement on CIFAR-100 as ID data is significant but the improvement on ImageNet-100 is only marginal, although both datasets are natural images with 100 classes. This also somewhat raises some uncertainty about how much improvement BOOD can bring over the existing methods in general. It may be helpful to include more in-depth discussion or analysis on in which cases BOOD provides significant gains and in which cases its advantage over prior approaches is less obvious.\n\n* Minor point - there are several typos in the use of parenthetical vs. textual citations: e.g. L047, L179, L232" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weakness above." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.This paper proposes a novel boundary-based method for generating OOD data, leveraging diffusion models to identify ID data closest to decision boundaries and applying an outlier feature synthesis strategy to generate images located around decision boundaries. This approach provides high-quality and informative features for OOD detection.\n2.This paper is technically sound. The ablation experiments, hyperparameter analysis experiments, and visualization experiments are all comprehensive.\n3.This paper provides a clear and thorough introduction to the proposed methods and algorithmic procedures. The formulas and notations are well-explained, with detailed definitions for all symbols and terms used." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel framework, named Boundary-based Out-Of-Distribution data generation (BOOD). It first identifies the features closest to the decision boundary by calculating the minimal perturbation steps imposed on the feature to change the model's prediction. Then, it generates the outlier features by perturbing the identified boundary ID features along with the gradient ascent direction. These synthetic features are then fed into a diffusion model to generate the OOD images, enhancing the model’s ability to distinguish ID and OOD data. Extensive experiments show the effectiveness of their method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.One potential drawback is a notation conflict between the additional perturbation steps c (line 287-288) and the earlier use of C for the number of classes. 
This overlap in symbols could cause confusion, so it might be beneficial to change the symbol for one of these terms to improve clarity.\n2.In Table 2, the comparison with state-of-the-art (SOTA) methods could be enhanced by including more recent methods from 2024. This would better highlight the advantages and relevance of the proposed approach in the context of the latest advancements.\n3.A limitation of the hyperparameter sensitivity analysis is that it could benefit from experimenting with a wider range of values to better demonstrate the rationale behind the chosen settings. Additionally, more intuitive visualizations could be provided to clearly illustrate the improvements of the proposed method over previous approaches." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Clarification is needed regarding the sensitivity of the method to the hyperparameter r. An exploration of this sensitivity, perhaps through a sensitivity analysis, would provide valuable insights into the robustness and reliability of the proposed approach under varying conditions.\n2. The method performs significantly worse than NPOS on the OOD dataset Textures, as indicated in Table 2. An explanation for this performance discrepancy would be beneficial. The authors could analyze specific characteristics of the Textures dataset or aspects of their method that may contribute to this outcome." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- It introduces a new outlier synthesis through selecting out samples close to decision boundaries and distorting them. Outlier samples can be more easily synthesized from these samples compared to other samples.\n- Extensive experiments are conducted to validate the effectiveness of the proposed method and core technical components such as the sample selection strategy.\n- The paper is well written with clear structure and smooth logic, making it easy for readers to understand its ideas and algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on addressing the OOD detection task by synthesizing outlier samples. To achieve the synthesis of reasonable outlier samples, it first selects out the samples which reside near to boundaries, and then apply the adversarial attack to perturb features of these samples until their classes are changed. Finally, it applies the diffusion model to generate outlier samples from those perturbed features, which are used for training the OOD classifier. Experiments on various datasets demonstrate that the proposed method achieve better performance than existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The rationale for using adversarial attacks to perturb sample features remains insufficiently justified. Perturbing features to alter their class identities might unintentionally transform them into samples of other in-distribution classes. To address this concern, the authors should provide theoretical or empirical evidence demonstrating that their perturbation method reliably generates features distinct from existing classes. 
Additionally, a comparison with alternative perturbation strategies would help clarify the unique benefits of the proposed approach.\n2. The inquiry into the performance of random feature perturbations, such as adding Gaussian noise or displacing features away from class centroids, is highly relevant. To make this critique more actionable, I recommend requesting an ablation study comparing the proposed perturbation method against these simpler alternatives. Such an analysis would provide concrete evidence of the theoretical and empirical advantages of the method.\n3. The paper lacks sufficient detail on the architectures of the image encoder and the OOD classification model. For replication purposes, it is essential to include specifics such as the number and type of layers, activation functions, and other relevant parameters. A detailed description of these aspects would significantly enhance the reproducibility of the proposed algorithm.\n4. There is an error in Equation (2), where the denominator should correctly be '\\Gamma(y_j)^Tz'. While this observation is helpful, I suggest the authors conduct a thorough review of all equations and mathematical notations throughout the manuscript to ensure accuracy and consistency." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes a novel framework called Boundary-based Out-Of-Distribution data generation (BOOD), which synthesizes high-quality OOD features and generates human-compatible outlier images using diffusion models." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024bood,\ntitle={{BOOD}: Boundary-based Out-Of-Distribution Data Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0rACj8JLAL},\nnote={under review}\n}" }, "abstract": { "value": "Harnessing the power of diffusion models to synthesize auxiliary training data based on latent space features has proven effective in enhancing out-of-distribution (OOD) detection performance. However, extracting effective features outside the in-distribution (ID) boundary in latent space remains challenging due to the difficulty of identifying decision boundaries between classes. This paper proposes a novel framework called Boundary-based Out-Of-Distribution data generation (BOOD), which synthesizes high-quality OOD features and generates human-compatible outlier images using diffusion models. BOOD first learns a text-conditioned latent feature space from the ID dataset, selects ID features closest to the decision boundary, and perturbs them to cross the decision boundary to form OOD features. These synthetic OOD features are then decoded into images in pixel space by a diffusion model. Compared to previous works, BOOD provides a more efficient strategy for synthesizing informative OOD features, facilitating clearer distinctions between ID and OOD data. Extensive experimental results on common benchmarks demonstrate that BOOD surpasses the state-of-the-art method significantly, achieving a 27.9\\% decrease in average FPR95 (40.31\\% vs. 12.47\\%) and a 7.2\\% improvement in average AUROC (90.15\\% vs. 97.34\\%) on the Cifar-100 dataset." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "OOD detection", "Diffusion models", "Training data generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b199cf99517ebcf42b182af077f1692c13c22702.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f6745b4ae23eded50d8ffe536bb3e2a1bb3e6901.zip" }, "title": { "value": "BOOD: Boundary-based Out-Of-Distribution Data Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0rS9o1uKqu
Training-Like Data Reconstruction
main
Active
Network Inversion;Interpretability;Privacy;Training Data Reconstruction
alignment, fairness, safety, privacy, and societal considerations
1;3;3;3
5;3;3;4
3;2;2;1
1;2;2;2
2;2;2;2
2.5
3.75
2
1.75
2
-0.870388
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": ">While these attacks have been demonstrated in controlled settings, where models are typically over-parameterized or overly simplistic, the risks associated with sharing models trained on large, complex and multi-class datasets are yet been fully explored.\n\nSome of the related work you mentioned did actually consider some of these factors. You may want to be more precise here.\n\n>For under-parameterized models, where there is no possibility of memorization\n\nThis is an overstatement." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The related work presents an interesting connection between reconstruction attacks and works from the '90s.\n- The method is fairly presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new method for reconstructing data that resembles the training dataset of an ML model. The method is based on two steps: inversion, where one learns the space corresponding to different classes, and reconstruction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper defines no metrics to evaluate the effectiveness of the method. 
The only results that are shown are reconstructed images.\n- No empirical comparison is provided against state-of-the-art methods (or any methods, for that matter). Unfortunately, this makes it impossible to judge how much better the method is with respect to prior work.\n- It is unclear what the main goal of the reconstruction attack is. In the literature, there are two: 1) reconstructing one (or more) images that look as close as possible to the original image (e.g., see Balle et al. 2022 as referenced by the authors), and 2) reconstructing images that resemble data from a certain label (e.g., (Fredrikson et al., 2015)). Based on the results (Fig 3-4), it seems to me that the proposed attack is trying to do the former but is achieving the latter.\n\n\n(Fredrikson et al., 2015) \"Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures\". You may want to include this reference, which predates (Yang et al., 2019) in terms of model inversion attacks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Under what conditions are the attacks more successful versus not?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The method extends inversion techniques from simpler architectures to CNNs, demonstrating its potential with complex datasets and classifiers.\n\nThe incorporates a few types of losses including Cross-entropy, KL divergence, cosine similarity, and feature orthogonality losses that may be useful to reconstruct training like data.\n\nDemonstrates the model's effectiveness across multiple datasets, highlighting privacy risks in different scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a network inversion approach to reconstruct training-like data from trained models. The paper modifies the inversion process to incentivize the generator to reconstruct training like data by exploiting several key properties of the classifier with respect to the training data. For example, the classifier is expected to be relatively more confident and robust in classifying training samples, the gradient of the classifier output with respect to the classifier's weight is also expected to lower for the training data than for the random inverted image." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper assumes because training data possess these properties, we can use these properties to reconstruct the training data, which may not be theoretically right. For example, A => B does not mean B>A. \n\nThe paper also do not have any formal metrics on how successful the reconstruction attack is. 
\n\nThe paper also does not provide clear details about the experimental setup, such as details about the models, dropout rates, and so on, and does not discuss how successful the attack can be for different types of models and architectures.\n\nThe paper may benefit from a more detailed discussion about which loss is more important to the reconstruction attack." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What exactly is the input to the generator? You discuss four approaches - label, vector, intermediate matrix and vector-matrix conditioning.\nAdditionally, if you only use vector conditioning, do you simply sample from a Gaussian, softmax it, and then feed this to the Generator?\nAlso, in Figure 1, the input to the generator seems to be a latent + conditioning vector. What is the source of the latent vector?\n\nWhat exactly is the cosine similarity between? It says on Line 319 that \"cosine similarity between features of generated images i and j\". Where are the two images coming from?\n\nTypos and grammatical errors\n\nLine 219 has a typo ' a diverse data distributions'\n\nLine 234 - 'we given its simplicity'\n\nLine 255 - for a encoding the label\n\nLine 240 - learnt off of the labels each representative of the separate classes\n\nOther \n\nIs the claim on Line 256 an observation you made during your research or a previously known fact from the literature?
If so, please cite the relevant literature.\n\nDo you think you could use a more powerful generative model, i.e., a diffusion model?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main aim of the paper is quite clear to understand. The authors lay out the techniques they use to design their inversion network and explain the three desired properties they would want their generative model to have quite well. If their generative model is able to produce high-confidence samples which have some room for perturbation (i.e., some perturbations in the input space do not produce wildly different output distributions from the classifier) and have low gradient norm, then their model has a better likelihood of producing realistic samples. I also liked their use of vector and matrix conditioning techniques, which I believe are useful for generating controlled samples from generative models. The authors also provided a very comprehensive literature review in the space of model inversion based attacks, which is well appreciated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present a technique called Training-Like Data Reconstruction (TLDR) which can essentially create samples which are similar to the data which a neural network based classifier was trained on. Their technique is based on learning a generative model which can take in label encodings and produce outputs in the image space by using up-convolutions. The signal for training the generator comes from the classifier itself. Essentially, the authors come up with a loss function which encourages the generator to produce samples which the pre-trained classifier will classify into the same class as that of the conditional label provided to the generator.
The loss function is made up of several regularizations and terms which encourage the generator to produce images which look similar to the training data. Evaluation is done on CNN-based classifiers and 4 datasets including CIFAR-10 and MNIST." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper suffers from several weaknesses which I believe can be addressed.\n\nFirstly, the experimental evaluation is very limited. The authors only perform evaluation on MNIST, FMNIST, SVHN and CIFAR-10. These are very simple datasets and we do not know if this technique will be applicable to more complex datasets such as ImageNet or MS-COCO where fine-grained details need to be captured. Secondly, there is no quantitative evaluation of how well the reconstructed images match the original training data. The authors only presented visual samples. Third, evaluation was only done on CNN-based classifiers and it would be interesting to know if this technique can perform well on more modern architectures like ViTs.\n\nAnother concern I have about the paper is the complexity of the TLDR scheme. In total, there are 9 types of loss functions including KL divergence, cross-entropy, variational losses, feature orthogonality, cosine similarity, etc. There is no clear understanding of the impact of each type of loss function and whether they are all necessary. I would have liked to see some ablation studies or theoretical justification for such a complex scheme.\n\nFinally, I am concerned about the quality of the reconstructions themselves. CIFAR-10 is the most complex dataset they perform their evaluation on and many of the samples are hard to parse. The inversion scheme does not capture color / contrast very well and my understanding is that this inversion is only done to get a feel for what the training data was and not to be able to steal confidential training data and reuse it."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "None." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-motivated, the presentation is mostly on the positive side and mostly well-written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a novel approach to reconstruction of training data from ML models using an inversion-based attack. The attack is evaluated on a number of CV benchmark datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The biggest scientific weakness of this work is its impact. We have previously seen dozens of papers on model inversion and data reconstruction using various techniques ranging from inversion networks (e.g. [1, 2]) to gradient reconstruction ([3,4]). Note that these cover collaborative learning in most cases, as authors have already linked the relevant centralised settings in their related work. There is hardly anything new about the approach proposed here or the results obtained in this work. 
The datasets used here are the basic toy datasets that have previously been inverted multiple times using the techniques I linked above across a very large number of training settings and model architectures (mostly beyond the basic CNN architectures). While novelty is not the only factor that we are looking for when assessing submissions, I can hardly see any additional insights, unexplored work directions or interesting findings either. Almost everything in this work has previously been studied (e.g. the attack method, the use of priors, combination of multiple reconstruction factors etc.) in great detail and I do not see how the community benefits from this work.\n\nIn terms of more addressable concerns I have: the paper is not well-presented given the tight space constraints. With only 9-10 pages, authors should really concentrate on new methods, novel results and discussion. Currently, the introduction (which is in my view really inflated for no particular reason) takes 2 pages. This is not to mention the 2 pages of basic ML vocabulary, where each term has its own paragraph with margins (e.g. you do not need to explain what a cross-entropy loss is at NeurIPS). This adds up to about 4-5 pages of superfluous content. And given my criticism of the novelty and the impact of the work and its findings, this is exactly where this extra space should have gone - to explore the method in more detail, show novel insights etc. There are also 2 diagrams which I find to be relatively similar and they take about a page as well without adding much to the content (i.e. one would suffice, two are too much). \n\nWhile this may not be the comment the authors expected to hear, I would encourage them to concentrate on a) extracting as many novel scientific insights from their method as possible and b) restructuring the work so these results are clear to the reader. This would make the paper useful for the community and acceptable for publication. \n\n[1] - Usynin, Dmitrii, et al. 
\"Zen and the art of model adaptation: Low-utility-cost attack mitigations in collaborative machine learning.\" Proceedings on Privacy Enhancing Technologies (2022).\n\n[2] - He, Zecheng, Tianwei Zhang, and Ruby B. Lee. \"Model inversion attacks against collaborative inference.\" Proceedings of the 35th Annual Computer Security Applications Conference. 2019.\n\n[3] - Geiping, Jonas, et al. \"Inverting gradients-how easy is it to break privacy in federated learning?.\" Advances in neural information processing systems 33 (2020): 16937-16947.\n\n[4] - Boenisch, Franziska, et al. \"When the curious abandon honesty: Federated learning is not private.\" 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P). IEEE, 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "In this paper, we propose a network inversion-based approach to reconstruct training-like data from trained machine learning models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024traininglike,\ntitle={Training-Like Data Reconstruction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0rS9o1uKqu},\nnote={under review}\n}" }, "abstract": { "value": "Machine Learning models are often trained on proprietary and private data that cannot be shared, though the trained models themselves are distributed openly assuming that sharing model weights is privacy preserving, as training data is not expected to be inferred from the model weights. In this paper, we present Training-Like Data Reconstruction (TLDR), a network inversion-based approach to reconstruct training-like data from trained models. To begin with, we introduce a comprehensive network inversion technique that learns the input space corresponding to different classes in the classifier using a single conditioned generator. 
While inversion may typically return random and arbitrary input images for a given output label, we modify the inversion process to incentivize the generator to reconstruct training-like data by exploiting key properties of the classifier with respect to the training data. Specifically, the classifier is expected to be relatively more confident and robust in classifying training samples, and the gradient of the classifier’s output with respect to the classifier’s weights is also expected to be lower for training data than for random inverted samples. Using these insights, along with some prior knowledge about the images, we guide the generator to produce data closely resembling the original training data. To validate our approach, we conduct empirical evaluations on multiple standard vision classification datasets, demonstrating that leveraging these robustness and gradient properties enables the reconstruction of data semantically similar to the original training data, thereby highlighting the potential privacy risks involved in sharing machine learning models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Network Inversion", "Interpretability", "Privacy", "Training Data Reconstruction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c66d73e577d9a978703daa6453138c1817655849.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e156dbff6dfb6011b1763a08003d18333a962102.zip" }, "title": { "value": "Training-Like Data Reconstruction" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0rmOx0Ifbf
Controlled Generation of Natural Adversarial Documents for Stealthy Retrieval Poisoning
main
Active
Dense Retrieval;Corpus Poisoning;Adversarial Attack
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5;5
5;4;4;3;4
2;2;3;2;2
2;1;2;2;3
3;1;2;2;3
4.2
4
2.2
2
2.2
-0.645497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "(6) Why not use a weighted sum of embedding similarity and perplexity instead of introducing an extra model?\n\n(7) Why are only true positives and false positives considered for defense? Would false negatives not be equally important?\n\n(8) Is the LLM naturalness evaluator used during the attack aligned with the one used in the evaluation?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed method is straightforward and easy to follow. \n\nExperiments are conducted on a recent dataset and compared against current baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a beam-search-based adversarial attack method for RAG, designed to produce fluent text with sentence embeddings closely matching a target embedding. Experimental results demonstrate that this approach effectively bypasses perplexity-based filtering and achieves a comparable attack success rate to HotFlip baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The proposed attack method relies on simple prompts like “Tell me a story about” to generate documents from scratch. 
This approach raises concerns about practical applicability, as real-world malicious users typically aim to inject misinformation. It is unclear how the proposed method would effectively introduce misinformation in a RAG setting.\n\n(2) The paper lacks experiments demonstrating how the proposed method impacts the end-to-end performance of RAG systems, such as downstream QA performance.\n\n(3) The novelty of this work is limited, as similar approaches have been applied in slightly different contexts. For example, beam search algorithms have been widely used in adversarial attacks [1][2]. The paper should discuss these related works and emphasize its unique contributions beyond altering the beam search optimization objective.\n\n> [1] Zhao et al., Generating Natural Language Adversarial Examples through An Improved Beam Search Algorithm\n\n> [2] Liu et al., A More Context-Aware Approach for Textual Adversarial Attacks Using Probability Difference-Guided Beam Search\n\n(4) The paper claims black-box access to the embedding encoder. However, given the assumption that embeddings can be accessed repeatedly, one could calculate gradients numerically, making the black-box claim somewhat overstated.\n\n(5) Some other minor issues:\n- Please use `\\citep` and `\\citet` in LaTeX properly.\n- The paper uses the gendered pronoun \"his\" for attackers (Line 109), which could be avoided.\n- The paper contains several grammatical mistakes.\n- Notation definitions lack precision and could be simplified. For example, `P_n` is defined as a retrieval corpus but actually represents benign documents. The subscript `n` could be omitted."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How to further enhance the attack effect while improving the naturalness of adversarial documents?\n2. For different types of retrieval systems and application scenarios, does this method need to be specifically adjusted?\n3. How to better understand and quantify the \"naturalness\" indicator in order to more accurately evaluate the generated adversarial documents? Is it reasonable to rely solely on perplexity?\n4. How to consider the hallucination and efficiency problems caused by the auxiliary LLM?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed adversarial decoding method is a novel controlled generation technique that comprehensively considers embedding similarity and naturalness and effectively addresses the deficiencies of existing methods.\n2. Experiments were conducted using the large-scale MS MARCO dataset, comparing different generation methods and considering two scenarios: trigger attacks and no-trigger attacks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the problem of retrieval poisoning in modern retrieval systems. 
Firstly, it points out the limitations of existing methods (such as HotFlip and COLD) in generating adversarial documents. The documents generated by HotFlip have a relatively high perplexity and are easily detected, while COLD fails to generate useful texts under adversarial constraints. Then, this paper proposes a new controlled generation technique, which combines an adversarial objective (embedding similarity) with a \"naturalness\" objective calculated based on an open-source surrogate language model (LLM). The generated adversarial documents are difficult to detect by perplexity filtering or other LLMs without generating a large number of false positives. This method has been evaluated in different scenarios such as trigger attacks and no-trigger attacks, using the MS MARCO dataset. In terms of poisoning efficacy and the naturalness of generated documents, it is superior to previous methods, but still has some limitations, such as poor transferability across encoders and the need for more research on defenses." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The transferability of adversarial documents between different encoders is poor, which limits the universality of the method.\n2. It depends on an LLM and does not consider the possibility of LLM hallucination. In addition, the use of an LLM needs to consider the efficiency and effectiveness of the attack.\n3. The experiments are not sufficient. The experiments only consider one retriever, Contriever; other retrievers need to be compared. At the same time, the baselines need to be increased (for example, PoisonedRAG in the references)."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* It seems that the attack algorithm is given a prefix prompt (per line 186), however, unless I missed it, it is not mentioned in the text (Sec. 5). Could you clarify what is the role of this prefix and how it is chosen?\n* Results in Sec 7.4, Table 3 (e.g., HotFlip, Top-20, 0.01; with 500 passages), seem to contradict those originally reported by Zhong et al. 2023 [5] on the same dataset and model (e.g., Top-20, 98% with 50 passages). It would be helpful to clarify this discrepancy.\n* In line 243, it is mentioned that the no-trigger attack is tested against 1K queries. Are these disjoint from the 50K attacked set (similar to the trigger attack’s evaluation)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The work identifies–and clearly states–a limitation in existing retrieval attacks, and proposes a method to address it. \n\n* As the evaluation shows, the proposed attack is harder to detect through the proposed fluency-based detectors (including perplexity and naturalness), while attaining comparable attack success to prior attacks, which further emphasizes the vulnerability of retrieval." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work points to an issue in previous retrieval poisoning attacks—detectability via fluency-based defenses—and addresses it by proposing a new method. Specifically, it introduces a black-box attack that uses beam search to generate adversarial passages following both the retrieval objective, and text perplexity and naturalness (i.e., the level of naturalness as judged by an auxiliary LLM) penalties. The attack shows comparable performance with prior work, while it is arguably harder to detect by standard fluency-based defenses." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Novelty.** The work’s novelty focuses on the naturalness of adversarial documents generated by the new method. However:\n\n* The main novelty of the method—enriching the objective with LLM logit-based naturalness score (Sec. 5.2)—lacks a convincing presentation (see more details below) and its current evaluation might be biased (more below), especially in light of the repetitive text shown in qualitative samples.\n* It was previously shown that the discrete text optimization’s (e.g., in LLM jailbreaks) trade-off between attack success and text fluency [1] can be mitigated (e.g., [2], [3]). Specifically, similarly to this work, Sadasivan et al. [3] show LM-logit-based beam search to produce fluent text attacks. Thus, it is unclear whether this work offers a significant delta w.r.t. to previous work. \n\n**Method.** Since it is introduced as a core contribution, it would be helpful to elaborate on the soft naturalness score component in the method. 
This, for example, could be done by reporting on a study of alternative scores (if such were considered), or exploring the correlation between the soft naturalness score and the naturalness of the text.\n\n**Threat Model.** It is unclear why an attacker would aim to generate unconstrained documents (potentially meaningless) and promote their retrieval. For example, the trigger attack is motivated by the potential of “spreading adversarial content” (line 112), although, to my understanding, such content is not necessarily contained in the generated documents.\n\n**Evaluation.** As the work’s contribution focuses on the “naturalness” of the generated documents, it would be helpful to strengthen the evaluation:\n\n* **Perplexity Filtering (Sec 7.2).** As GPT2 is a relatively dated and weak LLM (line 329), it would be helpful to additionally calculate documents’ perplexity using stronger LLMs (e.g., Gemma2-2B or others), and show that the method is robust to such filtering.\n* **Naturalness Filtering (Sec. 7.3).** It seems that the naturalness evaluation for the non-basic attack (“Adv”) is largely done using the same LLM (Llama) used in the attack. A stronger result would be to show the generated documents are robust to naturalness filtering of different, strong, LLMs. Alternatively, one could ask LLMs for a score in a large range (e.g., 1-10), as the current prompt (asking for a binary score) could possibly bias the LLM’s answer. Another option is reporting on a user study of their naturalness.\n* **Evaluated Model(s).** The paper evaluates the attacks against a __single__ retrieval model (namely, Contriever [4]). It should be noted that the evaluated dataset (MS-MARCO) is out-of-training-distribution for this model (Contriever was not trained on MS-MARCO [4], as opposed to most text encoders), and it was previously observed to be exceptionally vulnerable to such retrieval attacks [5].
Thus, it would be helpful to validate the results on additional models.\n\n**Presentation.** Some presentation-related comments and nits:\n* Sec. 7.3: It would be helpful to state in the text (besides the table caption) that the evaluated attack is the trigger attack.\n* Fig. 1: The figure would be easier to interpret if the y-axis ticks matched the (pre-log) values from the text.\n* Algorithm 1: As LLM_{logits}, LLM_{naturalness} and \\lambda are all part of the algorithm parametrization, it would be clearer if these were included in the Input.\n* Algorithm 1, line 23: Shouldn’t `k` be `m` (in the comment)?\n\n**References:**\n\n[1] Baseline Defenses for Adversarial Attacks Against Aligned Language Models; Jain et al. 2023.\n\n[2] FLRT: Fluent Student-Teacher Redteaming; Thompson & Sklar, 2024.\n\n[3] Fast Adversarial Attacks on Language Models in One GPU Minute; Sadasivan et al., ICML 2024.\n\n[4] Unsupervised Dense Information Retrieval with Contrastive Learning; Izacard et al., TMLR 2022.\n\n[5] Poisoning Retrieval Corpora by Injecting Adversarial Passages; Zhong et al., EMNLP 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weaknesses."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper introduces a novel controlled generation method named AdversarialDecoding, which uniquely integrates embedding similarity with a \"naturalness\" constraint. By leveraging a surrogate large language model (LLM) to compute soft scores, the method simultaneously optimizes for semantic relevance and textual naturalness.\n\n- The methodology presented in the paper is robust and meticulously designed. The authors conduct comprehensive experiments using the MS MARCO datasets, and give a lot of ablation studies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the vulnerability of modern retrieval systems that rely on embedding similarity. The authors highlight that existing adversarial methods, such as HotFlip, generate malicious documents with high perplexity, making them easily detectable through perplexity filtering and large language model (LLM) evaluations. To address this, the paper introduces a novel controlled generation technique called AdversarialDecoding, which simultaneously optimizes for embedding similarity and naturalness using a surrogate LLM. This approach produces adversarial documents that maintain low perplexity and appear natural, effectively evading both perplexity-based and LLM-based detection mechanisms. Experimental results on the MS MARCO dataset demonstrate that AdversarialDecoding achieves high poisoning success rates comparable to traditional methods while significantly reducing the likelihood of detection. Additionally, the study explores the limited transferability of these adversarial documents across different encoders, suggesting potential avenues for developing robust defenses. 
The research underscores the importance of advancing defensive strategies to safeguard retrieval systems against sophisticated adversarial attacks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The writing in this paper could be **improved a lot**. Firstly, each formula lacks numbering. Additionally, the citation format in lines 251-258 seems off. Moreover, line 123 ends with a comma, and line 130 lacks a period. These issues are quite frequent in the article, which suggests a need for more attention to detail.\n\n- In some experiments, using LLaMA-3.1-8B as both the adversarial decoding model and the naturalness evaluation model could raise concerns about fairness. This is because LLaMA-3.1-8B might be biased towards the data it generates itself. Besides, could you explain why you use GPT-2 to measure the perplexity of generated adversarial documents rather than GPT-3 or other LLMs?\n\n- The selected baselines are limited. HotFlip is an early character-level adversarial attack method, but since then, many more effective attack algorithms [1] have been developed, whether at the word, character, or sentence level. These newer methods often result in much higher fluency.\n\n- Adding some additional human evaluations would be valuable.\n\n**References:**\n\n[1] https://github.com/thunlp/TAADpapers" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Q1: In line 378, The author stated \"Table 2 shows that evasion of detection transfers to two other prompts and another LLM\". It is confusing as table 2 does not seem to include the results for other prompts and LLMs. So where is the result for evasion of detection on other prompts and LLMs?\n\n- Q2: In experiment setup, The author said \"To evaluate 'naturalness' of adversarial and real documents, we prompt GPT-4o and LLaMA-3.1-8B with these prompts\". But where is the result of GPT-4o filtering?\n\n- Q3: In table 2, at the same threshold, increasing the width of the beam search actually increases the true positive rate of LLM-based naturalness filtering (0.5 -> 0.7), which means more adversarial document is filtered. This is very strange to me. In my opinion, increasing the width of beam search should be able to find better solutions (i.e., more stealthy adversarial documents) and therefore less likely to be detected." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents a novel approach to generating natural adversarial documents for retrieval poisoning. Combining an adversarial objective with a “naturalness” objective based on soft scores from a surrogate LLM is novel. This addresses the limitations of previous methods that produced easily detectable adversarial documents.\n- The methodology section explains the proposed adversarial decoding method in detail and the \"Algorithm 1 Adversarial Decoding\" is clear. 
The experimental setup and results are also presented in a clear and organized manner.\n- The work is significant as it addresses an important issue in modern retrieval systems. The ability to generate stealthy adversarial documents has implications for the security and integrity of retrieval-augmented generation and other applications that rely on retrieval systems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the vulnerability of retrieval systems based on embedding similarity to poisoning attacks. The authors demonstrate that previous techniques, such as HotFlip, produce documents that can be easily detected using perplexity filtering. They introduce a new controlled generation technique that combines an adversarial objective (embedding similarity) with a \"naturalness\" objective based on soft scores from an LLM. The proposed method aims to generate adversarial documents that cannot be automatically detected using perplexity filtering or LLM-based \"naturalness\" filtering without incurring significant false positives, while maintaining similar poisoning efficacy. The authors evaluate their approach on the MS MARCO dataset and show that their method outperforms prior techniques like energy-guided decoding (COLD) and is more effective than HotFlip in generating stealthy adversarial documents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Methodology:\n\n- **Dependence on Surrogate LLM**: The proposed method's reliance on a surrogate LLM for computing the naturalness score has a drawback. It significantly raises the computational cost because computing $s_{natural}$ demands the calculation of the LLMs' output logits, which is more costly than computing the similarity score. This could limit the method's practical application, especially when dealing with large datasets. 
I would expect a runtime comparison between their method and baselines, or a discussion of potential optimizations to reduce computational cost.\n- **Single Prompt Optimization**: Optimizing adversarial documents based on only a single prompt (“Is this text unintelligible?”) restricts their robustness.\n- **Insufficient Evaluation of LLM Detection Evasion**: One of the three “naturalness” filtering prompts (“Is this text unintelligible?”) is identical to the attacker's prompt, and the other two are semantically similar. This resembles a \"data leakage\" situation, in my opinion. The same applies to perplexity-based filtering (the attacker and defender both use GPT-2). I expect a more comprehensive evaluation using a wider variety of prompts and different LLMs to accurately determine the method's ability to evade detection. \n- **Generalizability across Different Retrievers**: Given the relatively low transferability of adversarial decoding across different retrievers, more experiments on different retrievers are needed to verify the effectiveness of the proposed method.\n\nPresentation:\n\n- Figure 1 is too large, in my opinion. It might be better to present two figures (e.g., one for the trigger attack and one for the non-trigger attack) horizontally. \n- The table's caption should be placed before the table." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We design, implement, and evaluate a new controlled generation technique that combines an adversarial objective (embedding similarity) with a \"naturalness\" objective based on soft scores computed using an open-source, surrogate LLM."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024controlled,\ntitle={Controlled Generation of Natural Adversarial Documents for Stealthy Retrieval Poisoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0rmOx0Ifbf},\nnote={under review}\n}" }, "abstract": { "value": "Recent work showed that retrieval based on embedding similarity (e.g., for retrieval-augmented generation) is vulnerable to poisoning: an adversary can craft malicious documents that are retrieved in response to broad classes of queries. We demonstrate that previous, HotFlip-based techniques produce documents that are very easy to detect using perplexity filtering. Even if generation is constrained to produce low-perplexity text, the resulting documents are recognized as unnatural by LLMs and can be automatically filtered from the retrieval corpus.\nWe design, implement, and evaluate a new controlled generation technique that combines an adversarial objective (embedding similarity) with a \"naturalness\" objective based on soft scores computed using an open-source, surrogate LLM. The resulting adversarial documents (1) cannot be automatically detected using perplexity filtering and/or other LLMs, except at the cost of significant false positives in the retrieval corpus, yet (2) achieve similar poisoning efficacy to easily-detectable documents generated using HotFlip, and (3) are significantly more effective than prior methods for energy-guided generation, such as COLD." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Dense Retrieval", "Corpus Poisoning", "Adversarial Attack" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a4dc20dac0953406d87951db9dca968183b9efe4.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Controlled Generation of Natural Adversarial Documents for Stealthy Retrieval Poisoning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0sJ8TqOLGS
LLM Spark: Critical Thinking Evaluation of Large Language Models
main
Active
critical thinking;llm;problem-solving;benchmarks
datasets and benchmarks
3;3;5;8
4;4;3;3
2;2;2;3
2;2;2;3
2;1;3;3
4.75
3.5
2.25
2.25
2.25
-0.855186
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The work uses inspiration from cognitive science to come up with framework \n\nThey address an important aspect of LLMs which is critical thinking\n\nMultiple models are considered for the work and comparison \n\nMultiple hypothesis are tested in this work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces SPARK, a framework intended to assess the capability of LLMs to identify inconsistencies in problem framings using modified existing datasets. The authors come up with two metrics \"Correctness\" and \"Challenge Rate\" for the evaluation. They use the idea of Three-space theory from the cognitive science to come up with this framework. The dataset consists of different domains such as math, science, comprehension etc. They also introduce perturbation's to the data to see the changes in the output given by the LLM and evaluate those responses and try to analyze the behavior of LLMs. While the work is interesting but there are many issue with this, right from writing to selection of data and evaluation metrics." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-> First most of the figures are poorly inserted, there could have been other type of figures chosen as most of the figures have the data overlapping and it's hard to interpret them.\n\n-> The writing is poor, there is too many things and little details\n\n-> While the related work is good there are many more work that are missing one of them is \"Tree of Thoughts\" \n\n-> The datasets chosen for this work are diverse and contains many existing datasets, there is no mention of testing of data contamination given the models that are considered for this work have this data in their training data, also the dataset could have been better, I feel there are better datasets like Game24, or the one's mentioned in the related work are more relevant to this work.\n\n-> It is mentioned that you create benchmarks in the abstract, I didn't clearly understand exactly what that meant.\n\n-> A framework paper should be more detailed such that others can reproduce and compare their work to this, need more quantitative results.\n\n-> Also there could have been more evaluations metrics rather than having just two of them and using them to test variety of hypothesis, this decreases the robustness of the results.\n\n-> Most of the figures in the appendix were hard to interpret, more details on them is appreciated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Major problems are written in Weakness #2, and here are some less severe questions that need clarification or presentation advice:\n\n1. Why choose this particular set of questions? Why there is a focus on reasoning-focused datasets? \n\n2. What are the decoding parameters for most models used in the experiments? Some datasets and models are sensitive to these hyperparameter decisions, so it should be clarified in the paper. There is no need to seek for best hyperparameter combinations, but for reproduction purposes, it is needed to know these experiment details. \n\n3. How are checkpoints compared in Section 4.7 different? More importantly, what are they, and how they are related to the analysis in the main text?\n\n4. Presentation Advice: The fonts and colors in many figures are hard to read or interpret, and some figures contain confusing legends and annotations (e.g., Figure 10)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper investigates an interesting evaluation dimension on whether LLM can critique flaws in the prompted problem formulation, complementary to widely-used instruction-following and LLM reasoning benchmarks. \n\n2. The experiment results can support the major claims of the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel approach to evaluating LLMs' critical thinking in identifying flaws in problem formulation. 
Grounded in Three-Space Theory, the authors reformulate existing datasets as critical thinking evaluation ones by removing correct answer choices (for multiple-choice QA datasets) or removing necessary conditions (for free-form generation datasets). They assess the \"challenge rate\"—the frequency with which LLMs, prompted to detect flaws, correctly identify issues, using GPT-4 for automatic YES/NO judgments. To further evaluate the model's robustness to misleading information in the problem formulation, the authors also augment QA datasets with hints (\"gaslighting\") on correct/incorrect answers or both. \n\nExperiment results demonstrate that while some larger LLMs can achieve non-trivial challenge rates ($>50\\\\%$) on free-form generation tasks only, there remains substantial room for improvement. Notably, the challenge rate does not correlate with model accuracy, and chain-of-thought prompting yields inconsistent effects on both metrics across models and datasets. Although gaslighting increases challenge rates across models, it also reduces accuracy, highlighting LLMs' susceptibility to manipulation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Insight into Findings.** As this is an evaluation-focused paper, deeper analysis and implications of the results are the most important contributions. Many findings are presented as direct observations, often summarized by broad statements like \"experiment results are influenced by dataset properties, models, training ...\", \"[prompting methods] achieves mixed results\", or \"[model names] are vulnerable to manipulation in prompts\". While these findings may hold, they resemble insights from prior work (Section 2) and align with expectations under a well-constructed evaluation framework. 
Although the paper emphasizes a unique \"critical thinking\" evaluation, it’s unclear what additional insights this approach offers beyond previous evaluation works.\n\n2. **Clarity and Rigor in Experiment Design.** This paper reformulates existing datasets to build a new benchmark that focuses on evaluating LLM critical thinking, but several important experiment details are missing, or not rigorous enough. For example, the implementation of \"Missing Information\" for non-Quail datasets is not clearly defined, and criteria for identifying and removing \"necessary conditions\" (to make questions unanswerable) are unspecified. \nThe validity of the \"LLM-as-judge\" approach in this new critical thinking evaluation benchmark is not clearly explained, nor are the \"held-out datasets\" used in evaluations. \n\n Additionally, the exclusive use of instruct-tuned models raises questions about claims regarding instruction training effects (e.g., Line 272), as non-instruct-tuned models are not assessed. Including control prompts where no flaws should be detected is also important to investigate potential false positive problems and prevent simple flaw-reporting models from skewing the results. Also, it seems a random baseline (possibly achieving 50% challenge rates) can beat most models in identifying problem formulation flaws, but there is no related explanation and analysis. Further problems are listed in the \"Questions\" section. \n\n3. **Readability and Conciseness of Main Text.** Introducing the new \"SPARK\" framework and articulating hypotheses is understandably challenging. However, the overall paper organization, especially in Sections 3 and 4, could be improved for readability. The flow from Section 3.1 to Section 3.2 feels disjointed, and hypotheses are discussed in fragments across sections, which complicates their verification for readers. 
While the reviewer appreciates the efforts of putting an experiment summary in Section 3.4, it lacks grounding in detailed results, making the introduction feel verbose. Tightening the organization and streamlining explanations would improve the paper's clarity and coherence." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses.\n\nP.S., I'm really curious how o1 would respond to such problems." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Focusing on LLM's critical thinking skills, this paper frames the Benchmark by designing inconsistencies in the problem and designing a large number of experiments to explore it. \n\nI personally like the idea of this work. In my opinion, the main strengths of this work include: \n1) This paper uses the three-space theory to model LLM's critical thinking ability and explores the reverse proof of the framework, which provides a theoretical basis for critical thinking related research.\n2) This paper conducts a large number of experiments to explore LLM's critical thinking and its influencing factors from multiple perspectives, which provides a feasible direction for the subsequent research." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "Based on the three-space theory, this paper presents the SPARK hypothesis and assessment framework on LLM critical thinking. Through the benchmark constructed in the paper, the authors explored the current critical thinking ability of LLM, with the influence of various factors on it, through a large number of experiments, contributing to the assessment and enhancement of LLM's critical thinking." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I do not find significant shortcomings of this work, but only a few minor points to be clarified:\n1. The critical thinking assessment was designed without taking into account the impact of the model's capabilities. For example, if the model itself cannot understand or does not have knowledge of the question, it is difficult to \"criticize\" it. This is especially true for multiple-choice questions and smaller models, as shown in Figure 2, where multiple-choice questions have a low percentage of correct answers and most models have a low change rate. This may result in an underestimation of their critical thinking, i.e., it is not that they do not have the ability to think this way, but that the questions are beyond their knowledge and ability.\n2. In terms of assessment metrics, it is best to minimize the use of LLM assessments, which can be costly. For example, for multiple-choice questions, can there be a simpler way of assessing correctness rate, making the benchmark easier to use?\n3. Correctness rate is sometimes used in complete problems [line 278] and sometimes in incomplete problems, and is also expressed as \"none of the options\" in its definition (incomplete problem), which can be confusing when reading the experiments and results.\n4. Why does the gaslight increase its challenge rate while decreasing its correctness rate? 
If it affects the correctness rate, i.e., the LLM is misled by the gaslighting, shouldn't the model follow the misguidance rather than challenge the reasoning?\n5. Some of the dots and text in Figures 2, 11, and 12 overlap, which makes them hard to read." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How are the correctness rate and challenge rate calculated? Can a model that always refuses to answer questions obtain the highest challenge rate?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The framework is grounded in a cognitive theory (the Hierarchical Three-Space Theory)\n- Extensive experimental results covering multiple domains and tasks demonstrate the limitations of current LLMs in identifying inherent inconsistencies in provided problems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SPARK, a framework for evaluating Large Language Models' (LLMs) critical thinking abilities, specifically their capacity to identify inconsistencies in problem framing.
The framework is grounded in the Hierarchical Three-Space Theory and evaluates LLMs across multiple dimensions (problem framing space, strategy space and implementation space) through five key hypotheses proposed by the authors. The authors create benchmarks by modifying existing datasets like commonsense QA, math and science datasets to introduce inconsistencies (e.g. missing options or missing conditions in the questions). Multiple LLMs are tested, and the experimental results show their limitations in critical thinking abilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The findings that LLMs lack the ability to identify flaws and often agree with the hallucinations in the given queries are not surprising.\n- It is unclear whether the modified questions can truly capture problem inconsistencies in the real world. It would be helpful to add a human baseline to see if this task is solvable and aligned." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Evaluation of Critical Thinking Ability of LLMs" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024llm,\ntitle={{LLM} Spark: Critical Thinking Evaluation of Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0sJ8TqOLGS},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) excel in complex tasks but often struggle with inconsistencies in problem framing, a critical skill for real-world scenarios. This paper introduces SPARK, a novel evaluation framework grounded in the Hierarchical Three-Space Theory, to assess LLMs’ ability to identify missing information and challenge flawed problem setups.
We create benchmarks by introducing inconsistencies and misleading cues in diverse question-answering datasets, covering mathematics, science, and reading comprehension. Our experiments with state-of-the-art LLMs reveal their limitations in critical thinking, particularly in recognizing inconsistencies. We also explore mitigation strategies such as modified prompting and targeted fine-tuning. Furthermore, we conduct comprehensive experiments to investigate how model and problem properties influence critical thinking capabilities in LLMs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "critical thinking", "llm", "problem-solving", "benchmarks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3bab59f64da3db5981884b32a5020124afc52a35.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "LLM Spark: Critical Thinking Evaluation of Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0sU4myabw1
RapidDock: Unlocking Proteome-scale Molecular Docking
main
Active
molecular docking;protein-ligand binding;transformer;equivariance;high-throughput screening;drug discovery
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;5;5
3;3;5;4
3;1;3;3
2;3;2;2
3;1;2;4
4
3.75
2.5
2.25
2.5
0.904534
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* Is there a better way to determine which atoms are rigid and which are flexible? For example, AutoDock Vina determines if bonds are rotatable using a simple chemical definition, which dictates which atoms are flexible and which are not. Just searching through a lot of generated conformations seems like it might miss bonds that only rotate when exposed to external charges.\n* Is this model potentially applicable to *target fishing*? Target fishing is the process of taking an existing drug compound and evaluating it against a large number of potential proteins to see if it can target them. This can be applied for drug repurposing, which is the use of currently approved drugs against new indications based on previously unknown binding against a new protein. This is potentially a strong application of the proposed method, but I do not see it explicitly mentioned in the paper anywhere." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Clear presentation of methods and results.\n* Novel application of transformer architecture to docking, which results in much faster inference.\n* Reasonable design choices in model. 
These include the addition of features for ligand atom charges and the use of a pre-trained protein language model. The scaling of the attention vector based on distance also seems well-motivated.\n* Strong results on two distinct datasets, achieving a better success rate than two competitive deep learning methods at a fraction of the cost. While RapidDock has significantly worse accuracy on Posebusters compared to AlphaFold 3, I do not think this is a negative, because as the authors note AlphaFold 3’s speed is not suitable for large-scale docking. Additionally, my understanding is that AlphaFold-3 performs energy minimization as a post-processing step, which would give it an unfair advantage compared to RapidDock.\n* Ablations of various model components help show the benefits of each design choice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors tackle the problem of proteome-scale docking, the goal of which is predicting the binding pose of a ligand against many thousands of proteins. To do this, they develop an equivariant Transformer model (RapidDock) for dramatically accelerating docking compared to previous diffusion or GNN-based approaches. The model takes various features from the protein and ligand as input, and outputs a prediction of the binding pose of the ligand. The results show that RapidDock achieves 100x faster runtimes than three competing deep learning methods, while retaining equivalent accuracy (except to AlphaFold 3, which is much slower but has much better accuracy)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Motivation behind the problem setting is unclear. The authors address proteome-scale docking because any protein in the human proteome could be a potential *off-target* (a term of art that should probably be included in the paper) of a drug. 
Thus, docking against all proteins and then predicting affinity in a downstream task would detect these potential off-targets before they are discovered in later preclinical or clinical testing. However, I am not convinced that docking to each protein in the proteome is necessary to detect off-target effects. Pharmacologists often screen a drug against a limited number of safety-relevant proteins (in the hundreds), such as G-protein coupled receptors or ion channels, that are frequent off-targets [1, 2]. This is usually sufficient to detect many clinical issues beforehand, and it is not obvious that just considering more potential off-targets would further reduce the rate of adverse effects occurring (which for example may be due to more complex issues, such as toxicity of metabolic products or on-target unwanted effects). Could the authors provide a better argument for why it is important to screen a drug against all potential protein targets?\n[1] Bendels et al. “Safety screening in early drug discovery: An optimized assay panel.“ J Pharmacol Toxicol Methods 2019.\n[2] Peters et al. “Can we discover pharmacological promiscuity early in the drug discovery process?” Drug Discovery Today 2012.\n\n* Limited baseline comparisons. I think the most important baseline to include would be AutoDock Vina (or another similar docking program). Despite not being deep learning-based, Vina is relatively fast and the current state-of-the-art in applied fields. Practitioners conducting large-scale docking would likely use Vina, so including results on this baseline is important. An additional deep-learning based docking model, such as TANKBind, would also improve the strength of the results, but it is not as critical.\n\n* Some of the design choices are not well-explained. For example, it is not clear to me why discretized charge embeddings are used for the ligand atoms instead of simply providing the charge scalar as an input. 
It is also not clear what role the distance bias matrices play." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. DiffDock-L (also AlphaFold3 and NeuralPLexer) is a generative approach, so it gives multiple binding poses according to their probabilities. Therefore, its prediction accuracy can be improved by including multiple different poses (Top-n poses) in the evaluation. However, RapidDock is deterministic, so it gives only a single output. It is understood that the authors want to emphasize computational speeds, but it seems they also need to discuss the accuracy aspects for a fair comparison.\n2. Since RapidDock requires conformation searches for each molecule before docking, the authors need to clarify whether the reported computational times include the time for conformer searches or not. \n3. The proposed method has been compared with AlphaFold3 and NeuralPLexer. However, the former performs a rigid docking, whereas the latter predicts binding poses only from protein sequences and molecular graphs. Therefore, the prediction complexity of RapidDock is much lower than that of the baseline models, so the direct comparison between them is less meaningful. The authors need to clarify this fact in the introduction or result sections. The current form may cause undesirable confusion to potential readers.\n4. 
In the reconstruction of ligand location, the predicted ligand-ligand and ligand-protein distances may not precisely match the coordinates of a single pose. If this is the case, the authors should provide more details about this process. Does it require a kind of post-processing?\n5. The authors greatly emphasize the importance of proteome-wide docking, but this work does not provide any meaningful analysis except the computational time, which can be readily estimated without performing the actual calculations. The authors may add an additional study verifying that the proposed method can indeed provide meaningful results from the proteome-wide docking. Otherwise, they need to tone down their argument from the title and introduction parts. \n6. Appendix A.6 shows the examples of 3D structures predicted by RapidDock and AlphaFold3. The authors deliberately selected specific examples where RapidDock outperformed AlphaFold3, while they admit that the latter is far better than the former on average. AlphaFold3 even predicted those structures from the sequence-level information. These examples may lead to the misunderstanding that RapidDock works better than AlphaFold3. The authors need to provide examples where RapidDock fails while AlphaFold3 succeeds." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. This work shows the possibility of transformer-based approaches for binding structure predictions, while most previous works are based on diffusion methods.\n2. The proposed method outperformed the popular baseline, DiffDock-L, in the two benchmark studies, while its computational time for predictions is much faster than those of all baseline models."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposed a fast and reliable prediction method, RapidDock, for protein-ligand binding poses. Most previous methods employed in this work as baselines used diffusion models, leading to large computational costs. However, RapidDock is based on a transformer and is thus much faster than the baseline models. The benchmark studies on PoseBusters and DockGen show that RapidDock is not only fast but also far more accurate than others except AlphaFold3. While the performance of the proposed method seems competitive, there are several issues that need to be addressed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method needs to generate 96 molecular conformations for each molecule and analyze the conformers to obtain its distance matrix.\n2. The experiment in Section 4.2 seems meaningless because it uses holo structures when it predicts binding poses.\n3. The title and introduction parts emphasize the importance of proteome-wide docking, but this work does not provide any meaningful results regarding that. \n4. Technical details of the proposed method are insufficient." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- When comparing runtime in inference mode, did you preprocess the protein, or was the comparison done solely for molecule conformation?\n- Did you use the same time-splitting approach as previous methods, such as DiffDock and NeuralPlexer, for the dataset?\n- In the statement, \n> \"Only the fixed distances across the molecule’s possible conformations are recorded, and others are denoted by a special value of −1,\" \n\n Could you clarify why you selected -1 as the special value?\n- In the RapidDock attention section, you state, \n> \"First, we multiply the attention scores corresponding to input pairs with known distances (i.e., ligand-ligand within a rigid part and protein-protein) by a learnable scalar $s_m$, one for each layer $m$.\" \n \n Did you also apply this to protein-ligand pairs? If not, could you explain why?\n- For inference, do you generate one conformation per runtime for each protein-ligand pair?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors demonstrate the need for a faster model by outlining the scalability limitations of previous deep learning models.\n- Aligned with their motivation, RapidDock shows the ability to perform conformation sampling for molecular docking in GPU inference runtime in approximately one-hundredth of a second per protein-ligand pair.\n- In benchmarking with PoseBuster, RapidDock achieves the best performance among open-source codes (noting that AlphaFold 3 is not open source), particularly in the percentage of ligands achieving RMSD < 2 Å.\n- The paper provides a clear explanation of how ligand and protein modalities are utilized and fed into the Transformer model.\n- For constructing the ligand distance matrix, the authors use a hybrid approach, incorporating physics-based methods from RDKit, such as MMFF and EDKG.\n- Additionally, the inclusion of ligand charge embeddings represents another hybrid approach in the model.\n- For protein embeddings, the authors showcase the effectiveness of using not only pre-trained models but also their custom-trained ESM-2 models.\n- The Transformer architecture employs a non-autoregressive approach with a full attention mask, introducing a new method for molecular docking by incorporating an attention scaler within the attention mechanism.\n- The training hyperparameters are shared in detail through comprehensive tables.\n\n**Originality:** RapidDock introduces a unique approach to molecular docking, particularly in terms of preprocessing compared to other DL-based methods. 
The detailed steps in the ligand and protein embedding process highlight its originality, which the authors further validate through an ablation study.\n\n**Significance:** From a large-scale proteomic perspective, RapidDock is highly scalable and significantly faster in runtime compared to other DL-based and search-based molecular docking methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors introduce RapidDock, a new approach to molecular docking that leverages a Transformer model. The method includes both ligand atom embeddings and ligand charge embeddings. Protein representations are generated using embeddings from protein amino acids, the ESM-2 PLM, and calculated distance metrics. Instead of deep learning, RDKit-based methods such as MMFF and ETKDG construct the rigid distance matrix for molecules. Trained on the PDBBind and BindingMOAD datasets, RapidDock outperforms DiffDock and other open-source methods on the PoseBusters benchmark in the % of ligands with RMSD < 2 Å metric, delivering at least a 100x increase in inference speed."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The code is not shared.\n- Although the method section claims equivariance, it lacks sufficient explanation on this aspect.\n- The rationale for using ligand atom charges is not adequately clarified.\n- It is unclear why non-fixed distances in the molecule's rigid distance matrix are assigned a value of -1.\n- The annotations for distance bias matrices are insufficiently explained; the annotations appear to be included simply because they work, without detailing why they are effective.\n- Similarly, the rationale behind RapidDock’s use of attention and charge embeddings, along with their annotations, is not fully addressed.\n- The splitting strategy for the training, validation, and test sets is not sufficiently described.\n- The parameter comparison in benchmarking does not seem fair; DiffDock results are compared with 30 million parameters, while RapidDock has 60 million parameters.\n- The RMSD metric, commonly used in molecular docking and Structure-Based Drug Design (SBDD), does not always yield bioactively, physically, or chemically plausible structures, as shown by Posebuster[1], PoseCheck[2], PoseBech[3], and CompassDock[4]. Including these metrics in the benchmark would strengthen the study.\n- Appendix A.6 lacks a comparison between DiffDock and NeuralPLexer examples.\n- The extent of ligand filtering during ligand preparation is not sufficiently discussed.\n\n**Quality:** Although the authors claim this is the first use of a Transformer-based model in blind docking, ETDock[5] and FeatureDock[6] have previously used Transformers for molecular docking; however, these methods are not mentioned in the paper. Additionally, the benchmarking is limited to comparisons with only a few popular methods. In the conclusion, the authors state that: \n\n>\"... 
the model demonstrates a strong understanding of the physicochemical principles behind forming biological structures,\" \n\nyet no bioactivity or physicochemical analyses, as discussed earlier, have been conducted to support this claim.\n\n**Clarity:** The paper contains numerous grammatical errors that detract from readability and should be carefully revised. Placing the Related Work section after the Experiments section disrupts the flow, and the method annotations are not clearly explained.\n\n**Reproducibility:** As the code is not shared, it is currently impossible to test whether it performs as reported. If the code were provided, I would be able to review, test, and reassess my evaluation (including scoring and comments) accordingly.\n\n### **References**\n[1] Martin Buttenschoen, Garrett M Morris, and Charlotte M Deane. Posebusters: Ai-based docking methods fail to generate physically valid poses or generalise to novel sequences. Chemical Science, 15(9):3130–3139, 2024. \n\n[2] Charles Harris, Kieran Didi, Arian R Jamasb, Chaitanya K Joshi, Simon V Mathis, Pietro Lio, and Tom Blundell. Benchmarking generated poses: How rational is structure-based drug design with generative models? arXiv preprint arXiv:2308.07413, 2023.\n\n[3] Alex Morehead, Nabin Giri, Jian Liu, Jianlin Cheng. Deep Learning for Protein-Ligand Docking: Are We There Yet? arXiv preprint arXiv:2405.14108, 2024.\n\n[4] Ahmet Sarigun, Vedran Franke, Bora Uyar, Altuna Akalin. CompassDock: Comprehensive Accurate Assessment Approach for Deep Learning-Based Molecular Docking in Inference and Fine-Tuning. arXiv:2406.06841, 2024.\n\n[5] Yiqiang Yi, Xu Wan, Yatao Bian, Le Ou-Yang, Peilin Zhao. ETDock: A Novel Equivariant Transformer for Protein-Ligand Docking. arXiv:2310.08061, 2023.\n\n[6] Mingyi Xue, Bojun Liu, Siqin Cao, Xuhui Huang. FeatureDock: Protein-Ligand Docking Guided by Physicochemical Feature-Based Local Environment Learning using Transformer. 
ChemRxiv." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Wording:\nL. 30 “ … will revolutionize medicine” is an unsupported statement, please reformulate. It would be better to focus on the obstacles that need to be overcome and how you tackle those (like in\nyour paragraph 2, L. 34 X.).\nIn l.113f. the authors claim equivariance; however, they work with interatomic distances only, making your model invariant.\nPermutation loss citation is wrong (e.g. l. 228); cite the original paper (Zhu et al. 2022, Direct molecular conformation generation).\nIn my opinion, l.419f. “the model demonstrates a strong understanding of the physicochemical principles” is a big stretch - how is this supported in your work? Accurate and fast prediction of ligand poses in proteins doesn’t mean that the model understands the physicochemical principles that lead to those poses, and it is debatable whether this is even needed.\nOpen questions:\nSince the prediction time is critical to the paper's claims, I would like to see how much time you spend at inference on average in a) pre-processing, b) prediction, c) reconstruction and d) any form of post-processing.\n\nIn l.162f., what is the reasoning for choosing 257 buckets? Are there ablations on higher/lower resolution? How does this affect the performance-inference time tradeoffs? Same questions for the charge embeddings."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "RapidDock is fast and accurate, allowing it to tackle human proteome-scale docking studies. The reported speed could enable its use as an oracle function for other DD-related tasks in future work. Ablations on PoseBusters and Dockgen show promising results. The\npaper is well written and gives a lot of insights into the modelling and training" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces RapidDock, a fast and accurate transformer-based model for the\nblind-docking task. The model predicts interatomic distances. Afterwards, the docked\npose is reconstructed with the L-BFGS algorithm. The authors report a 100x speed-up while\nsimultaneously improving over commonly used deep learning-based docking models in the\nPoseBusters and DockGen datasets. The authors make first experiments at human\nproteome-scale docking." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the method shows good results on a task relevant to computational biology, there is not enough novelty on the ML side that justifies acceptance as a main track contribution. The authors use a standard transformer embedding the ligand and proteins with ESM2 and\nemploy cross-attention to the distance matrix. I encourage submission to a domain-specific journal." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce RapidDock, a first-in-class transformer-based model capable of accurate high-throughput molecular docking." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024rapiddock,\ntitle={RapidDock: Unlocking Proteome-scale Molecular Docking},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0sU4myabw1},\nnote={under review}\n}" }, "abstract": { "value": "Accelerating molecular docking -- the process of predicting how molecules bind to protein targets -- could boost small-molecule drug discovery and revolutionize medicine. Unfortunately, current molecular docking tools are too slow to screen potential drugs against all relevant proteins, which often results in missed drug candidates or unexpected side effects occurring in clinical trials.\nTo address this gap, we introduce RapidDock, an efficient transformer-based model for blind molecular docking.\nRapidDock achieves at least a $100 \\times$ speed advantage over existing methods without compromising accuracy.\nOn the Posebusters and DockGen benchmarks, our method achieves $52.1$\\% and $44.0$% success rates ($\\text{RMSD}<2A$), respectively. \nThe average inference time is $0.04$ seconds on a single GPU, highlighting RapidDock's potential for large-scale docking studies.\nWe examine the key features of RapidDock that enable leveraging the transformer architecture for molecular docking, including the use of relative distance embeddings of $3$D structures in attention matrices, pre-training on protein folding, and a custom loss function invariant to molecular symmetries. We make the model code and weights publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "molecular docking", "protein-ligand binding", "transformer", "equivariance", "high-throughput screening", "drug discovery" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6875504defa46c15dc424229e66bd7c84e761ca8.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "RapidDock: Unlocking Proteome-scale Molecular Docking" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0sary0UZn5
On the Limitation and Redundancy of Transformers: A Rank Perspective
main
Active
Transformers;self-attention;low-rank;redundancy;model reduction
interpretability and explainable AI
3;3;5;8
3;4;4;3
2;1;3;4
2;1;2;4
2;2;3;4
4.75
3.5
2.5
2.25
2.75
-0.366508
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The findings are rather interesting. Would they apply to other Transformer models, such as those used in NLP and audio processing tasks?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This work studies a fundamental problem in Transformer model efficiency. In this paper, the authors present extensive empirical results and rigorous theoretical analysis, offering critical insights into the architectural limitations and redundancy in Transformer models. The findings are highly valuable for designing more efficient Transformer-based architectures." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work explores the architectural limitations and redundancy in Transformer models by analyzing the ranks of their attention score matrices. Through extensive experiments across diverse model configurations and data distributions, the authors uncover two key properties: the low-rank barrier and the model-reduction effect. 
These findings are rigorously supported by a fine-grained mathematical analysis, revealing (i) a consistent theoretical upper bound on the attention rank (0.63n) and (ii) a critical threshold for rank saturation where the hidden dimension h scales as Ω(log n). These results illuminate the inductive biases and internal dynamics of Transformers, deepening our theoretical understanding and enabling better assessment of model capacity and efficiency in practical applications. These insights are particularly valuable for Transformer architecture design and optimization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper presents extensive experiments on various model configurations and data distributions, the evaluation focuses on a few common computer vision datasets (CIFAR-10, CIFAR-100, and SVHN). This raises the question of whether the findings generalize to other domains, such as natural language processing or audio processing, where Transformer models are widely used. Including additional experimental results from these domains would be very helpful." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "There are no ethical concerns in my opinion." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- (Q) Both KDEFormer [A] and HyperAttention [B] seem to also consider the rank of the attention matrix theoretically (among others).
However, these references are missing. How is the setup of this current paper positioned against these references?\n - [A] Zandieh, Amir, et al. [\"Kdeformer: Accelerating transformers via kernel density estimation.\"](https://proceedings.mlr.press/v202/zandieh23a.html) International Conference on Machine Learning. PMLR, 2023.\n - [B] Han, Insu, et al. [\"HyperAttention: Long-context Attention in Near-Linear Time.\"](https://openreview.net/forum?id=Eh0Od2BJIM) The Twelfth International Conference on Learning Representations. 2024.\n- (Q) It is not clear how the rank of the attention matrix is tied to the expressivity of the attention mechanism. Are there existing results that make this connection?\n- (Q) Given the use of softmax operation in the attention matrix, isn't it expected that the attention matrix will not be full rank? Part of the motivation for schemes like Performers [C], Scatterbrain [2], etc. is this low-rank structure. In fact, if we remove the softmax, this linear attention can probably have full rank if the head dimension is large enough but is not desired since the softmax operation is what makes attention work.\n - [C] Choromanski, Krzysztof Marcin, et al. [\"Rethinking Attention with Performers.\"](https://openreview.net/forum?id=Ua6zuk0WRH) International Conference on Learning Representations. 2020.\n- (Q) In Section 3.1, whose rank are we computing? The $\textbf{Attn}^{(i)}(\mathbf{X})$ matrices? How are the ranks aggregated across the multiple heads?
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the attention matrix in transformers, and studies the effect of the head dimension on the rank of this attention matrix. Under assumptions on the data and the transformer weights, the paper empirically and theoretically highlight that the rank of the $(n \\times n)$ attention matrix for $n$-length input sequences is upper-bounded by a quantities close to $0.63n$, and that attention matrix rank grows with the head dimension $d_h$ but the gain in the attention matrix rank diminishes as the head dimension grows, demonstrating a \"diminishing returns\" behaviour. This behaviour is demonstrated with vision transformers where the ranks of the attention matrix of the first transformer block are reported as the head dimensions is varied." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- (W1) In my opinion, the main weakness of this paper are the analysis tools and assumptions utilized for providing the rank upper bounds which is used make the main claim of this paper that the attention matrices are rank limited. Here are some specific issues:\n - (W1.1) There are many variations of multi-head where the value matrix $\\mathbf{V}^{(i)} \\in \\mathbb{R}^{n \\times d_h}$ has the head dimension $d_h$. But usually the key and query matrices do not need to have the same dimensionality as the value matrix. Furthermore, there are versions where $d_h \\times h \\not= d_{\\text{model}}$, and $W_o$ projects $h \\times d_h \\to d_{\\text{model}}$. For example $d_h = d_{\\text{model}}$. How does the results studied here be affected by these different strategies? Furthermore, there are versions that just vary the value matrix dimension, but the query/key matrix dimensions are not affected by the number of heads. In that case, this analysis is not applicable. 
In fact, in the empirical evaluation of Figure 4b (which uses a different variation), we are able to go beyond the $0.63$ range with $d_h < 5$ and go up to 0.68. Different variations of this could allow us to go to full rank (or close to it). In fact, it would seem that, with $h = 1$, we should be able to recover full rank. Can the authors please explicitly discuss how their analysis might change for these different variations? Alternately, can the authors please justify their choice of focusing on one particular formulation of multi-head attention as this would clarify the scope and limitations of the presented results?\n - (W1.2) It is not clear how much of this analysis is dependent on the normal distribution assumption on the weights and tokens. Consider a case where $\\mathbf{K} = \\mathbf{Q} = \\mathbf{X}$ with each $\\|\\|\\mathbf{x}_i \\|\\|_2 = 1$ and the temperature is low enough that we are doing top-1 attention (which is also what is considered in this paper); then the matrix $\\textbf{Attn}(\\mathbf{X}) = I_n$, which is the $n\\times n$ identity matrix, which is full-rank. So this clearly gives a case where the proposed bound is violated. Why is this an implausible case, or conversely, how does the presented analysis subsume this situation? While the identity attention matrix seems too special, one can think of a problem where each token needs to just attend to the token right before it (that is, $\\textbf{Attn}(\\mathbf{X})[i, i-1] = 1, \\forall i > 1$), leading to an off-diagonal, almost full-rank attention matrix. Can the authors please address this specific counterexample and explain how it relates to their assumptions and results?\n - (W1.3) It is odd that while we are studying the effect of the head dimension on the rank of the attention matrix, Theorem 1 has no dependence on $d_h$.
I think this is an artifact of the assumptions and analysis, which effectively reduce the attention matrix to the case where each row has 1 in one of the $n$ indices at random, in a query-independent way. This is equivalent to each token sampling uniformly at random with replacement a token out of the $n$ tokens. Thus the expected number of unique tokens attended to in the complete attention matrix (with only one non-zero per row) is equivalent to its rank. Using standard combinatorics arguments, this expected number of unique tokens attended to (for sampling with replacement) will come to $n(1 - (1 - 1/n)^n)$, which approaches the $\\approx 0.63n$ bound. While the analysis in the paper is correct, this form of an attention matrix is not useful or interesting for real applications of transformers, and the head dimension $d_h$ also plays no role here, which is different from the motivation of this paper. Can the authors please explicitly discuss this apparent discrepancy and explain how it relates to their overall claims about the effect of head dimension on attention rank? Alternately, the authors can also share (or point to) a finer-grained analysis that directly ties this rank upper bound to the head dimension. \n - (W1.4) It is not clear why line 363 \"Recall that the rows of $\\mathbf{X} \\mathbf{W}_q \\mathbf{W}_k^\\top \\mathbf{X}^\\top = \\mathbf{Q} \\mathbf{K}^\\top$ are independently and identically distributed as $\\mathcal{N}(\\mathbf{0}_n, \\mathbf{K} \\mathbf{K}^\\top)$\" is true. Why is this distribution independent of $\\mathbf{Q}$? Similarly, equation (6) seems odd, highlighting that the rows for each query in the attention matrix are distributed identically.
This query-independence is both odd and counter to the main motivation of transformers, which usually have different attention patterns for different rows/queries.\n- (W2) This is a smaller weakness, but it is not clear what we can do with this rank-boundedness insight (assuming that this upper bound is useful and accurate). It is not clear what problems a transformer is unable to solve because of this rank-boundedness, or what problems it would have been able to solve if it were able to have full-rank attention matrices. Can the authors please provide specific examples or hypothetical scenarios where this rank-boundedness might impact transformer performance or capabilities, as this would help connect their theoretical results to practical applications.\n\nMinor comments:\n- (C1) It would be good to make the caption of Figure 1 self-sufficient (or point to the part of the text where the complete description is available). Otherwise, having such an introductory figure on page 2 seems a bit confusing.\n- (C2) The \"less than or around 0.01\" comment in the Table 2 caption makes that \"log-dependence\" argument a bit less convincing. One can alternatively argue that for a sequence length of 200, we needed more than a linear increase in $d_h$, implying a very different story.\n- (C3) While the assumptions on the input are discussed in Remarks 2 and 3, note that the assumptions on the key/query projection matrices seem more restrictive to me, and require appropriate discussion." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper offers a new theoretical perspective on Transformer architecture, particularly in characterizing the limitations of attention ranks, which is insightful for understanding the underlying mechanics of model capacity.\n\n2. The two phenomena in Transformer attention ranks: (1) an upper bound on attention rank, referred to as the \"low-rank barrier,\" and (2) the \"model-reduction effect,\" which denotes the diminishing returns on attention rank when increasing head dimensions beyond a certain threshold, are quite interesting and intriguing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript investigates the relationship between head dimensions and the corresponding attention score matrix ranks. The authors identify two phenomena in Transformer attention ranks: (1) an upper bound on attention rank, referred to as the \"low-rank barrier,\" and (2) the \"model-reduction effect,\" which denotes the diminishing returns on attention rank when increasing head dimensions beyond a certain threshold. Experiments are provided to validate these findings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of motivation: First of all, why should one care about the rank of the attention matrix? 
While it is interesting to note that the attention matrices display the low-rank barrier and model-reduction effects, it is unclear how these findings directly impact the design or usage of Transformer models in practical applications. The study would benefit from a more explicit motivation linking these theoretical insights to specific challenges in machine learning or computational limitations. In particular, attention ranks do not seem to have a clear relationship with model performance or expressive power. Have you identified whether the low-rank barrier correlates with any performance metrics? Could the model-reduction effect be leveraged to improve model efficiency?\n\n2. Assumptions in theoretical analysis: The theoretical analysis assumes orthonormal input sequences to attention, which may not fully reflect reality. For example, there is other evidence in the literature suggesting that contextualized token embeddings tend to be anisotropic [Ethayarajh, 2019]. While the authors justify this orthogonal assumption by citing Tian et al. 2024, further discussion on the applicability of the theoretical results in varied real-world scenarios would enhance their robustness. \n\n3. Limited exploration of practical applications: While the theoretical findings are interesting, the work could benefit from a more explicit discussion of how these insights translate into practice. For example, I would be interested to see if the findings on the model-reduction effect could lead to model compression techniques without significant performance loss.\n\n\n\nKawin Ethayarajh. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In EMNLP, 2019."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What would the effect of the rank in Theorem 1 be when having data that is almost orthogonal?\n\n- Does the rank saturation phenomenon also happen in real datasets when the input dimension n varies?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The phenomenon that is presented in the paper is interesting and sheds new light on the expressive power of attention matrices. The paper provides theoretical results and complementing empirical validation under different setting. The technical parts of the paper are clearly written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the limitations of the rank of attention matrices, with a goal of gaining a better understanding of the expressive power of transformers. First, the paper provides experiments on randomly generated data from different distributions and show empirically a rank saturation at around 0.63n (n being the input dimension). Next, it is proved theoretically that transformers at initialization also exhibit this rank saturation. 
Finally, experiments are given on the CIFAR-10/100 and SVHN datasets showing a rank saturation phenomenon." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have some concerns with the presentation of the paper and the theoretical and empirical results:\n\n1) The results of Sections 3 and 4 discuss transformers with random data, random weights, or both. It is difficult to generalize from that to general transformers trained on real data. In general, I believe it is OK to have a theoretical paper with restricted assumptions. However, the presentation of this work, and specifically the abstract and introduction, gives the impression that the low-rank phenomenon is general and not restricted to random settings. If the focus of the paper is random settings (i.e. random data and/or weights) this should be clearly stated. If the message of the paper is more general, then further evidence should be provided as I discuss later.\n\n2) The theoretical result (Theorem 1) is nice, but limited and isn’t enough in itself for a theoretical paper. The biggest limitation is the assumption of the data. Although the authors justify assuming that the samples are almost orthogonal (e.g. drawn from a Gaussian or uniform on a sphere), the assumption is that they are exactly orthogonal. This allows only for n samples, instead of O(2^n) samples. It seems possible to prove this result for almost orthogonal data.\n\n3) The “real-world experiments” in section 5 are done on very small-scale image datasets, CIFAR-10/100 and SVHN. It would be more convincing to do experiments on larger datasets, and specifically text datasets where it is possible to change the embedding dimension, and thus experiment on the effects of changing n." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024on,\ntitle={On the Limitation and Redundancy of Transformers: A Rank Perspective},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0sary0UZn5},\nnote={under review}\n}" }, "abstract": { "value": "Transformers have showcased superior performances across a variety of real-world applications, particularly leading to unparalleled successes of large “foundation” models. \nHowever, since these models are usually trained on web-scale datasets, the overall computation and memory loads are considerably increasing, calling for more *efficient* methods in machine learning. \nIn this work, we step towards this direction by exploring the architectural limitation and redundancy of Transformers via investigating the ranks of attention score matrices. \nOn one hand, extensive experiments are conducted on various model configurations (model dimensions, heads, layers, etc) and data distributions (both synthetic and real-world datasets with varied sequence lengths), uncovering two key properties: \nalthough the attention rank increases with the head dimension $d_h$, as expected, the rank is eventually upper bounded (limitation) and gets saturated (redundancy). We call them the *low-rank barrier* and *model-reduction effect*, respectively. 
\nOn the other hand, we provide rigorous demonstrations for these observations through a fine-grained mathematical analysis, highlighting (i) a consistent theoretical upper bound ($\\approx 0.63n$, $n$: the sequence length) of the attention rank regardless of the head dimension $d_h$, and (ii) a critical position of the rank saturation ($d_h=\\Omega(\\log n)$).\nThese results shed light on the inductive biases and internal dynamics of Transformers, contributing to the theoretical understanding and assessment of the model capacity and efficiency in practical applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Transformers", "self-attention", "low-rank", "redundancy", "model reduction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b08b269a9b0f4c8e3a2d650595ec57dd7a5eef1e.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "On the Limitation and Redundancy of Transformers: A Rank Perspective" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0spR7wDwBh
A grid world agent with favorable inductive biases
main
Active
intrinsic rewards;inductive biases;planning;uncertainty;deep reinforcement learning;reinforcement learning
reinforcement learning
3;5;5;5;8
3;4;4;3;2
2;3;3;3;3
2;3;2;3;3
2;2;3;2;4
5.2
3.2
2.8
2.6
2.6
-0.534522
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- From learning curves is evident that NACE is more sample efficient than all the other tested algorithms. However, I would like to ask why it is not able to reach the optimal policy and which can be the intuition behind this recurrent behavior.\n- Thinking out of the grid world environment, I would like to ask how this method can work and if you see limitations and challenges that have to be considered in more complex problems.\n- Regarding non-deterministic transitions, how can NACE give \"system tolerance\" as stated in line 294?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Novelty: the work brings novelty due to the adoption of a curiosity model based on causal reasoning. \n- Narration: the paper's narration is well-done and sound, and the work is generally well-written.\n- Experiments: the experimental campaign is convincing since it considers several state-of-the-art RL algorithms and exploration frameworks. The evaluation metric regards the sample efficiency of each method, demonstrating NACE's brilliant results.\n- Supplementary materials: the attached zip file containing NACE's codebase runs easily and smoothly." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "NACE (Non-Axiomatic Causal Explorer) is a novel experiential learning agent leveraging causal reasoning and intrinsic reward signals to enable more efficient learning within grid world environments. The authors compare the proposed method against state-of-the-art RL algorithms, demonstrating its benefit in terms of sample efficiency across many different grid world environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Some notations are not very clear.** In particular, the section dedicated to the NACE architecture (section 4.4) leaves some symbols unexplained, such as the observer's sets $M_t^{change}, M_{t}^{observation-mismatched}, M_t^{prediction-mismatched} $, which have been introduced here only in mathematical notation. Still, I would suggest to explain their meaning. Same for the function $f_{exp}$ whose usage and terms composition are not completely clear.\n- Apart from the notation, also **intuitions behind the need for some components of the architecture are not immediately understandable**. I would have rather added an appendix to explain those details more deeply. For example, I would explain the interactions between the different components of the architecture more verbosely, also describing the flow diagram in Figure 1 and the role of each component in natural language, to give an intuition about the maths behind it. Perhaps, a pseudocode of the entire algorithm could come in handy. \n- The main limitation of NACE is due to its application since it is **usable only in deterministic grid world settings**. 
However, the authors highlight possible extensions to more complex problems as future work.\n- **Experimental setups could have been explained in more detail** in the Appendix, by reporting a more extended description of the presented scenario, perhaps with the support of the relative images (bird-view map). Furthermore, the authors could add those scenarios that have not been presented in the main paper, but that can be run in the codebase, such as the *soccer world*. \n- **The hardware employed to run the experiments and the time consumption of the framework** are not provided." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Is the match quotient Q(r,c) defined for cell c being the consequence cell?\n\nNew rules are created “when positive evidence is found for the first time” - but how is the set of precondition equality constraints determined for the new rule? I.e., how does NACE determine which cells are relevant? \n\nWhy is positive evidence only counted for a rule if all of the precondition cells changed values and/or didn’t match the prediction at the last step? Since the precondition is an AND conjunction of many cell values, it is possible that only one might need to change for a rule to be activated.
And why can the positive evidence count still increase even if the rule fails to predict the outcome?\n\nWhy is the predicted reward not the sum, rather than the average, of the reward of each of the N utilized rules? Each rule seems to describe a way to obtain a certain reward, so if multiple rules are satisfied, shouldn’t multiple rewards be obtained?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The sample efficiency results look very good.\n\n In general, the writing quality is high. \n\nThe Observer and Hypothesizer components of NACE, along with the State Match measure of state familiarity, appear to be quite novel. \n\nSuch a method should be quite interpretable - though the authors do not show any of the rules learnt by NACE in the test environments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present NACE, a learning agent which uses strong inductive biases, causal reasoning and a causally-informed intrinsic reward to explore more efficiently in grid-world environments. NACE maintains an internal state consisting of a 2D array corresponding to each cell of the grid world, a 1D array to track non-spatial values such as inventory, as well as a set of rules of the form “(preconditions, action) => consequence” with counts of associated positive and negative evidence. At each step, it updates the 2D array and calculates which observed cells changed and which did not match their predicted values, uses this evidence to update the set of rules, then plans an action sequence to maximize expected return, or, if no positive return trajectory is found, to reach a state with minimum familiarity (average over all cells of how well they match the best fitting rule).
Finally, the best-fitting rules are used to predict the cell values of the next state. They test on a number of minigrid environments and show that NACE reaches good performance in about 1000 steps, while existing DRL methods take around 1e6-1e7 steps to reach similar performance, although the best methods converge to higher average rewards at the end of training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors do not mention or compare to existing methods for efficient structured learning which capture inductive biases, for example [1]. It is hard to evaluate the work’s originality given that the authors did not contextualize it among existing related approaches. \n\nThough NACE heavily relies on an explicit model of the gridworld, the authors also do not compare to any explicitly model-based deep RL algorithms such as [2] or [3].\n\nThe significance of the contribution seems limited. NACE shares a lot of weaknesses with existing methods (it depends heavily on the quality of state representations and would struggle where defining impactful state changes is difficult) while lacking their strengths (adaptability to continuous state spaces or high-dimensional action spaces, theoretical optimality guarantees). It seems limited to very simple rules, and the environments the authors tested on likewise covered a very small number of dynamics: navigating to a goal location with obstacles, and picking up a key to unlock a door to test sequential dependencies. \n\n - The authors did not test the ability to develop rules that capture dependencies across space rather than time, e.g. the need to flip a switch to unlock a set of doors. In fact, because the precondition constraints are defined on cells’ relative positions to the consequence cell, this method would likely do poorly on this dynamic, since this constraint would be best expressed as a condition on a cell specified by its global position (the switch location).
\n\n - The constraints also require the cells to be exactly equal to a certain value, and are limited to cases where all constraints must be satisfied, rather than other conjunctions like Or, which excludes dynamics where values need only be above some threshold or within a set of allowable values (e.g. the Put Near minigrid environment, where the agent must place one object near to another object).\n\n - The environments did not contain any stochasticity or objects that can move independently of the agent, e.g. the Dynamic Obstacles environment. A core component of NACE is observing which cells changed at each step and using that to create and update rules; is this method robust to settings where cells change irrespective of the agent’s action?\n\n\nThe clarity of the paper has room for improvement:\n - The cell notation is inconsistent and confusing: the subscript changes between $c$, $c_r$, $c_{t,x,y}$, $c_t$ without any explanation. Different symbols should be used for cell variables than for cell values, e.g. in the definition $\\bar{c}:=(c_r=c)$. If the precondition constraints are on cells’ relative positions, there should be notation for that in contrast to the global position notation $c_{t,x,y}$. \n\n - K is used for the number of rules and also the number of equality constraints; consider using a different symbol.\n\n - Some aspects of the method were not fully explained; see the Questions section.\n\n - Should consider using a different notation for the Match Quotient, since Q is usually used for the Q value function in RL. \n\n - Small grammar errors throughout the paper. E.g. “Such [an] approach” on line 154, quotation marks are flipped on line 163.\n\n[1] Tsividis, Pedro A., et al. \"Human-level reinforcement learning through theory-based modeling, exploration, and planning.\" arXiv preprint arXiv:2107.12544 (2021).\n\n[2] Hafner, Danijar, et al.
\"Mastering diverse domains through world models.\" arXiv preprint arXiv:2301.04104 (2023).\n\n[3] Sekar, Ramanan, et al. \"Planning to explore via self-supervised world models.\" International conference on machine learning. PMLR, 2020." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does this compare to RMax? It seems to me that it has a similar flavor, in which we observe transitions and the agent explores such transitions until sure.\n2. The formal definition of a cell is missing. I supposed the cell is the value of the 3rd dimension of the state definition.\n3. Is there any value estimation happening? If so, how are you estimating the value function?\n\nMinor comments\n- Planner. Lines 311-314. Unclear wording.\n- Overloading c(r) I think (line 288)\n- Fix notations (use \\citep)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is motivated by the importance of the inductive biases they propose to grid world environments. Thus, the authors proposed to study these by incorporating them all in their agent design. Finally, showing that these biases have a huge effect on sample efficiency.\n2. 
The paper is mostly well written with some gaps in notation that I had a hard time following (see Questions).\n3. The agent design seems to be novel in the way they instantiate the different biases based on predicate rules." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors designed NACE (Non-Axiomatic Causal Explorer), a learning agent that incorporates a set of inductive biases that the authors consider to be important for an acting agent. These include causal relations, temporal locality, spatial equivariance, state tracking, and attentional biases. \nThe design of the agent is based on predicate rules that are proposed by the agent given the observations. The agent then plans to either explore rules (to collect new evidence about the rule) or maximize reward.\nFinally, the authors test this agent in various scenarios of Minigrid and compare it against a wide range of (deep) RL agents. They show that in these particular scenarios, NACE is particularly sample efficient compared to the RL agents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It is clear that NACE beats all the (deep) RL agents. However, given the comprehensive design, it is hard to understand where the benefit comes from. Perhaps ablating the effects of each inductive bias would be a good way to understand its contribution. Moreover, all RL agents considered are used in all experiments, but each one of them incorporates different biases that are incorporated in NACE. Perhaps grouping the RL agents based on the biases would make a clearer point about the importance of each bias.\n2. RL baselines are shown to be less sample efficient. This could be the result of their generality (fewer inductive biases), as claimed. But I’m concerned that in all these cases the problems seem to violate the Markov assumption, putting all these RL agents at a disadvantage.
Is there an explicit handling of partial observability? Are there any RNNs/memory involved?\n3. In the formal presentation of the agent, some notation is overloaded (e.g. c for cells, clauses in a rule, c(r) in line 288) which makes some of the method presentation hard to follow. \n4. Although this is stated at the core of the paper, NACE is specifically designed for the grid world considered. It’s unclear how the results would extrapolate to other types of tasks. Also, I think it would be relevant to compare NACE to RMax, at least to discuss its similarities and differences." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors target a very important and interesting question: How to incorporate inductive bias into Reinforcement Learning and increase data efficiency. Moreover, the method is compared to various other already established algorithms and tested with different examples." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Non-Axiomatic Causal Explorer (NACE), an agent optimized for grid world environments using causality-informed intrinsic rewards and inductive biases, including temporal and spatial modeling, to achieve data-efficient learning. Unlike most standard RL approaches, which require extensive training data, NACE efficiently learns policies in fewer steps by systematically exploring unfamiliar states. Experiments in MiniGrid scenarios show NACE's superior sample efficiency across various environments. The paper suggests that NACE’s principles could extend to more complex domains, promising advancements in data-efficient reinforcement learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unfortunately, the reviewer cannot recommend the paper for publication at ICLR due to the following issues:\n\n- The reviewer notes that while NACE’s systematic exploration of unfamiliar states is highlighted as its primary distinction from other RL methods, the incorporation of additional inductive biases defined in Section 4.1 remains unclear. Could the authors elaborate on how each bias is implemented within NACE’s framework? Additionally, conducting ablation studies on the contribution of each inductive bias would provide valuable insight into their individual impacts on performance.\n\n- In the experimental results, the authors present rewards over time steps. Could the authors clarify how time steps are defined in this context? Specifically, are these time steps equivalent to RL framework iterations, with each time step representing the generation and evaluation of a potential solution?\n\n- The reviewer suggests that comparing computational costs between algorithms would enhance the study's rigor. 
The current comparison lacks detail, as one time step in NACE may involve higher computational complexity than in other algorithms.\n\n- In many of the RL frameworks tested, rewards remain stagnant for extended periods. If the results were examined at a finer scale, would smaller reward changes become visible, or does the mean reward remain consistently at zero?\n\n- After the initial rapid increase in reward, NACE plateaus below the maximum attainable reward across all environments. The reviewer recommends exploring this behavior further and considering modifications to the algorithm that might enhance performance during the latter stages of learning. This could provide insights into whether additional mechanisms could support continued improvement toward optimal rewards." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How were the hyperparameters chosen for the baseline algorithms?\n2. Why is NACE unable to find the optimal policy? What improvements could be made to enable NACE to do so? A case-study on a specific environment would be interesting." 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Existing RL techniques for solving gridworlds are systematically laid out and elaborated on in Section 3, which makes it easy for the reader to contextualize the work.\n- Section 4 introducing NACE is concise and well-described.\n- Section 5 provides compelling results with a comparison to multiple baselines. Figures highlight the salient contributions that the authors attempt to make with NACE: extreme sample efficiency.\n- The overall prose of the paper is extremely clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce NACE, a novel learning agent that utilizes a causality-informed curiosity model to make intelligent hypotheses about causal information in grid world environments. NACE is comprised of 4 components: an observer that updates a \"bird-view\" map of the environment and assesses prediction-observation failures, a hypothesizer that generates new rules, a planner that balances an exploration-exploitation tradeoff for accruing reward and refining hypotheses, and a predictor that models the environment. The authors assess NACE in a variety of environments from the Minigrid library clustered into three relevant groups: stationary environments, dynamic environments, and dynamic environments with sequential dependencies. Although NACE does not always find the optimal policy, its data efficiency is unparalleled by modern DRL algorithms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- A more thorough discussion of the 5 kinds of inductive biases, including examples, would make them easier to grasp.\n- A diagram depicting the states and rule representations described in section 4.3 would be useful. 
Section 4.3 could use more development and examples.\n- An example of a full set of causal rules for a simple environment would be welcomed." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Experiential learning in grid worlds with causally-informed intrinsic reward and inductive biases." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A grid world agent with favorable inductive biases},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0spR7wDwBh},\nnote={under review}\n}" }, "abstract": { "value": "We present a novel experiential learning agent with causally-informed intrinsic reward that is capable of learning sequential and causal dependencies in a robust and data-efficient way within grid world environments. After reflecting on state-of-the-art Deep Reinforcement Learning algorithms, we provide a relevant discussion of common techniques as well as our own systematic comparison within multiple grid world environments. Additionally, we investigate the conditions and mechanisms leading to data-efficient learning and analyze relevant inductive biases that our agent utilizes to effectively learn causal knowledge and to plan for rewarding future states of greatest expected return." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "intrinsic rewards", "inductive biases", "planning", "uncertainty", "deep reinforcement learning", "reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6dd8a68ba1bfbf621348b288885a69b53a2f3edc.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e09f353f90dbd5c26a468eb1f8f162f41e33801a.zip" }, "title": { "value": "A grid world agent with favorable inductive biases" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0sr8bS4S2H
AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant
main
Active
human-computer interactions;multi-agent;multimodal learning
applications to robotics, autonomy, planning
3;3;5;6
3;4;4;4
2;2;1;3
2;2;2;3
3;2;2;4
4.25
3.75
2
2.25
2.75
0.555556
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How is the AgentStore(FT) baseline constructed? Why does it perform worse than AT?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The idea of effectively integrating and dynamically using a range of domain-specific agents is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents AgentStore, which allows integrating and dynamically using a range of domain-specific agents (the differences are mainly the base model, how the model is prompted, and the action/observation space for each agent).\nThey train a model, named MetaAgent, to dynamically select the agents and distribute the tasks given the current context. \nPerformance is verified on OSWorld." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Vague Agent Descriptions: The descriptions of agents in the AgentPool are insufficient to understand what each agent actually is. The only available information seems to be from Table 6, which provides only names for many agents. What distinguishes each agent, such as SheetAgent from SlideAgent? Is it simply their prompts, or are there other differences?\n2. 
Over-engineered, Overfitting to OSWorld: Many agents in Table 6 appear optimized for tasks specific to OSWorld, raising doubts about their general applicability. Evaluating the system against broader benchmarks, like GAIA or SWE-Bench, would strengthen the claim of generalist capabilities.\n3. Scalability Concerns: The claimed scalability of this system is unclear. Will there be contributors who create specialized agents? And can this platform effectively integrate diverse agents? Table 6 shows that 18 of the 20 agents are authored by the team. So it's unclear if this system design can effectively integrate diverse agents found in the wild.\n4. Missing Key Baselines in AgentToken: The study only presents AgentToken training with tunable embedding layers. It would be valuable to compare performance and efficiency when the entire model is tunable to understand the trade-offs better." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weaknesses for points of clarification desired." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper proposes an interesting idea of combining multiple agents to solve complex tasks. 
There is a clearly significant engineering effort that went into creating this work, i.e. to create nearly twenty different agents and documents from scratch. The approach achieves impressive performance on a very challenging benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method for managing and deploying multiple different agents to achieve computer control tasks. The approach collects a group of agents, AgentStore, each with different capabilities and domain specialties. Each agent has an associated document describing the agent.\n\nIn order to deploy the correct agent for a given task, the paper uses “AgentToken”, which is a trained embedding for selecting an appropriate agent to deploy for the task. For more complex tasks, the AgentManager can select up to k tasks. The paper demonstrates SoTA performance on OSWorld. They also release a dataset, based on OSWorld for tasks that require multiple agents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper may overstate the difficulty of agent selection and potentially understates how much the success depends on designing customized agents for the applications specifically in the benchmark. While MetaAgent with AgentToken is presented as a main contribution, the paper does not conclusively demonstrate its superiority over an ICL baseline.\n\nThe reported 49.63% accuracy for ICL with GPT-4o in agent routing seems unusually low. This appears to stem from implementing a simplistic ICL baseline that inefficiently includes entire agent documents in the prompt (as shown in the appendix, though the baseline implementation should be better detailed in the main text). 
A fair comparison would require:\n* Testing ICL with more concise capability descriptions\n* Including few-shot examples\n* Providing concrete examples demonstrating where ICL fails compared to AgentToken\n\nThe system's current implementation raises significant scalability concerns:\n\n* Custom agents and documentation were developed specifically for each app in the OSWorld dataset\n* Scaling requires substantial manual effort for each new application:\n * Collecting demonstrations\n * Implementing new agents\n * Writing documentation\n* The strong performance appears largely attributable to carefully engineered custom agents rather than a scalable automated approach\n* True scalability would require automated agent generation\n\nThe paper lacks sufficient detail on the nearly twenty custom different agents (excluding existing ones e.g., Friday) used in the system. Without these details, it is difficult to assess the effectiveness of the approach. Most concerning is the unclear origin of training demonstrations and their potential overlap with OSWorld test tasks. 
The paper should:\n* Specify the source of demonstrations\n* Detail measures taken to prevent data leakage between training and test sets\n* Discuss how generalization is ensured\n\nThere are spelling and grammar errors.\n* In Figure 1,”SildeAgent specialize…” should be “SlideAgent specializes…”, \n* In Figure 1, “are required to collaborate system-wide” should be “are required to collaborate on system-wide…”\n* In the prompts in the Appendix, “Demostation” should be “Demonstration”\n* In the prompts in the Appendix, “Templete” should be “Template”" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I noticed that the number of tasks in the AppAgent paper is higher than those discussed in your paper. Additionally, the accuracy in your paper is reported in increments of \"20%,\" which makes it less convincing, as I didn't see this in the original paper. Did you select a subset of tasks? Please correct me if I'm wrong.\n\n2. Could you make the figures more clear? Currently, there are too many elements, especially in Figure 2, making the figures look cluttered.\n\n3. For AgentMatch, you mention a \"ground truth set of agents required for successful task completion.\" What if multiple different sets could successfully complete the tasks, making it so there's no single ground truth?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. AgentStore enables easy integration of various specialized agents, similar to an app store, allowing the platform to continuously expand its capabilities. This adaptability makes it suitable for handling a broad range of tasks in evolving operating system environments.\n\n2. The MetaAgent with AgentToken routes tasks to the most suitable agents and can manage collaborative tasks involving multiple agents. This approach significantly enhances task handling by using minimal resources and avoiding frequent model retraining.\n\n3. AgentStore achieves a marked improvement on challenging benchmarks like OSWorld, doubling the success rates of prior systems. This demonstrates its capability to handle complex tasks across different software and application domains effectively." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces AgentStore, a platform designed to integrate and manage a wide variety of digital agents capable of performing specialized tasks on computer systems. This system addresses the limitations of current general-purpose agents, which struggle with complex, open-ended tasks, by using a flexible, scalable approach similar to an app store. AgentStore includes a core MetaAgent that uses a novel AgentToken strategy to dynamically select and manage suitable agents for specific tasks, allowing for collaboration between specialized agents. Experiments show AgentStore's effectiveness on the OSWorld benchmark, significantly outperforming previous systems by more than doubling their success rates on complex tasks. 
This advancement highlights the potential of AgentStore in developing versatile, specialized assistant systems that improve both user experience and task automation across different environments​." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors claim that their methods \"double the performance of previous systems\". However, this comparison is not entirely fair, as their approach employs a significantly larger number of agents and incurs substantially higher memory and costs. The paper does not address these additional costs, nor does it include experiments comparing baselines that utilize multiple agents, which would provide a more accurate comparison with the proposed method. I suggest that the authors test multi-agent baselines that use the same group of agents mentioned in the paper.\n\n2. While the authors describe their AgentStore as a \"generalist\" assistant, the evaluation lacks sufficient breadth. The method could be tested on one additional benchmark such as WebArena or Mind2Web to demonstrate generalizability. Both APPAgent and OSWorld-Multi involve fewer than 100 tasks, which is a relatively small number and could allow for manual tuning of the agents to game the evaluation system.\n\n3. The presentation of the paper lacks rigor. The introduction uses overly fancy language and falls short of the scientific rigor expected, including imprecise terms such as \"stunning results.\" Additionally, in Figure 2, the \"AgentPool\" is illustrated with agents like Sheet Agent, Slide Agent, Web Agent, etc., which are not clearly defined in the paper. Please provide an explanation of what each of these agents is and how they are built in the main text or appendix, or revise the figure to present a more accurate representation.\n\n4. The related work section is not comprehensive, particularly regarding multi-agent systems. 
The authors state that previous works \"use a fixed number of agents with predefined roles\" and that \"their agents are usually homogeneous,\" but this is inaccurate for many studies, such as \"Internet of Agents\" and \"AutoGen\". A review of classical papers in multi-agent systems would also reveal that many incorporate heterogeneous agents, a discussion that the authors have entirely overlooked." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics concern for this paper." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Could you provide more details about the term *hybrid* in Table 1? There are no related explanations in the paper, which makes it unclear for the reviewer to understand the exact meaning of *hybrid* in this context.\n\n2. Is *Hash Manager* a commonly used term in this context? The connection between your paper and *Hash_RC6*, as mentioned, is unclear. Additionally, the statement *\"our method narrows the management scope to a few selected agents, leaving ample context space for detailed documentation of these fixed agents. This design shares similarities with hashing methods\"* is unclear and could benefit from further clarification.\n\nIt would be appreciated if the authors addressed all of the weaknesses and questions." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. I like the scope of this paper. It is necessary to discuss how to scale up by incorporating more evolving agents into one platform.\n\n2. The experiments show the SoTA performances in the OSWorld Benchmark, and the performance is strong compared with other baselines. \n\n3. The figures are interesting. And the claims are straightforward." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "AgentStore is a scalable platform designed to integrate diverse agents to automate operating system tasks dynamically. Through its MetaAgent and AgentToken modules, AgentStore achieves state-of-the-art results on the OSWorld benchmark by enhancing adaptability and task execution efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I strongly suggest that the authors include at least one additional benchmark. Since the current OSWorld benchmark is relatively new, achieving good results on it may not be fully convincing. Most importantly, it seems that there are many similar benchmarks within the same scope, and incorporating several of them would provide a more comprehensive evaluation.\n\n2. Building on the first weakness, it would be helpful if the authors could conduct experiments that explore the generalizability of the model across different benchmarks.\n\n3. Is the main contribution of this paper training a model that orchestrates different agents, and prior to this, do you also introduce agents in the AgentStore? If so, I believe the most relevant baselines would be models that train to select APIs or tools, which can be analogous to selecting agents (as they function similarly). 
It would be beneficial to compare your results with these existing methods.\n\n4. Expanding on the 3, I suggest supplementing the experiments using alternative methods to orchestrate the agents within AgentStore. For example, you could compare against RL-based approaches such as [1] GPTSwarm, which orchestrates agents using graphs, or model-based methods like [2] Toolformer, which selects tools from a trained model, and [3] LangChain's ICL-based tool-calling agent.\n\n5. The time or cost analysis of training and inference is missing and would provide valuable insights.\n\nReferences:\n\n[1] GPTSwarm: Language Agents as Optimizable Graphs.\" ICML 2024\n\n[2] Toolformer: Language models can teach themselves to use tools.\" NeurIPS 2024\n\n[3] LangChain: https://python.langchain.com/v0.1/docs/modules/agents/agent_types/tool_calling/" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024agentstore,\ntitle={AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0sr8bS4S2H},\nnote={under review}\n}" }, "abstract": { "value": "Digital agents capable of automating complex computer tasks have attracted considerable attention due to their immense potential to enhance human-computer interaction. However, existing agent methods reveal deficiencies in their generalization and specialization capabilities, especially in handling open-ended computer tasks in real-world environments. Inspired by the rich functionality of the App store, we present AgentStore, a scalable platform designed to dynamically integrate heterogeneous agents for automating computer tasks. 
AgentStore empowers users to integrate third-party agents, allowing the system to continuously enrich its capabilities and adapt to rapidly evolving operating systems. Additionally, we propose a novel core MetaAgent with the AgentToken strategy to efficiently manage diverse agents and utilize their specialized and generalist abilities for both domain-specific and system-wide tasks. Extensive experiments on challenging benchmarks demonstrate that AgentStore surpasses the limitations of previous systems with narrow capabilities, particularly achieving a significant improvement from 11.21\\% to 23.85\\% on the OSWorld benchmark, more than doubling the previous results. Comprehensive quantitative and qualitative results further demonstrate AgentStore's ability to enhance agent systems in both generalization and specialization, underscoring its potential for developing the specialized generalist computer assistant. All our codes will be made publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "human-computer interactions", "multi-agent", "multimodal learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1f106f354c691895a7fae50be82815dc861a3273.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0tAXMiSufG
BOND: Aligning LLMs with Best-of-N Distillation
main
Active
LLM;Alignment;RLHF;Best-of-N
foundation or frontier models, including LLMs
5;6;6;6;6
4;4;4;3;4
3;4;3;4;3
3;3;3;3;3
2;2;2;3;3
5.8
3.8
3.4
3
2.4
-0.25
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How does J-BOND performance compare to (Amini et al, 2024) and other concurrent works?\n\nThere are three algorithms discussed in the paper, namely BOND, Iterative-BOND and J-BOND. Is it always preferable to use JBOND or do you recommend using each algorithm in particular situations?\n\nWill the code be made publicly available to serve the research community?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper makes multiple contributions, namely theoretical derivation for the Best-of-N distribution and a practical RLHF finetuning algorithm that distills the Best-of-N distribution into a policy which is sample efficient and requires just one single sample at inference time\n\nThe authors are making a lot of engineering design choices in their proposed model, and carefully analyze the role of each component in the performance of the proposed algorithm\n\nTo regularize the model and ensure it is not steering too far from the reference model (the supervised finetuned policy), the authors use a combination of both forward and reverse KL, namely Jeffrey divergence. 
While the forward KL ensures mode covering behavior, the reverse KL is used for mode seeking behavior; their combination results in better aligned policies that combine the advantages of both divergences\n\nApplying BOND recursively (Iterative BOND) improves the sample efficiency of the BOND algorithm and works for very small values of n (2, 4); its reward/KL tradeoff is comparable to the non-iterative BOND while being more sample efficient\n\nThe J-BOND algorithm presents better reward/KL trade-off compared to the REINFORCE algorithm with different values of \\beta and does not require using a specific regularization strength\n\nThe paper is well written, well-motivated, presents theoretically and experimentally sound insights that would benefit the research community" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is focusing on the RLHF alignment problem, in particular on emulating the Best-of-N distribution which is known to perform very well, but is very costly at inference time (for each prompt it requires drawing N candidate generations from the reference model and selecting the one with highest reward according to a reward model). The authors propose the BOND (Best-of-N Distillation) algorithm designed to force the distribution of generations from the finetuned policy to be close to the Best-of-N distribution, requiring the generation of just a single sample (instead of N). To this end, BOND regards the alignment problem as a distribution matching problem and distills the Best-of-N distribution by finetuning the reference policy to imitate the Best-of-N distribution. To stay close to the original reference model, the authors incorporate a KL regularization term that considers both the forward and backward divergence (Jeffrey divergence). In addition, they incorporate Monte-Carlo quantile estimation, and exponential moving anchor, resulting in the J-BOND algorithm. 
The authors conduct experiments on the abstractive summarization task (XSum dataset) and aligning GEMMA using J-BOND." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper combines a lot of distinct ideas already proposed in previous works - it would be good to actually clearly articulate what the novel contribution is. Besides, the comparison with concurrent works is not very clear, in particular the difference with (Amini et al., 2024), WARM, WARP (Rame et al., 2024). \n\nFigure 4 - It would be interesting to see how the performance of Best-of-N compares to the proposed algorithm J-BOND and REINFORCE\n\nAlgorithm 1, line 330 - \\pi_t is not defined\n\nLine 329 - \\pi \\in \\Pi: \\Pi in Algorithm 1 is not defined\n\nLine 456 - “a large previously trained reward model r(.)” - please provide details\n\nLines 481-482 - there are not many details about the hyperparameters of the REINFORCE algorithm \n\nThe authors are conducting experiments on Gemma 2B and 7B models; while the results are convincing, it would be good to see if they hold with other models and tasks other than summarization" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Section 4.2, the authors utilize 32 Monte Carlo samples to estimate the backward and forward KL divergences between the training policy and the reference distribution. Given the high dimensionality of these distributions, this sample size seems insufficient to reliably capture the divergence and may introduce substantial estimation variance. A sensitivity analysis showing how the estimator's variance changes with an increasing number of Monte Carlo samples would strengthen the results. Alternatively, using a larger sample size for these estimates could enhance the reliability of the reported divergences.\n\nWhile BOND’s benefits, such as improved KL/reward trade-offs and dynamic regularization, are discussed in the body of the paper, they are not clearly summarized in the introduction or abstract. A brief overview in these sections would effectively communicate BOND’s main advantages over traditional RLHF approaches, aiding readers in understanding its unique contributions and practical value." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper is well-structured and clearly presents its methodology, with detailed explanations and algorithms that allow readers to follow the progression. From iterative BOND to the addition of KL regularization in Sections 4 and 5, the additional experimental results effectively support these methodological advancements. 
\nBOND is notable for its originality, offering a practical and computationally efficient alternative to traditional RLHF that achieves a superior KL-reward balance without requiring commitment to a specific regularization level. The work is significant in its potential impact on RLHF practices, as it provides a scalable solution for optimizing performance and efficiency while minimizing trade-offs between KL divergence from the reference distribution and reward maximization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Best-of-N Distillation (BOND), a novel alignment tuning algorithm designed to emulate the Best-of-N sampling method in a more computationally efficient manner. BOND aims to achieve the same high-quality output as Best-of-N sampling without the inference-time computational overhead, by aligning model outputs to match the distribution of the Best-of-N candidates. \nIn addition, to ensure stability and scalability, the authors introduce an iterative approximation strategy that operates effectively even with a minimal sample size (e.g., 2 or 4). \nFurther, based on the two loss derivations targeting forward and reverse KL respectively, the authors leverage the Jeffreys divergence to propose J-BOND, an enhanced algorithm incorporating iterative distribution matching with an exponential moving average (EMA) anchor. J-BOND demonstrates effectiveness in maintaining stable training and a superior KL-reward trade-off through experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper relies heavily on the Jeffreys divergence without sufficient comparative analysis against alternative divergence metrics. The mode-covering and mode-seeking behaviors the paper mentions are only observed in low dimensions, e.g., for a multimodal distribution in one dimension.
Including other divergence types, especially in the iterative stages, could offer clearer insights into the unique advantages of Jeffreys divergence. Further, relevant literature on divergence measures in alignment tuning should also be cited to contextualize this choice.\n- Go, Dongyoung, et al. \"Aligning language models with preferences through f-divergence minimization.\" Proceedings of the 40th International Conference on Machine Learning. 2023.\n\nAs the paper discusses the method’s efficiency, the paper would benefit from an explicit comparison of the computational cost saved by BOND relative to traditional Best-of-N sampling, or comparisons with sampling approaches used in RLHF. This would clarify BOND’s potential advantages in real-world applications.\n\nAdditionally, while the paper addresses the challenge of sampling size N through iterative approximation, showing practical advantages like non-saturation compared to non-iterative BOND, the helpful randomness in iterative BOND is introduced solely by approximation noise, which lacks control or a specific direction. This calls into question whether the proposed algorithm genuinely achieves a distilled $\text{Bo}N^M$ distribution. \nThe substantial difference in $r(y)$ between iterative and non-iterative BOND in Figure 3 suggests a potential vulnerability to reward hacking, as discussed in Gao et al.\n- Gao, Leo, John Schulman, and Jacob Hilton. \"Scaling laws for reward model overoptimization.\" International Conference on Machine Learning. PMLR, 2023.\n\n\nThe introduction combines related works and problem setup, which could be structured more effectively. Detailed discussions on RLHF and Best-of-N would be more suitable in a separate related works section or could be incorporated into the problem setup. In the introduction, it would be clearer to emphasize the limitations of existing methods and highlight the advantages of the proposed approach over current methods."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can the authors compare with more fundamental baseline methods, such as BoN or other BoN distillation algorithms?\n\n2. Can the authors supplement additional experiments, including downstream validation and more ablation studies as discussed in Weakness 3?\n\n3. Can the authors prove more clearly the advantages of the BOND algorithm over other Alignment algorithms, in terms of both performance and efficiency, to make the argument more convincing?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Rigorous Theoretical Analysis: This work rigorously analyzes the distribution characteristics under Best-of-N sampling and establishes its connection with standard RLHF, as well as the specific reward value $r_{BOND}$ under this correlation. This provides a reliable theoretical foundation for the work, rather than being based on naive assumptions.\n\n2. Some Degree of Novelty: Although there is some concurrent work, the idea of distilling distributions from Best-of-N is fairly novel and important.\n\n3. Consideration of Practical Efficiency: I appreciate the authors' consideration of the practical efficiency of the algorithm. 
The proposed J-BOND algorithm theoretically has lower sampling complexity, which should increase the efficiency of RLHF." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a distribution matching-based Best-of-N distillation method that simulates the Best-of-N distribution space, while reducing the time overhead of N inferences to just one. Starting from the theoretical distribution of BoN, the authors construct the Iterative BOND algorithm based on Quantile estimation and the choice of Jeffreys Divergence, and further propose the more practically meaningful J-BOND algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of Important Baselines: Given that the main purpose of the paper is to distill Best-of-N sampling, BoN performance should straightforwardly serve as an important baseline to analyze pros and cons in terms of performance and efficiency. Moreover, other concurrent BoN distillation algorithms [1] should also be considered.\n\n2. Lack of Downstream Validation: The main metrics in the paper, such as reward value and KL divergence, cannot be directly equated to the model's performance on downstream tasks. For an RLHF method, it is necessary to conduct experiments on downstream tasks and present more intuitive metrics to demonstrate the model's alignment performance.\n\n3. Insufficient Experimental Setup: The paper lacks exploration of several issues. For instance, BoN sampling heavily depends on the Reward Model, and the influence of different RMs on the BOND algorithm is not investigated. 
Additionally, a more nuanced exploration of Jeffreys Divergence with smoother β variations could be included; and the comparison between J-BOND and standard Iterative BOND lacks investigation.\n\n[1] Variational Best-of-N Alignment, https://arxiv.org/pdf/2407.06057" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Would the curves in Figure 4(a) converge to similar spots if you run the algorithms long enough?\n\nDo the authors expect that the conclusions would be similar for much larger models?\n\nWriting: \n- pi(x,y) makes it look like a joint distribution; do the authors mean pi(y|x)?\n- 3e-6 means 0.000003; 3e^(-6) means 0.007. Which one are you referring to?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The motivation of distillation is great and this direction should be deeply explored. The paper is written carefully. The algorithm is clearly written. I checked the derivations and they seem correct so far.\n\nREINFORCE is a natural baseline and the authors have attempted multiple beta values for the baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is essentially distilling inference-time best-of-N sampling into training time. 
Specifically, the authors propose to train the policy to match the best-of-N distribution (which is an analytical form derived by the authors). The distribution matching is done through minimizing a combination of forward and backward KL. The behavior of J-BOND appears better for reward vs. num steps as well as KL(policy || ref policy) vs. num steps, compared to REINFORCE with various beta values." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Have the authors tried reward shaping techniques for RLHF baseline, e.g., making the reward values more peaky – either very large or very small? \n\nI’d appreciate a more comprehensive discussion on how much the authors expect this technique to benefit downstream tasks.\n\nIt’ll be great if the authors can include more discussion on whether the baseline B is important or not in the algorithm.\n\nWhat ablations on beta and gamma in Algorithms 2 (balancing among forward KL, backward KL, additional regularization) would likely benefit downstream tasks more? It's still unclear to me why we want to put an equal/large weight on backward KL. More motivation would be nice." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "#### Question 1:\nLine 298, why is the proposed approach \"equivalently to distilling a Best-of-$N^M$ of the initial distribution $\\pi_{ref}$\"? 
Is this a qualitative or quantitative statement?\n#### Question 2:\nFigure 4 seems to show a linear relationship between # steps and r(y) for iterative BOND.\nFor Figure 5, there seems to be a different kind of trend between $r(y)$ and # steps or $KL(\\pi||\\pi_{ref})$.\nFigures 6 and 7 present a log-like trend between the KL and reward, similar to the REINFORCE algorithm.\nA similar trend is also observed in http://arxiv.org/abs/2406.00832 and http://arxiv.org/abs/2407.06057.\nAs discussed in http://arxiv.org/abs/2204.05862 and in my experience, there can be an approximately linear relationship between $\\sqrt{D_{KL}}$ and $r$ for BoN sampling. \nIt would be interesting if the authors could provide some empirical or theoretical intuition about such a relationship.\nDoes this indicate that even though the method performs a BoN distribution match, it still behaves more like a general policy-gradient RL algorithm (which may try to match another distribution)?\n#### Question 3:\nWhy is there an approximately linear relationship between # steps and KL, as presented in Figures 5 and 6, for the BOND algorithm?\nFor the REINFORCE algorithm, the trend seems quite different.\n#### Question 4:\nFigure 4 presents a consistent trend between KL and $\\log p_{\\le}(y)$ across varying $N$ and algorithms, which is quite interesting.\nIs there any explanation for this phenomenon?\n#### Question 5:\nIs there any analysis or comparison of the reward overoptimization of this algorithm?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. This work implements the LLM alignment problem through Best-of-N distillation, which can be a sound direction for the development of algorithms in this field.\n2. This work formulates and discusses the BoN distribution and its relationship with the general RLHF target. \n3. 
This work proposes to utilize Jeffreys divergence to balance the mode-covering and mode-seeking behavior introduced by forward- and backward-KL optimization.\n4. This work further integrates their method with EMA techniques and proposes an iterative version." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article models the alignment of LLMs as a distribution-matching problem. Specifically, it considers the distribution induced by Best-of-N sampling as the target distribution and optimizes the Jeffreys divergence with respect to it for balancing the characteristics of forward & backward KL-based optimization. Additionally, this work derives an iterative form, updating the reference model during training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My biggest concern is the absence of direct comparison with closely related works http://arxiv.org/abs/2407.06057, http://arxiv.org/abs/2406.00832. It would be much more convincing if there were a clear comparison like Figure 1 in http://arxiv.org/abs/2407.06057, rather than a simple baseline REINFORCE presented in Figure 7 in this work.\n2. More discussion of why the BoN distribution is chosen as the target could be included.\n3. Other possible additional analyses are discussed in the Questions." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel RLHF approach to align LLMs via online distillation of Best-of-N sampling policies."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024bond,\ntitle={{BOND}: Aligning {LLM}s with Best-of-N Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0tAXMiSufG},\nnote={under review}\n}" }, "abstract": { "value": "Reinforcement learning from human feedback (RLHF) is a key driver of quality and safety in state-of-the-art large language models.\nYet, a surprisingly simple and strong inference-time strategy is Best-of-N sampling that selects the best generation among N candidates.\nIn this paper, we propose Best-of-N Distillation (BOND), a novel RLHF algorithm that seeks to emulate Best-of-N but without its significant computational overhead at inference time. Specifically, BOND is a distribution matching algorithm that forces the distribution of generations from the policy to get closer to the Best-of-N distribution. We use the Jeffreys divergence (a linear combination of forward and backward KL) to balance between mode-covering and mode-seeking behavior, and derive an iterative formulation that utilizes a moving anchor for efficiency. We demonstrate the effectiveness of our approach and several design choices through experiments on abstractive summarization and Gemma models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM", "Alignment", "RLHF", "Best-of-N" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/477383ef64e54c4e14059fd70569c7fedf6c8c06.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "BOND: Aligning LLMs with Best-of-N Distillation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0tAn34IkXI
Flat Posterior Does Matter For Bayesian Model Averaging
main
Active
Bayesian Neural Network;Bayesian Deep Learning;Flatness-aware Optimization;Bayesian Transfer Learning
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
3;3;5;6
3;4;3;4
3;2;2;3
1;2;2;3
3;3;3;3
4.25
3.5
2.5
2
3
0.19245
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How to measure the flatness of a BNN, as it involves a set of NNs? Do the authors average the flatness of all samples in Fig. 1 and 2?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper targets the generalization of BNNs, which is an important problem.\n- The paper provides empirical and theoretical analysis to support the need for flatness in BNNs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a sharpness-aware Bayesian neural network (BNNs) to ensure the found modes are flat. A new Bayesian transfer learning scheme is also developed to leverage the pre-trained deep neural networks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The overall goal of the paper is vague. As far as I understand, the proposed method increases the flatness of the variational parameter \\theta, not the model parameter w. However, the literature shows flatter w leads to better generalization. The seems to be a gap. The meaning of \"flatness in BNNs\" is not very clear in the paper.\n- Previous works have demonstrated the benefits of including flatness in BNNs, e.g. 
Möllenhoff & Khan, 2022, Nguyen et al., 2023, Li & Zhang, 2023. The additional insights offered by Sec 3 are unclear.\n- It is unclear how Theorem 1 indicates that BNN needs flatness. This theorem basically shows the relationship between the flatness of the weight-averaged model and the flatness of individual models. It does not explain the benefits of ensuring flatness in BNNs.\n- Variational inference (VI) approximates the posterior through an optimization problem. We can naively apply SAM to the objective of VI. The difference and benefit of the proposed objectives in Eq.4 and 5 over this naive version are unclear.\n- In the experiment section, the proposed method is applied to VI, SWAG, and MCMC. However, it is unclear how the method is compatible with SWAG and MCMC. \n- The proposed Bayesian transfer learning scheme is a straightforward application to transfer learning. The novelty of this part is low." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- SAM training is already more expensive than vanilla gradient descent, adding VI on top (now you need to sample to estimate ELBO), won't it be too expensive? This begs another question, how much improvement can be gained by using SA-BMA when compared with LA on SAM solution? I see LA implemented in your code but there are no results of LA in the paper. 
I would be interested to see an experiment comparing LA on the SGD solution, LA on the SAM solution, and SA-BMA.\n\n- How do you ensure VI has been trained successfully? I see in multiple cases VI ends up with higher NLL and ECE than MAP, which seems strange." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The connection between the proposed objective and existing works is well-analyzed\n\n- Well written and easy to follow" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Flat optima have been shown to connect with good generalization in point estimation for neural networks. The authors study flatness for Bayesian neural networks and propose a flatness-seeking optimizer, Sharpness-Aware Bayesian Model Averaging (SA-BMA), for VI. Specifically, the authors first show empirically that (1) BNN's posterior tends to land in a sharper loss region; (2) when making a prediction with MC estimation, using flat samples will result in better performance. Based on the empirical finding, the authors propose a new learning objective for VI, which also accounts for flatness as well as a Bayesian Transfer Learning scheme for efficient computation for large models. Experiment results have shown SA-BMA can improve generalization in few-shot classification and distribution shift." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In Bayesian deep learning, in the end we have a distribution; here the authors use the averaged Hessian eigenvalues of different sampled weights as the measurement of flatness. I'm not fully convinced this is a good measurement of flatness over a distribution. \n\n- The proposed objective is expensive to train."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. I notice that this is a resubmission paper. Compared with the last version, more analysis on the flatness of the loss landscape and the relations between flatness and general performances are included. I respect the authors' efforts in studying the geometry of loss landscape.\n\n2. The empirical analysis using Hessian eigenvalues clearly demonstrates why finding flat modes is important to the overall performance.\n\n3. Comprehensive experiments are conducted to demonstrate the effectiveness of SA-BMA." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a flatness-aware optimizer for Bayesian model averaging, which applies to a variety of Bayesian inference algorithms. The introduced optimizer SA-BMA is generalized from the SAM optimizer. This paper also has a clear empirical proof of why flatness is important and why existing Bayesian inference methods ignore flatness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The experiments on real-world datasets are limited to CIFAR10/100. I expect to see results on large-scale datasets like ImageNet to show the scalability of SA-BMA.\n\n2. Figure 5 may lead to a misunderstanding that PTL and SA-BMA change the loss surface (in the first 2 figures)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How many posterior samples have you used for Figure 2a?\n- How large are your Bayesian ensembles for the final experiments, i.e. how many times do you sample from your posterior to approximate the PPD integral?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper does propose an interesting combination of lines of work in deep learning, though in my opinion it missed out on evaluating whether this combination is useful. I do see the plots in Figure 2 as a negative result in this sense, and think that based on this one could have written an interesting paper on flatness-seeking methods approximating Bayesian averages." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to use sharpness-aware minimization for Bayesian models. It proposes a framework that can be used with 3 different posterior approximation methods."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I do not think there is a need for flatness-aware optimization in Bayesian models. That is because Bayesian models are building an average over all models with high likelihood (or posterior likelihood for informative priors). Taking this average will naturally lead to including a lot of models from flat optima, as they are simply wider and thus have more mass (in the prior). This in my opinion is underlined by the experiments in Figure 2b-c, where we can see that by simply using a larger Ensemble, thus approximating the true PPD more closely, we get the same effect as when choosing models that lie in wide optima. I hope I did understand this experiment right and the 30 models that you speak about are 30 samples from your posterior.\n - One more point on this: I can imagine that this argument does not work well for the particular problems that VI has, as it will always try to find a simple Gaussian distribution that represents the posterior.\n- The flatness difference in Experiment 5.1 looks marginal at mere 2x radius of the optimum and a worse likelihood. This toy experiment would be more interesting if both optima had the same likelihood, but one being *much* more narrow.\n- Your language is missing a lot of articles, but generally feels more like a draft than a paper. I guess you are not a native English speaker, I am neither, so this does not affect the score much for me, but I can recommend you to use LLMs/DeepL to improve your English writing.\n- The accuracies seen in the experiments seem to be far away from the state of the art for the models, see e.g. this torch tutorial https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/ \n- SAM is known to increase the time each step takes, this algorithm should have the same impact. A comparison of performance over time is missing, though." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024flat,\ntitle={Flat Posterior Does Matter For Bayesian Model Averaging},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0tAn34IkXI},\nnote={under review}\n}" }, "abstract": { "value": "Bayesian neural network (BNN) approximates the posterior distribution of model parameters and utilizes the posterior for prediction via Bayesian Model Averaging (BMA). The quality of the posterior approximation is critical for achieving accurate and robust predictions. It is known that flatness in the loss landscape is strongly associated with generalization performance, and it necessitates consideration to improve the quality of the posterior approximation. In this work, we empirically demonstrate that BNNs often struggle to capture the flatness. Moreover, we provide both experimental and theoretical evidence showing that BMA can be ineffective without ensuring flatness. To address this, we propose Sharpness-Aware Bayesian Model Averaging (SA-BMA), a novel optimizer that seeks flat posteriors by calculating divergence in the parameter space. SA-BMA aligns with the intrinsic nature of BNN and the generalized version of existing sharpness-aware optimizers for DNN. In addition, we suggest a Bayesian Transfer Learning scheme to efficiently leverage pre-trained DNN. We validate the efficacy of SA-BMA in enhancing generalization performance in few-shot classification and distribution shift by ensuring flat posterior." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Bayesian Neural Network", "Bayesian Deep Learning", "Flatness-aware Optimization", "Bayesian Transfer Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2e7853f768e7293e411b72ed09c67dfb34974d5c.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Flat Posterior Does Matter For Bayesian Model Averaging" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0tIiMNNmdm
Limitations of measure-first protocols in quantum machine learning
main
Active
quantum machine learning;machine learning;learning separation
learning theory
3;6;6
3;5;4
1;4;3
1;4;3
2;3;3
5
4
2.666667
2.666667
2.666667
0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* How can the following perceived weakness from above be remedied:\n\"This is a difference between data and not really about a separation between interesting algorithmic classes.\"\n* Is my understanding correct that the essence of the separation between the two classes boils down to the inability to compress the unseen quantum data during deployment time? What would happen if the measurement restriction during training were modified to allow the measurements to depend on the labels? Or if, keeping them blind to the labels, joint measurements across all data-points were allowed?\n* Line 289: Why is the \"more general\" case not studied, when it is arguably much more interesting than the restrictions under consideration?\n* Theorem 5: What are the assumptions of Yao's principle? Are they satisfied?\n* Line 810: How can Bob output 1 when y is in R_f(x) without knowledge of *f*?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Originality: The authors venture to create a quantum machine learning task with average-case complexity separations between measurement-first and fully-quantum quantum algorithms.\nQuality: The authors seem to know the techniques of the sub-field well (including those related to POVMs, HM problem, Yao's principle, QPRFs, ...), as well as other related work (including classical shadows, shadow tomography, ...)\nClarity: The authors do try to provide both diagrammatic, high-level and low-level explanations.\nSignificance: The authors do aim to produce a significant result by hoping to shed light on the role of information loss due to measurement when performing quantum machine learning in the average-case setting with realistic training data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors design a quantum machine learning task that exhibits a sample and time complexity separation between two classes of quantum algorithms acting on input training data consisting of quantum data and classical labels with the goal of producing a classical description of a quantum circuit that can then be deployed on unseen quantum data to produce samples from a distribution related to the training data. The main difference between the two classes of quantum algorithms is that the more powerful class is allowed to process the input quantum-classical training data in \"one go\", while the weaker class is hamstrung to first turn the quantum data into classical data, one training data-point at a time, without looking at the classical labels before being allowed to process this now-classical data together with the classical labels through a quantum algorithm to produce a classical description of the deployment quantum circuit. 
At deployment time, the weaker class is then further forced to measure the unseen quantum data before feeding it into the quantum circuit produced at the end of the training phase.\n\nUnsurprisingly, there is a complexity separation between the fully quantum algorithm and the hamstrung quantum algorithm which seems to essentially be a restatement of the intractability of the Hidden Matching Communication problem. Finally, the authors claim to extend the complexity separation to efficiently preparable training data sets by using pseudo-random functions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unfortunately, the high aims and strong potential of the \"Strengths\" section, seem not to be met. I believe this is mainly due to the overly restrictive setting of the weakened class of \"measurement-first\" quantum algorithms. The most confusing and vague constraint is: \"that the measurement strategy cannot depend on the specific target concept\". Looking at Definition 5 and 7 for clarity this seems to be the following constraint on the hamstrung class:\n\nTraining:\nQuantum data + classical labels -----[Point-wise measurement]------> M(Quantum data) + classical labels = Classical data + classical labels ----[Quantum Algorithm]----> Classical description of Quantum circuit\n\nDeployment:\nQuantum data -----[Point-wise measurement]------> M(Quantum data) = Classical data ----[Quantum Algorithm given by circuit above]----> Sample\n\nTherefore it seems that the main difference between the two classes is that the weaker one actually only operates on classical data during both training and deployment while the stronger one is allowed to operate on quantum data. 
This is a difference between data capacity and not really about a separation between interesting algorithmic classes.\n\nSecondly, there do seem to be quite a few unclear statements or mistakes:\n\n* Lines 157-159 are unclear\n* Lines 205-210 claim a significant difference between this work and Jerbi et al.'s, but given the above characterization it is not clear that this is the case.\n* Line 229 introduces f'; why the prime?\n* \\pi_x is nowhere clearly connected to \\Lambda_x\n* z and x are used inconsistently/confusingly in multiple places\n* Eq (3) and Eq (5): the training data should not include x, perhaps g_i(x) is somehow meant (however see below)?\n* Eq (3) and Eq (5): (y, b) ~ \\pi_x and not (x,y,b)\n* Line 285: perhaps \\Tilde{\\pi}_x should be \\Tilde{\\pi}_{T_x} since the x dependence is surely only through T_x\n* Eq (8): how can x appear as an unbound function variable on the LHS and as a bound variable on the RHS?\n* Line 370: the use of Aaronson et al. is the very heart of the whole paper, yet this is not reproduced anywhere in the paper\n* Line 434: please remind me what non-uniform means in this context.\n* Eq (9): the notation of dot and unfilled function brackets is not defined/explained.\n* Lines 475-480: are unclear, especially given that there are supposed to be different random functions for each data-point.\n* Eq (11): if g_i(x) is of size n+1, doesn't this change the learning problem in unclear ways, not least because the dimensions no longer match (2n + 1 != 2n + 2)\n* Eq (29): \\pi_x(f) -> \\pi_x(f^{(k)})\n* Line 787: is query access -> has query access" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": {
"value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Given the theoretical claims regarding noise robustness for the separations established in this work, could the authors add a numerical experiment showcasing the separation under noisy settings? For example, it would be beneficial to simulate your protocols with realistic noise models for near-term quantum devices. It would also be useful to see how the separation between measure-first and fully-quantum protocols changes as noise increases.\n\nThe main result (Theorem 1) is stated in terms of an existence statement. Could the authors provide a more concrete description regarding the task that leads to the separation between \"measurement-first\" vs \"fully quantum\" methods?\n\nDo the authors consider the separation to hold in many natural learning problems that people are actively working on? Could the authors comment on whether the community should consider most problems to be addressable using measurement-first protocols? If not, could the authors comment on the families of problems for which one should expect fully-quantum protocols to be much more powerful than measurement-first protocols? Providing a few concrete examples of widely-studied quantum machine learning problems where they expect their results might be relevant would also be useful in this context."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper addresses a fundamental question about the nature of quantum advantages in machine learning and provides concrete evidence that some quantum tasks inherently require maintaining quantum states throughout processing.\n\n- The proofs are rigorous and combine multiple techniques from quantum computing, cryptography, and communication complexity.\n\n- Unlike previous work, the separation holds even for average-case performance (not just worst-case), efficiently preparable quantum states (not just arbitrary states), and scenarios with experimental noise and errors." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates a fundamental question in quantum machine learning (QML): whether quantum advantages can persist when quantum data is first measured and converted to classical information before processing. The authors establish a formal separation between two types of QML protocols:\n\n1. \"Measure-first\" protocols: Those that first measure quantum states using a fixed measurement strategy (though possibly complex and coherent) before processing\n\n2. \"Fully-quantum\" protocols: Those that can process quantum states coherently and maintain quantum data throughout the entire learning & deployment process\n\nThe main contribution is proving that there exists a learning task involving quantum measurements where measure-first protocols provably require exponentially more data than fully-quantum protocols, even when restricted to efficiently preparable quantum states. This is shown by constructing a specific learning problem based on the Hidden Matching problem and quantum pseudorandom states." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the paper discusses robustness to noise theoretically, no numerical simulations or experimental results are provided to demonstrate the practicality of the proposed protocols.\n\n- While the separation is proven rigorously, it relies on a somewhat artificial learning task constructed specifically to demonstrate the separation. It would be valuable to understand if similar separations exist for more natural learning problems.\n\n- The paper focuses on a specific type of quantum learning problem involving Hidden Matching. It remains unclear how broadly these limitations of measure-first protocols apply to other quantum learning scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It would be helpful to provide the explicit construction of U_x (also the learned measurement operator).\n2. The problem is quite artificial (although Y. Liu et al. Nat. Phys. 2021 is still an artificial construction); it would be perfect if the phase states could be substituted with other, more practical quantum states (such as the ground state or thermal state of a physical system)."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. This paper clearly demonstrates its main results and the comparison to related work.\n2. The authors rigorously proved the separation of sample and running time by implementing a polynomial reduction to the Hidden Matching problem, which provides new insights in the field of quantum machine learning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In the paper entitled \"Limitations of Measure-First Protocols in Quantum Machine Learning,\" the authors constructed a learning task based on quantum phase states, which provides a separation between the full quantum protocol and classical-shadow-based protocols in terms of sample complexity. From my understanding, the construction is fundamentally based on the fact that $FBQP/qpoly\neq FBQP/poly$, while the authors claimed that they successfully achieved the worst-to-average case reduction. In summary, this work theoretically studied the differences between two popular quantum machine learning paradigms, which advances our understanding of quantum models to some extent." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Here are some weaknesses and questions on this paper.\n1. Is the proved separation just an instance of $FBQP/qpoly \\neq FBQP/poly$ [S. Aaronson et al., 2023], where $FBQP/poly$ represents classical shadow-based algorithms (including measuring multiple states via Bell measurements)?\n2. The worst-to-average reduction seems very natural; if my understanding is correct: the classical shadow methods may fail for every $x$ (as shown in Eq.~7), due to the fact that $FBQP/qpoly \\neq FBQP/poly$.\n3. The main contribution of this paper relies on Theorem 2.
I know the proof idea is there, but the proof details in Appendix B are not very easy to follow." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We demonstrate a learning separation in a supervised learning task between quantum models that can coherently process quantum inputs and those restricted to classical representations of them." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024limitations,\ntitle={Limitations of measure-first protocols in quantum machine learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0tIiMNNmdm},\nnote={under review}\n}" }, "abstract": { "value": "In recent times, there have been major developments in two distinct yet connected domains of quantum information. On the one hand, substantial progress has been made in so-called randomized measurement protocols. Here, a number of properties of unknown quantum states can be deduced from surprisingly few measurement outcomes, using schemes such as classical shadows. On the other hand, significant progress has been made in quantum machine learning. For example, exponential advantages have been proven when the data consists of quantum states and quantum algorithms can coherently measure multiple copies of input states. In this work, we aim to understand the implications and limitations of combining randomized measurement protocols with quantum machine learning, although the implications are broader. Specifically, we investigate quantum machine learning algorithms that, when dealing with quantum data, can either process it entirely using quantum methods or measure the input data through a fixed measurement scheme and utilize the resulting classical information. We prove limitations for quantum machine learning algorithms that use fixed measurement schemes on the input quantum states.\nOur results have several implications.
From the perspective of randomized measurement procedures, we show limitations of measure-first protocols in the average case, improving on the state-of-the-art which only focuses on worst-case scenarios. Additionally, previous lower bounds were only known for physically unrealizable states. We improve upon this by employing quantum pseudorandom functions to prove that a learning separation also exists when dealing with physically realizable states, which may be encountered in experiments. From a machine learning perspective, our results are crucial for defining a physically meaningful task that shows fully quantum machine learning processing is not only more efficient but also necessary for solving certain problems. The tasks at hand are also realistic, as the algorithms and proven separations hold when working with efficiently preparable states and remain robust in the presence of measurement and preparation errors." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "quantum machine learning", "machine learning", "learning separation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/89945069284122c625bc159c7a6c3a84c5a587a5.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Limitations of measure-first protocols in quantum machine learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0tMcsHsHgQ
Towards Undistillable Models by Minimizing Conditional Mutual Information
main
Active
Nasty teacher;Knowledge distillation;Intellectual property protection
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5
3;4;3;4
3;2;3;3
2;2;3;2
2;1;3;2
4
3.5
2.75
2.25
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper's goal is to prevent the misuse of models and serves as a partial privacy protection technique, which is significant for the reliable use of AI models. \n\n2. The paper provides both theoretical and empirical evidence to demonstrate the benefits of the proposed method in enhancing the undistillability of models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a defense method against knowledge distillation (KD) attacks, where the goal is to avoid the undesired usage of the outputs of deep neural networks (DNNs) by making them undistillable. The authors propose a training method that aims to minimize the conditional mutual information (CMI) across all temperature-scaled clusters, resulting in a model that cannot be effectively distilled by existing KD methods. The CMIM model is shown to be undistillable through extensive experiments on CIFAR-100, TinyImageNet, and ImageNet datasets, while outperforming state-of-the-art methods and even improving upon the conventional cross-entropy (CE) loss in terms of prediction accuracy."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing quality of the paper could be enhanced.\n\n2. From a methodological perspective, the overall contribution of the paper is somewhat limited. The paper utilizes an existing metric, CMI, to measure the compactness of model outputs and aims to enhance model undistillability by maximizing this compactness metric. This approach appears too trivial and straightforward. It is not clear how this method fundamentally differs from directly employing a maximum entropy term or label smoothing technique to increase output concentration. Moreover, in the field of machine learning, particularly in computer vision, numerous loss functions have been studied to enhance model output compactness, such as the Large-Margin-Softmax-based methods.\n\n3. The paper's finding that the teacher model trained with the proposed method achieves higher accuracy is not surprising. The mechanism by which the proposed method operates is similar to that of label smoothing, which is known to enhance accuracy. The authors might refer to the paper \"When does label smoothing help\" to understand that label smoothing can also produce the feature compression effect as shown in Figure 2 of this manuscript. The enhancement in accuracy due to the proposed method is expected and aligns with the effects of label smoothing, which is not a novel discovery in the field." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the Weaknesses. Other questions:\nThe proposed method seems to be limited to a single-label classification setting. Can the method potentially extend to regression or multi-label classification, where outputs are continuous or multiple classes must be predicted simultaneously? Can the method be adapted to protect the IP of state-of-the-art models, e.g., LLMs, CLIP, and Diffusion models, which require it most?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea is intuitive and presented with sufficient detail.\n\n2. The benchmark defense and knowledge distillation (attack) methods are exhaustive in the experiments.\n\n3. The paper is well-organized and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To protect the intellectual property (IP) of pretrained DNNs (teachers), the authors propose a method to prevent student models from using knowledge distillation (KD) to mimic the teacher models’ behavior. Specifically, they focus on a black-box scenario where the student model can only access the inputs and output logits of the teacher model. The proposed conditional mutual information minimization (CMIM) method constrains the output probability distributions of the teacher model so that each cluster associated with a label is highly concentrated (highly peaked around a single label). Intuitively, this eliminates the inter-class information from the teacher model's output logits such that the student model receives no more information than the labels themselves, thereby protecting the IP of the teacher model."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The discussions of the proposed method’s limitations are missing. The proposed method collapses the logits so that each class’s output is highly concentrated (as shown in Fig.2), the teacher model might become overly confident in its predictions. This can lead to poor calibration and deteriorate generalization capability on out-of-distribution (OoD) data. Therefore, more settings and evaluations on the protected teacher model’s performance beyond prediction accuracy are necessary.\n\n2. The proposed method involves multiple hyperparameters, e.g., such as the number of power samples $N$ and the range $[0,\\beta]$, but the ablation studies on hyperparameter sensitivity are missing. For example, have you assessed how these hyperparameters impact the model’s undistillability and accuracy? If some parameters are particularly influential, could you highlight those findings?\n\n3. The proposed method introduces computation overhead, but the comparisons of computational costs are missing. The method introduces extra computation for minimizing CMI and performing multiple transformations, what is the relative computational cost of training a CMIM-protected model compared to a standard model and other protection methods? As we can see, the experiments are primarily conducted on small datasets, with very limited testing on the larger ImageNet dataset." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Please elaborate on the computational requirements of doing the proposed alternating optimization, where the optimization of the Q’s is done over multiple minibatches (I assume for each alternating step)?\n- Proving a negative result is challenging empirically; the inability to surpass LS with tested KD methods does not necessarily imply that no viable methods exist or that the models trained were optimally configured. Without theoretical proof of undistillability, the results can unfortunately only be seen as the current state, as new (or already existing and untested) methods for KD might render the claims of this paper invalid soon. Please elaborate on why the results should be considered sufficient to prove a negative result.\n- Table 1: A concerning number of results are reported as less than 10, which is either incorrect/faulty reporting or potential issues with collapsed training. If collapsed training runs, this supports the concern above (about negative results), and the authors should investigate and elaborate clearly on this.\n- What happens if $\\alpha = 1$? Setting $\\alpha > 1$ naturally forces the simplex to be more concentrated in the corners, but this post-transform is not applicable to the probabilities a knockoff-student would train on." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The topic of undistillable models is highly relevant, particularly given the growing online prevalence of large closed-source models.\n- The paper is mostly well-written with mostly appropriately supported claims.\n- The authors provide a nice balance between theoretical and empirical results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors tackle the topic of *defending* trained models from getting *stolen* through knowledge distillation. They investigate when the teacher models are *undistillable* by knowledge distillation and introduce the CMIM method to train teachers to concentrate the predicted probability vectors in close clusters to minimize the information available for distillation. They theoretically introduce and support their method and empirically test the procedure as well as other *defence* techniques against multiple *attacking* techniques." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- L040: Missing reference to theoretical paper by Borup and Andersen (2021), “Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation,” NeurIPS.\n- *\"An insight is provided that in order for a DNN to be undistillable, it is desirable for the DNN to possess the trait that each cluster of the DNN’s output probability distributions corresponding to each label is highly concentrated to the extent that all probability distributions within the cluster more or less collapse into one probability distribution close to the one-hot probability vector of that label.”* Although, I am unable to provide a reference, that a concentrated probability distribution is uninformative, and thus desirable to avoid distillation is, to the best of my knowledge, common knowledge in the field, and should not be considered a contribution of this paper.\n- Replacing $\\hat{Y}$ with $\\hat{Y}^\\alpha$ in the MI and CMI is simply replacing a variable in a function; the change itself is not inherently innovative and warrant a notion of \"contribution\". However, the implications and importance of doing so may hold some importance.\n- Section 4.1 appears redundant, as the extension follows naturally by substituting variables (see also comment above).\n- Variance estimates in Appendix J reveal that multiple cases deemed \"undistillable\" in Table 1 do not definitively qualify as such.\n- Table 2 mistakenly labels \"RSP\" as \"RSG\" and \"RSD.\"\n- Omitting CE from Table 1 limits insight into the distillability of the standard training procedure, and it is unclear how much better the proposed method is to the simplest baseline.\n- (Minor) When introducing notation, some notation is used before it is introduced. 
Consider reordering this section so that no notation convention is used before it has been introduced.\n- Equation (3): avoid using $\times$ if it solely represents normal multiplication.\n- L403: \"intensive\" -> \"extensive.\"\n- L520-L524 could be rephrased, as the current phrasing appears redundant and confusing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* The paper seems to rely heavily on Yang et al. (2023). What is the technical novelty of this paper besides including the power-transformed elements?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper introduces a novel objective based on conditional mutual information that includes optimizing over a power-transformed probability distribution.\n* Approximating the intractable terms of the objective is original, although I am not sure that it is justified. I validated Theorem 4.1.\n* I am not an expert in this field, but the experimental part seems very comprehensive in terms of datasets, student and teacher networks, defense strategies, and compared methods.\n* The proposed approach seems to be the only one that makes the network not distillable on all benchmarks." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method for protecting black-box models against knowledge distillation by student models. The authors define a DNN as distillable if a student model learned from its output outperforms the same model learned from ground truth labels. The proposed objective for the teacher model consists of a standard CE loss and a regularization term based on a tempered conditional mutual information (CMI) between the input and network predictions. Its aim is to make the output of the network close to one-hot encoding. Since the proposed objective is not tractable, through a series of approximations the authors propose a tractable objective. The authors demonstrate their method on CIFAR-100, TinyImageNet, and ImageNet using various teacher and student models, and against other baselines methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I believe that further justification, evidence, or analysis (theoretical or empirical) is required to relate the approximation of the second term in the objective to the original one (as $\\omega$ was taken to be a finite number). There is some discrepancy that needs to be settled as eventually instead of maximizing over $\\mathbf{\\alpha}$ (which makes sense), averaging is done over multiple values. Also, can you please share what values of $\\omega$ were used in the paper? I didn't find this information. \n\n* Experimental section:\n * I find the improvements over competing methods, and in particular label smoothing, marginal in most cases. I acknowledge that label smoothing does not adhere to the requirement of a network being undistillable, but it gets quite close to it. I am not convinced whether the minor improvements over it really make a difference in practice. 
Perhaps additional analysis or experiments in other scenarios can demonstrate the practical significance of your method over label smoothing. \n * This may be a criticism in general for defense methods in this domain and not specifically for this paper. It seems that the evaluation is done under the assumption that the student model has access to the input only ($\\mathbf{x}$). How likely is that setup? In my opinion, a more realistic setup is distilling a model based on a new dataset altogether. I believe that a comparison in this setting will be much more informative.\n\n* The paper has some writing issues in my opinion:\n * Some of the sentences are too long which makes it hard to understand at first pass (e.g., the first sentence of the abstract and the sentence in lines 46-50).\n * Some sentences are not clear until properly explained in the paper. For instance, \"cluster of its output probability distributions in response\nto all sample instances\" or \"cluster corresponding to each label should ideally collapse into one probability distribution\", both are in the abstract.\n * In several locations there seems to be confusion in the citation format or whether a citation is justified at all (e.g., lines 72, 80)\n * The link for the code doesn't work.\n * Table 1 is very busy. The authors should consider breaking it down into several tables/figures.\n\n* The authors claim that their training method makes the network undistillable, but it is validated only empirically. No formal proof is given. This is not an actual weakness since I acknowledge that giving such proof is hard and perhaps even impossible. Hence, it would be beneficial to discuss the limitations of the work in general and the empirical validation specifically. Perhaps adding a section on potential failure cases or datasets/methods where CMIM might not hold would help to provide a more balanced perspective on the method's applicability." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Undistillable Models by Minimizing Conditional Mutual Information},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0tMcsHsHgQ},\nnote={under review}\n}" }, "abstract": { "value": "A deep neural network (DNN) is said to be undistillable if used as a black-box input-output teacher, it can not be distilled by knowledge distillation (KD) to train a student model so that the distilled student (called knockoff student) outperforms the student trained alone with label smoothing (LS student) in terms of prediction accuracy. To protect intellectual property of DNNs, it is desirable to build undistillable DNNs. To this end, it is first observed that an undistillable DNN may have the trait that each cluster of its output probability distributions in response to all sample instances with the same label should be highly concentrated to the extent that each cluster corresponding to each label should ideally collapse into one probability distribution. Based on this observation and by measuring the concentration of each cluster in terms of conditional mutual information (CMI), a new training method called CMI minimized (CMIM) method is proposed, which trains a DNN by jointly minimizing the conventional cross entropy (CE) loss and the CMI values of all temperature scaled clusters across the entire temperature spectrum. The resulting CMIM model is shown, by extensive experiments, to be undistillable by all tested KD methods existing in the literature. That is, the knockoff students distilled by these KD methods from the CMIM model underperform the respective LS students. In addition, the CMIM model is also shown to performs better than the model trained with the CE loss alone in terms of their own prediction accuracy." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Nasty teacher", "Knowledge distillation", "Intellectual property protection" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4f277c0f9e1f75140ccc3fe5833ed2d26b10ecb0.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Towards Undistillable Models by Minimizing Conditional Mutual Information" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0tXmtd0vZG
Enhancing Decision-Making of Large Language Models via Actor-Critic
main
Active
Large Language Models;Decision-Making;Actor-Critic
foundation or frontier models, including LLMs
3;5;6;6
4;4;3;4
2;3;3;2
2;2;3;3
1;2;3;2
5
3.75
2.5
2.5
2
-0.471405
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Authors mention fine-tuning under a limited budget (e.g., 18 trajectories), following the specification on lines 1048-1057. However what is the source these trajectories? Are they human annotated trajectories? Trajectories sampled from the zero-shot architecture, perhaps conditioned on task success? How are the special tokens which indicate positive/negative judgement produced for the fine-tuning data? It is not clear, and in the small-sample regime, these details are critical.\n 1. Related: Under what policy (action distribution) is the future simulator generating? If the reweighted action distribution becomes divergent from the prior, will the future simulator be invalidated?\n1. Authors argue equation (6) on line 375 is a more sample efficient way to incorporate labeled trajectory information than conventional policy gradient via the equation on line 833. However the value-critic on line 833 does not take simulated rollouts as a parameter. Perhaps this is a typo? Otherwise the comparison is invalid.\n1. In figure 8, it is unclear why LAC+fine-tuned-actor underperforms LAC. The lack of commentary raises reader suspicion." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "In a zero-shot context, the architecture and results are facially reasonable, given there are numerous prior results indicating large foundation models can exhibit improved performance when composed via a self-reflecting architecture, and given prior art that reasonable synthetic rollouts can improve value estimation analogous to chain-of-thought for possible futures.\n\nOn balance, despite the poor exposition around fine-tuning, I believe readers would net benefit from exposure to this paper because of intriguing concepts such as: 1) exponential log-odds reweighting of a prior action distribution is superior to policy gradient for sculpting the action distribution in the small-sample regime [line 375 vs. line 833]; 2) foundational [rather than component or architecture specific] fine-tuning induces end-to-end improvement when composed [lines 1058-1066]." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors propose a self-reflecting flow architecture for multi-step problem solving which is informed by classical RL concepts.\nOn AlfWorld and BabyAI-text environments it exhibits good performance.\n\nAlthough components of the architecture have been previously proposed, the combination is sensible.\n* \"Lang-critic\": this components reflects upon the goal and the history and augments the prompt for the actor component.\n* \"actor\": proposes new actions given the goal, the history, and the lang-critic augmentation.\n* \"rollout simulator\": given a goal, history, and action: simulates the next few steps (under what policy?)\n* \"value-critic\": given a goal, history, proposed action, and simulated future under this action: estimate the likelihood of task completion\n\nMultiple actions are sampled from the actor at each round, scored by the value critic, the distribution is reweighted via exponential-log-odds, and then the greedy action is selected.\n\nThere are some ablations studies to provide insight into the importance of the components, and comparisons to classical RL techniques." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper has two kinds of weaknesses. \n\nThe first kind is due to the nature of academic work, which is resource constrained (small teams; limited compute). This induces a set of \"easy\" criticisms such as \"insufficient experimental validation\" or \"excessive focus on the small sample regime\". I believe both authors and readers are well-aware of practical constraints, so this reviewer will not weigh such concerns heavily.\n\nThe second kind is insufficient description to allow the reader to understand what was done. Specifically, the weakest parts of the paper are all related to the impact of fine-tuning, which is not sufficiently described (see questions). 
Authors could improve both the intelligibility and the impact of this paper via more detail." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Is it reasonable to summarize the algorithm differences in the following manner: ReAct includes reasoning + actor, Critic only includes future trajectory + actor, and Lang-critic includes evaluation + actor?\n- Why does the value critic require a future trajectory, and how does it perform without future trajectories? \n- How does ReAct, combined with a value critic, perform? \n- How does ReAct, combined with a language critic, perform?\n- Is the prior \\pi_{LLM} the same LLMs used for Q_{LLM} for computing the critic values?\n- How does language critic + value critic perform? 
(Essentially LAC without updating the action distribution, instead using the critic values to choose an action)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Helping LLMs reason in complex decision-making scenarios is a very important task.\n- The structure of the paper was well organized and easy to follow, aside from some terminology framing issues.\n- The paper effectively demonstrates how different types of actor feedback—reasoning, evaluation, and future trajectory prediction—affect downstream performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Although large language models (LLMs) have shown impressive capabilities in natural language processing tasks, they struggle with complex reasoning tasks. One common approach to tackle this issue is to train LLMs using reinforcement learning (RL); however, RL methods have several drawbacks. The authors propose a novel gradient-free Actor-Critic framework based on LLMs to overcome these limitations. This new framework includes an actor and a critic component, distinguishing it from previous approaches. Notably, the Critic is designed to provide feedback in either language or numerical form to help update the actor." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Overall, I think the paper makes a valuable contribution. However, I do think there are several areas that the authors could address to further strengthen it:\n\n(1) How would the method change when the task has a potentially infinite action space (such as in dialogue)?\n\n(2) Have the authors experimented with tasks with more expressive outcomes/rewards? I am curious if both critic can still behave well when there are more nuanced outcomes than just success or failure. \n\n(3) While the benchmarks are well-known, I think they are perhaps not realistic of tasks that people might actually want LLMs to accomplish. For example, it would be interesting to see the authors evaluate on tasks considered in the GDP-Zero paper such as donation solicitation [1].\n\n[1] https://arxiv.org/abs/2305.13660" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is clearly written and tackles an important problem, as many applications of LLMs rely on them to behave as long-term agents rather than simply generate responses. 
\n\nThe method is sensible, using the language critic to refine the action space, then combining predictions by the value critic with the base LLM policy via a simple perturbation of initial action probabilities. \n\nFinally, the empirical results are impressive, outperforming previous state-of-the-art techniques such as ReAct by a large margin." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new way to tune LLMs to behave as agents, by combining the initial action probabilities with predictions by a language and value critic. The language critic adds natural language feedback to various candidate actions, and the value critic uses search to assign a value (or probability of success) to those actions. The method achieves impressive empirical performance on popular benchmarks such as ALFWorld against actor-only and critic-only baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Though the method is impressive, I have some concerns about its generalizability. \n\nNamely, the authors only consider tasks with small action spaces, where evaluating each action individually is tractable. In many more realistic tasks such as dialogue, I imagine that the action space would be more open-ended and am unsure how to adapt the proposed method. \n\nIn addition, the tasks considered rely on being able to simulate future trajectories with high fidelity, which may be harder to do in more complex environments. Specifically, it is likely much harder to faithfully predict trajectories when the agent is engaging with another human rather than simply moving around in a static environment.\n\nFinally, the value critic currently only works for tasks with binary outcomes (success or failure)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "37: long-time horizon -> long horizons\n\n151-152: \"Due to the auto-regressive nature of LLM, it does not do reasoning and planning explicitly.\" This seems controversial. Chain-of-thought/o-1 are also auto-regressive decoding, but arguably they have some reasoning in them. Same with 152-154, \"Accordingly, LLM\nwith actor-only methods often struggles with complex tasks that require multiple steps of planning and reasoning\".\n\nPlease see weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper addresses an important and relevant problem of improving decision-making capabilities of LLM agents. \n\nIt is nice that policy improvement can be achieved without incurring costly gradient updates and loss backpropagation.\n\nThe paper shows improvements on two popular benchmarks including Alfworld and BabyAI." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces LLM-based Actor Critic framework (LAC) to improve decision-making capabilities of LLM agents through an integration of the actor and the critic. 
LAC makes use of two different critics, including a language critic that provides contextual information and a value critic that provides more quantitative information. The paper also proposes a gradient-free policy improvement approach using the two critics without incurring costly backpropagation processes. The effectiveness of LAC is demonstrated in ALFWorld and BabyAI-Text, and it even surpasses GPT-4 with ReAct." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The motivation that \"these methods typically adopt either an actor only or critic only approach\" (line 42-43) misses many related works. The paper relies on [1] as the only paper that discusses actor-critic methods for LLM agents, but many important related works are missing. Even PPO is commonly considered an actor-critic method, where it has a critic that estimates the V function to reduce variance for policy gradient estimation. Thus, many prior works that use PPO for LLM agents should be considered actor-critic methods (e.g. [2]). Retroformer [3] can also be considered an actor-critic method where the critic is a natural language based critic. Other works also applied value-based actor-critic methods to LLM agent tasks (e.g. [4]).\n\nThe novelty of the method is limited. The language critic and value critic are the two main proposals in this paper. However, the language critic is relatively simple and can be considered a direct use of CoT [5], where the agent is asked to generate thoughts reflecting on the previous round's actions before taking actions. The objective of the value critic is also similar to constrained decoding [6], which has been widely used in the alignment domain without the need of performing gradient updates on models.\n\nThe writing of the paper, and in particular the motivation (see above) and the experiment section, can be improved. 
Sections 5.2 and 5.3 only state that the proposed methods are better than baselines and other ablations without investigating the reason for the gap. E.g., why is LAC so much better than ReAct/RAP, and are there any findings from comparing the performance of LAC with different base models? The experiment section does not provide such necessary analysis information.\n\nIt is unclear how generalizable and computationally efficient the proposed method is. In particular, it seems that the method can only be applied to tasks with a finite action space, and it is unclear if the method can generalize to realistic tasks with an unbounded action space such as web browsing.\n\nThe tasks used in this work are more on the simple side. It would be interesting to see if the proposed method can work in more challenging tasks such as web browsing, coding, Minecraft, etc.\n\n[1] Controlling large language model-based agents for large-scale decision-making: An actor-critic approach\n[2] Large language models as generalizable policies for embodied tasks\n[3] RETROFORMER: RETROSPECTIVE LARGE LANGUAGE AGENTS WITH POLICY GRADIENT OPTIMIZATION\n[4] ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL\n[5] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models\n[6] Controlled Decoding from Language Models" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose an LLM-based Actor-Critic algorithm that integrates actor and critic methods in a way that combines the merits of the actor-critic algorithm with the strengths of LLMs." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Decision-Making of Large Language Models via Actor-Critic},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0tXmtd0vZG},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) have achieved significant advancements in natural language processing tasks, yet they encounter challenges in complex decision-making scenarios that require long-term reasoning and alignment with high-level objectives. This paper introduces a novel gradient-free LLM-based Actor-Critic framework, termed LAC, which addresses these limitations by integrating both action generation and action evaluation mechanisms. Our approach employs two distinct critics: a language-based critic that provides context-sensitive feedback and a value-based critic that offers quantitative assessments of expected long-term rewards. This dual-critic architecture enhances decision-making by leveraging the complementary strengths of both critics, enabling contextually appropriate and more robust action selection. Additionally, we propose a gradient-free policy improvement method that reduces computational overhead, facilitating efficient updates to the actor’s policy without the complexities of gradient backpropagation. We validate the effectiveness of LAC across diverse environments that cover both high-level action space (ALFWorld) and low-level action space (BabyAI-Text), demonstrating its superior performance compared to existing state-of-the-art methods. Our method outperforms other state-of-the-art baselines using the same 7B/8B open-source LLMs and even exceeds a strong baseline ReAct using GPT-4 in most settings. Our findings highlight the efficacy and generality of the dual-critic Actor-Critic framework in enhancing LLM-based decision-making." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Decision-Making", "Actor-Critic" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/aa526d2048192fc98f0041512a32d1475c50b6a4.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/41b32134b540919bf8ee9e54a6457f29f536236e.zip" }, "title": { "value": "Enhancing Decision-Making of Large Language Models via Actor-Critic" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0uFTqvQhML
MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes
main
Active
controllable 3D scene generation;3D gaussian splatting;autonomous driving
generative models
3;5;6;6
5;4;5;5
2;3;3;3
2;2;3;3
2;3;3;3
5
4.75
2.75
2.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see weakness box." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents impressive visualization results, with generated scenes that are virtually indistinguishable from real-world counterparts.\n\nIt introduces an innovative generation-first, reconstruction-later pipeline, which simplifies both scene control and data acquisition, offering a more streamlined approach to 3D scene synthesis.\n\nThe deformable Gaussian splatting (DGS) method significantly enhances the quality of both generated and reconstructed views, demonstrating robust performance in complex autonomous driving environments.\n\nThe method provides high controllability through multi-level signals, including BEV maps, 3D bounding boxes, and text descriptions, enabling precise and flexible scene generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel approach for 3D street scene generation, with a strong emphasis on multi-condition controllability, including BEV (Bird’s Eye View) maps, 3D objects, and text descriptions. 
The method involves first training a video generation model, followed by scene reconstruction using deformable Gaussian splatting (DGS). This two-step approach improves the quality and temporal consistency of the generated scenes, making it particularly beneficial for data augmentation in downstream tasks. Validation on the nuScenes dataset highlights the method’s strengths in both controllability and scene reconstruction quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The method occasionally struggles with generating intricate objects, such as pedestrians, and detailed texture areas, like road fences, which can affect the realism of the scenes in certain contexts.\n\nThe experiments are conducted solely on the nuScenes dataset, which includes 700 training and 150 validation clips. Although widely used, this dataset may not fully capture the complexity of real-world environments, raising concerns about the method’s generalizability to more diverse and challenging scenarios.\n\nThe scholarship could be improved by referencing recent advancements in street-view generation, such as SCP-Diff: Photo-Realistic Semantic Image Synthesis with Spatial-Categorical Joint Prior [ECCV 2024]. This would help position the proposed approach more clearly within the current state of the field." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please address the concerns raised in the weaknesses section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- [S1: Significance] The paper addresses an important problem in the field of computer vision: controllable 3D scene generation. The proposed method has the potential to be used in a variety of applications, including autonomous driving simulation, virtual reality, and video gaming." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MagicDrive3D, a new framework for controllable 3D street scene generation useful for view synthesis. The framework supports multi-condition control, including BEV road maps, 3D object bounding boxes, and text descriptions. The proposed framework MagicDrive3D first trains a video generation model and then reconstructs from the generated data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- [W1] The technical contributions of pose conditioned video generation and its relation in the framework is not clearly stated. \n - [W1.1] According to Figure 2, it looks like the video generator works without conditioning on input camera images. If that is the case, the reviewer would like to understand what’s the benefit of feeding the video generated multi-view data to Stage 2 compared to using ground-truth data? Based on my understanding, the exposure discrepancy across multi-views and dynamic objects in the generated data will pose the same challenge to Stage 2 (vs. 
ground-truth camera images).\n - [W1.2] If the proposed video generator works without conditioning on camera input images, please explain the steps that generate row (e) in Figure 8. In Figure 8, it is clear that the proposed system is able to take camera images as input and apply style transfer on top. \n - [W1.3] The reviewer cannot find any videos in supplementary material, which is usually the hard requirement for accepting a video generation paper. The reviewer feels video results are still required for this paper, as it highlights video generation as one important step compared to existing work in 3D street view generation.\n\n- [W2] The paper’s claim that Magic3D is the first to achieve controllable 3D street scene generation using a common driving dataset (Line 91-92) is questionable. For example, controllable 3D street scene generation has been achieved in Panoptic Neural Fields [NewRef1] on KITTI dataset. In another example, as discussed in Section 5.1 of BlockNeRF [NewRef2], 3D street scene generation has also been achieved on the single-capture subset (open-sourced) called San Francisco Mission Bay Dataset. Please discuss the relevant work in the main text and compare against them for novel view synthesis (show quantitative metrics).\n - [W2.1] The reviewer would recommend to conduct a more sophisticated literature review. For example, this paper also missed prior work that shares similar motivation but on object reconstruction from driving videos using a generative model GINA-3D [NewRef3]. \n\n- [W3] Important details regarding the FVD and FID metrics are missing. As Nuscenes dataset is relatively small, the reviewer would like to understand how many images or 16-frame video clips have been used in computing the metrics. How do you construct the real videos and generated videos (on what conditions). This is an important factor to decide whether the metrics reported in Table 2 are valid or not. 
\n - [W3.1] In the field of image and video generation, it is known that FID and FVD are good but not perfect. Certain adversarial artifacts can lead to unexpected changes to FID and FVD. Please consider using FID-DINOv2 [NewRef4] and FVD-VideoMAEv2 [NewRef5] as alternative metrics.\n\n- [W4] While one focus of the paper is on controllable generation, the reviewer cannot find enough details on different controllable signals. It would be good to develop quantitative metrics to measure the accuracy of control and provide more diverse examples of scene editing. This could include user studies to assess the usability and effectiveness of the control mechanisms.\n\n- [W5] The paper focuses on 3D street view synthesis but the reviewer cannot find 3D visualizations in the supplementary materials.\n\n\nReferences\n- [NewRef1] Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation, Kundu et al., In CVPR 2022.\n- [NewRef2] Block-NeRF: Scalable Large Scene Neural View Synthesis, Tancik et al., In CVPR 2022.\n- [NewRef3] GINA-3D: Learning to Generate Implicit Neural Assets in the Wild, Shen et al., In CVPR 2023.\n- [NewRef4] Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models, Stein et al., In NeurIPS’23.\n- [NewRef5] On the Content Bias in Fréchet Video Distance, Ge et al., In CVPR 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the section of weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-structured and straightforward to understand.\n2. The concept of breaking down 3D scene generation into a sequential multi-view generative stage followed by a static reconstruction stage, utilizing two distinct representations that have proven effective in their respective areas, is particularly intriguing.\n3. The ablation studies demonstrate a significant improvement over the selected baselines (3DGS and LucidDreamer)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript introduces MagicDrive3D, a novel approach for controllable 3D street scene generation. This method divides the generation process into two distinct stages. In the first stage, a conditional generation model is trained to produce multi-view video sequences from the perspective of an ego car. The authors enhance the existing MagicDrive framework by encoding the relative pose with respect to the first frame and using these encodings as conditions for the network. In the second stage, the focus shifts to reconstruction, where the generated data is used to reconstruct the 3D scene. The authors propose several improvements to the 3DGS in terms of spatial location priors, modeling, and loss functions, specifically tailored for street view scene reconstruction. Experimental results demonstrate the effectiveness of each proposed component." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The performance on test views is not particularly strong. As noted in the manuscript, the PSNR on novel views in both test settings is below 22. While this work does advance the field of scene generation, it is not yet suitable for practical applications, such as generating synthetic data for end-to-end autonomous driving policy training.\n2. The manuscript lacks a comparison with key baselines during the reconstruction phase, specifically Street Gaussians [A].\n3. Have you attempted a long-term rollout of video diffusion models? If such a long-term rollout were conducted (like Vista [B]), would the two-stage scene generation pipeline still perform effectively?\n\n\n[A] Street Gaussians: Modeling Dynamic Urban Scenes with Gaussian Splatting\n[B] Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I'm a little questioned about the quality of the generated night scene in Figure 1, as it's blurry and doesn’t clearly convey a night setting." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The proposed framework supports controllable scene generation using BEV maps, 3D bounding boxes, and text descriptions, which enhances its applicability in tasks like autonomous driving simulations.\n2. The introduction of deformable 3D GS effectively addresses local dynamics and exposure discrepancies, ensuring better scene generation quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents MagicDrive3D, a novel framework for controllable 3D street scene generation. The framework combines geometry-free video generation with geometry-focused reconstruction using 3DGS. MagicDrive3D allows for multi-condition control including BEV maps, 3D objects, and text descriptions, enabling the generation of diverse and high-quality 3D street scenes. It also improves downstream tasks like BEV segmentation and supports realistic scene simulations for applications such as autonomous driving." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. MagicDrive3D is composed of two parts: a video generation model and a 3DGS to recover 3D scenes from images, both are proposed in previous works, while showing technical improvements, still limiting the overall novelty of the paper.\n2. 
The comparison in Table 2 is only made with vanilla 3D-GS, yet there are several other dynamic 3D-GS methods for road scenes (e.g., PVG [1], Street Gaussians [2]) that should also be considered for comparison.\n\n[1] Periodic Vibration Gaussian: Dynamic Urban Scene Reconstruction and Real-time Rendering\n[2] Street Gaussians: Modeling Dynamic Urban Scenes with Gaussian Splatting" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024magicdrived,\ntitle={MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0uFTqvQhML},\nnote={under review}\n}" }, "abstract": { "value": "While controllable generative models for images and videos have achieved remarkable success, high-quality models for 3D scenes, particularly in unbounded scenarios like autonomous driving, remain underdeveloped due to high data acquisition costs. In this paper, we introduce MagicDrive3D, a novel pipeline for controllable 3D street scene generation that supports multi-condition control, including BEV maps, 3D objects, and text descriptions. Unlike previous methods that reconstruct before training the generative models, MagicDrive3D first trains a video generation model and then reconstructs from the generated data. This innovative approach enables easily controllable generation and static scene acquisition, resulting in high-quality scene reconstruction. To address the minor errors in generated content, we propose deformable Gaussian splatting with monocular depth initialization and appearance modeling to manage exposure discrepancies across viewpoints. Validated on the nuScenes dataset, MagicDrive3D generates diverse, high-quality 3D driving scenes that support any-view rendering and enhance downstream tasks like BEV segmentation.
Our results demonstrate the framework's superior performance, showcasing its transformative potential for autonomous driving simulation and beyond." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "controllable 3D scene generation", "3D gaussian splatting", "autonomous driving" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ff96689c67ba80b981daa5ae9805d16b1a6311dc.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0uRc3CfJIQ
ORSO: Accelerating Reward Design via Online Reward Selection and Policy Optimization
main
Active
Reinforcement Learning;Reward Design;Reward Selection
reinforcement learning
3;5;6;6;8
3;4;3;4;4
3;2;3;3;3
2;2;2;3;3
2;3;3;3;4
5.6
3.6
2.8
2.4
3
0.552771
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* What is human designed reward and how it is computed?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Soundness\n======\nThe approach is generally sound. \n\t\nSignificance & Related work\n=========\nThe paper presents an in-depth related work section, and well defined preliminaries (note redundancy of section 2) that lead to demonstrations of several results. \n\nExperimentation\n=========\nThe paper presents an in-depth ablation analysis of the performance of various selection algorithms.\n\nPresentation\n=========\nThe paper is well written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Online Reward Selection and Policy Optimization (ORSO), an approach that defines reward selection as an online model selection problem. The approach uses exploration strategies to identify shaping reward functions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Soundness\n======\nIt is unclear in the experiments in Fig 2 what ‘human-level performance’ or ‘human-designed reward function’ is and how it is defined/computed. 
Note that the proof for D1 needs to be rewritten for clarity to show base case and inductive hypothesis, should proof by induction still be the chosen approach.\n\t\nExperimentation\n=========\nThe paper presents an in-depth ablation analysis of the performance of various selection algorithms, however, the impact of poorly chosen task rewards needs to be analysed. \n\nPresentation\n=========\nPresentation is good, as above, Section 2 is too short and redundant." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- In what cases would the monotonicity assumption be violated? Do the environments in the experimental set-up violate or obey the assumption? How would ORSO handle such violations?\n\n- Future work mentions exciting directions. Since the naive approach is failing, how likely is a VLM-based reward design method to fail?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is very well-written. The motivation is clearly explained, and the problem and assumptions are well described. 
Moreover, given the assumptions, the proposed approach and its theoretical guarantees are clear.\n\n- The experimental set-up makes sense, and the evaluated baselines allow us to see how reward design is critical, humans can be sub-optimal at it, and naive attempts are prone to fail.\n\n- The experimental results clearly showcase its advantages in comparison to the baselines. In addition, the ablation study evaluates the impact of different components of ORSO and provides detailed insights." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies automated reward shaping by posing it as an online reward selection problem. Instead of multiple training runs to observe the impact of different shaping functions, this work aims to identify the best one within a fixed time budget. To this aim, the authors develop ORSO, a regret-minimizing approach that utilizes multi-armed bandits where a candidate shaping function is an arm. More specifically, ORSO uses the D3RB algorithm to select an arm. Upon selection of an arm, ORSO trains a policy corresponding to the said arm for a fixed number of iterations and then evaluates the policy with respect to the task rewards. The paper provides regret guarantees, assuming that a learner monotonically dominates all learners and its average performance increases monotonically. \n\nThe paper evaluates a practical implementation of ORSO in continuous control tasks with varying complexity whose rewards are either sparse or unshaped. The experimental results show that ORSO is faster than an LLM-based reward design method, can surpass human-designed rewards, and performs better as the budget increases. The authors also provide an ablation study for different arm selection strategies and different numbers of candidate shaping functions." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I urge the authors to move the related work, at least the most relevant parts, to the main document.\n\n- Assumption 4.2 seems limiting. A discussion of why the assumptions are viable or how they are needed would strengthen the paper's arguments. It would be even better to explain their role in causing the contrast with the regret guarantees in Pacchiano et al. (2023).\n\n- As the quality of candidate shaping functions plays an important role, an ablation study to understand the impact of wrong/redundant candidates would help the reader understand the limitations of ORSO." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Unique contributions could be made clearer and explicitly called out in the introduction.\n\n- Lines 54-59: I understand you are highlighting unique challenges compared to standard multi-arm bandit settings, yet ORSO uses the same selection algorithms typically used to solve such multi-arm bandit problems. 
The paper could be clearer in defining exactly which components of ORSO are key to addressing the unique challenges presented.\n\n- I recommend expanding on the various resampling strategies, if more than one was tried out, and their impact on performance, as this seems to be a key ingredient to the method's success.\n\n- I would recommend adding the synthetically generated best-performing shaping reward functions for each task to appendix E. Are the reward functions sensible to the human reader? This has implications on how well these shaping reward functions could be further refined by human experimenters, and possibly gives insights into their logical soundness.\n\n- Also, was any constraint, structure, or human knowledge imposed beyond the prompt itself when prompting the generation of such rewards, or could the prompt arguably be generated programmatically (if so, I recommend just stating it - the code base is not available during the review process to verify)? While not directly related to the ORSO contribution, this is arguably important to showcase, as ORSO heavily relies on the existence of an automated way of generating reward functions without human priors.\n\n- Please look at the weaknesses section and help clarify if any can be addressed." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Claims: (1) ORSO reduces both time and computational costs by more than half compared to earlier methods, making reward design accessible to a wider range of researchers.
(2) By formalizing the reward design problem and providing a theoretical analysis of ORSO’s regret when using the D3RB algorithm, we also contribute to the theoretical understanding of reward design in RL.\n\n- The work proposes an original formulation of the shaping reward function selection process, viewing it as an online model selection problem.\n- The regret based analysis provides a clear and intuitive way to monitor ORSO's performance and efficiency gains.\n- Thanks to the problem formulation, the method is kept elegant in its simplicity and largely amounts to the application of existing approaches into a unified framework.\n- Results on a variety of tasks and domains against available baselines support the efficiency and performance claims.\n\nClarity:\n- The writing is clear, the paper is well structured, and appropriate context is provided to the reader.\n\nSignificance:\n- This work tackles the issue of reward design for RL. This has been and continues to be one of the most significant challenges keeping RL from widespread successful deployment in the real world." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ORSO, a method that aims to increase the efficiency of the reward shaping selection process by casting it as an automated online model selection problem.\nORSO is a general algorithmic framework that implements the following steps: (i) a reward function generation method is used to provide a set of candidate shaping reward functions, (ii) a selection algorithm is used to select a shaping reward function, (iii) an RL algorithm is used to train the policy associated with the selected shaping reward function for a set amount of iterations, (iv) the trained policy is then evaluated against the task reward function and the utility is used to update the parameters of the selection algorithm. 
This process is repeated until a predefined computational budget is exhausted.\nWhile the components within the ORSO framework are modular and exchangeable, this work uses (i) an LLM-based generator as the reward function generation method, (ii) PPO as the RL algorithm, and (iii) D3RB as the reward function selection algorithm (ablations are additionally conducted with Exp3, D3RB, UCB, ETC, and EG).\nExperiments are conducted across tasks of varying difficulty, 3 budgets, and 3 reward function sets.\nResults indicate that ORSO's performance in terms of task return scales with budget, and is comparable to - and can surpass - that of human-defined shaping reward functions; additionally, ORSO is twice as fast as a prior shaping reward function selection method (EUREKA)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Contribution:\n- Ultimately, ORSO searches over a set of shaping reward functions. While the framework is simple and elegant, to my understanding, it ultimately relies on and is limited by the performance of the \"shaping reward function generator\".\n- ORSO is only benchmarked against methods for which a performant human-engineered reward function can be defined. Impact would be higher if the method could generalize beyond these settings.\n- ORSO in its current form does not seem to offer the flexibility to deal with the changing / annealing of shaping reward functions throughout training, a common technique in the reward shaping literature." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Regarding Weakness 1, about candidate reward function generation: as described in Section 5.1.2, the candidate reward functions are directly generated through an LLM. Could you clarify:\n\n(a) What are the specific forms of these reward functions? Are they related to the observations, features, and/or pre-defined reward components? Can these generated rewards capture all the necessary aspects to define effective rewards?\n\n(b) If the optimal reward function is not included in the generated candidates, how does ORSO ensure that the final optimized policy is indeed optimal?\n\n2. Since the policy optimizes based on a given candidate reward function, would this make it difficult to ensure that the policy is optimizing the task's original objective (the environmental reward function defined by the MDP)?\n\n3. Frequent switching of reward functions may lead to significant shifts in the policy's learning objectives. For instance, in a maze task, if reward function #1 focuses on avoiding obstacles while reward function #2 focuses on resource collection, switching between these two may lead to inconsistent learning targets. Would this cause instability in the learning process?\n\n4. I'm unclear about the evaluation metric in the experiments. Specifically, Figure 2 (left) shows performance as a percentage of the human-designed reward function. In Section 5.1.1, the paper states, \"No design is with the task reward function r for each MDP\". I assume this refers to the original environmental reward function, which should be the primary objective the agent aims to optimize.
However, in Figure 2 (left), the \"No design\" baseline is around half of the human-designed reward (I assume this figure reports cumulative rewards under each baseline's own reward function). This seems unfair and could introduce bias for deviating from the MDP’s original task objective. \n\nFor example, suppose the MDP provides rewards of $0, 0, 1$ for states $s_1, s_2, s_3$ (only 3 states). A human-designed reward function might assign $0, 1, 1$ for $s_1, s_2, s_3$. Consequently, the cumulative reward under the human-designed reward function would be higher, and it also proposes new targets (both $s_2$ and $s_3$ are equally important). From my understanding, the performance should be evaluated consistently on the original MDP reward (the true objective), meaning that the \"No design\" case should actually serve as an upper bound." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The approach is easy to implement and effective at selecting reward functions, it also shows fast convergence in terms of both computational time and sample efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed an Online Reward Selection and Policy Optimization (ORSO) algorithm for reinforcement learning. ORSO pre-generates some candidate reward functions by linearly combining some reward components or by LLM, while learning, ORSO dynamically evaluates which candidate reward function can lead to better policy optimization, then selects the optimal candidate to guide the learning process." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
As this method follows a \"pre-define and select one\" paradigm, the final optimal performance that ORSO can achieve heavily depends on how good the pre-generated candidate reward functions are.\n2. The authors state that this is a reward shaping approach, but the paper doesn't compare it with any reward shaping or reward selection baselines. If the authors could compare ORSO with some representative reward shaping algorithms (such as [1][2][3][4]), it would better showcase its advantages.\n\n[1] Devidze, Rati, Parameswaran Kamalaruban, and Adish Singla. \"Exploration-guided reward shaping for reinforcement learning under sparse rewards.\" Advances in Neural Information Processing Systems 35 (2022): 5829-5842.\n\n[2] Zheng, Zeyu, Junhyuk Oh, and Satinder Singh. \"On learning intrinsic rewards for policy gradient methods.\" Advances in Neural Information Processing Systems 31 (2018).\n\n[3] Memarian, Farzan, et al. \"Self-supervised online reward shaping in sparse-reward environments.\" 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021.\n\n[4] Burda, Yuri, Harrison Edwards, Amos Storkey, and Oleg Klimov. \"Exploration by random network distillation.\" International Conference on Learning Representations (2019)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.
The proposed method generates a set of candidate reward functions for the online selection phase. Does having K reward function candidates mean that the number of candidates is fixed? If all these candidates are not suitable or not the best, what is the solution?\n\n2. The experiments compared the performance of policies trained using No Design, Human, Naive Selection, and ORSO to show the superiority of ORSO. However, the impact of selecting different reward functions for the ORSO algorithm on the experimental results has not been analyzed. If possible, please provide relevant experiments to demonstrate the experimental differences caused by selecting different reward functions.\n\n3. Figure 4 shows the normalized cumulative regret for different selection algorithms. The manuscript mentioned that ORSO’s regret can become negative, indicating that it finds reward functions that outperform the human baseline. However, the minimum value in Figure 4 is zero; I didn’t observe any negative values.\n\n4. There is a similar paper, ORSO: Accelerating Reward Design via Online Reward Selection and Policy Optimization, published in ARLET 2024. What is the difference between these two works? Has the ARLET paper been cited?\n\n5. The number of references is small, and more recent articles on reward shaping could be added." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The idea is a little simple but effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript proposes Online Reward Selection and Policy Optimization (ORSO) to frame shaping reward selection as an online model selection problem. It automatically identifies promising shaping reward functions, balancing exploration and exploitation with provable regret guarantees.
The ORSO method significantly improves sample efficiency and reduces computational time compared to traditional methods that fully evaluate each shaping reward function." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Some experiments and theories can be added." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present ORSO, a method for efficiently designing and selecting shaping rewards in reinforcement learning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024orso,\ntitle={{ORSO}: Accelerating Reward Design via Online Reward Selection and Policy Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0uRc3CfJIQ},\nnote={under review}\n}" }, "abstract": { "value": "Reward shaping is a critical component in reinforcement learning (RL), particularly for complex tasks where sparse rewards can hinder learning. While shaping rewards have been introduced to provide additional guidance, selecting effective shaping functions remains challenging and computationally expensive. This paper introduces Online Reward Selection and Policy Optimization (ORSO), a novel approach that frames shaping reward selection as an online model selection problem. ORSO employs principled exploration strategies to automatically identify promising shaping reward functions without human intervention, balancing exploration and exploitation with provable regret guarantees. We demonstrate ORSO's effectiveness across various continuous control tasks using the Isaac Gym simulator. Compared to traditional methods that fully evaluate each shaping reward function, ORSO significantly improves sample efficiency, reduces computational time, and consistently identifies high-quality reward functions that produce policies comparable to those generated by domain experts through hand-engineered rewards." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reinforcement Learning", "Reward Design", "Reward Selection" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c891e02312c4fa1e69ca67b7ca0b5b74bffd5b91.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "ORSO: Accelerating Reward Design via Online Reward Selection and Policy Optimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0vKokoPKTo
Towards Geometry Problems Solving Employing GPT-4 Vision with Few-Shot Prompting: An Empirical Study of What Matters
main
Active
Large Language Models;Mathematical Reasoning;Geometry Problem Solving;Prompting Methods
foundation or frontier models, including LLMs
3;3;3;5
4;4;4;4
3;1;2;2
2;2;2;2
2;2;2;3
3.5
4
2
2
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.\tPlease use $``$ rather than $’’$ across the paper\n2.\tPlease use \\citep rather than \\cite" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The reasoning task for LLM is an intriguing and promising area. The authors approach this problem from the perspective of GPS, which presents a fresh and valuable perspective.\n2. The experimental evaluation is comprehensive: multiple datasets and prompt types (e.g., CoT, PoT) are used." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper examines the use of GPT-4V for solving geometry problems through few-shot prompting, assessing how input/output formats, prompt structures, and different reasoning strategies impact performance. It explores two prompting types, Chain-of-Thought and Program-of-Thought, and analyzes their effectiveness across various datasets. Findings suggest that the model’s performance is influenced more by prompt structure than by the validity of demonstrations. Furthermore, reasoning abilities are highlighted as more essential than computation power for geometry problem-solving." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tI am confused about the motivation for why we need to answer the three questions the paper is asking (lines 55-67). Do these questions really contribute to our understanding of LLMs? I feel there is a logical gap between the purpose of understanding LLMs and these specific questions. For instance, the question “are valid demonstrations important?” is more about performance tweaks, rather than about actually providing insights on how LLMs work for GPS problems. I cannot directly connect having answers to these performance-related questions to the underlying working schemes of LLMs.\n2.\tThe findings in the paper seem trivial to me. The answers, such as the necessity of valid demonstrations in input/output format, are not surprising. To me, what would actually be interesting is seeing cases where the LLMs can handle bad demonstrations.\n3.\tThere is almost zero algorithmic/technical contribution to this paper. It’s just a bunch of prompts, which any solid paper would have as an ablation study.\n4.\tThe writing quality needs to be significantly improved. For instance, lines 74-79 are very vague and poorly explained. There is a lack of scientific rigor in lines 193-195. The claims in lines 353-354 are confusing, and the conclusion there regarding the importance of input-output formats lacks clear support from the preceding paragraphs.\n5.\tExperimental analyses are weak. Fig. 2 does not demonstrate significant differences across settings, making it difficult to extract meaningful conclusions from the results. The way the datasets are sampled from the original ones is not explained. Some claims are misleading: in line 378, the claim that an average domain knowledge score exceeding 1.5 reflects the involvement of extensive domain knowledge actually suggests the opposite.
Additionally, the notion that the number of digits relates to computational requirements is confusing and inaccurate: it should indicate the precision rather than the computational requirement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Is Figure 2 the result with one-shot or two-shot demonstration? Section 5.2 claims it is with one-shot, but Section 6.2 claims it is with two-shot. Besides, it is written in Section 6.2 that “...two different background colors represent different prompting methods: the white…, the gray…”. But in Figure 2, there is no distinct background color.\n\n2. What are the “invalid reasoning” and “invalid computation” demonstrations phrased in Section 6.2? Based on the Appendix A, the invalid demonstrations for either method categories are the same, but in language and code format, respectively. When in the code format, it seems they are still valid in computation (no calculation error) but invalid in reasoning (good demo for a wrong problem)?\n\n3. There are also some writing issues in the paper that might mislead the readers:\n - Figure 1, Program-of-Thought method, there is a mismatch between p1 and C1 on the left. “The shorter base is 6 ft” in p1, but it is set to 2 in C1.\n- Section 3.2, paragraph 2. “...few-shot demonstration <pk, Ck>...”. ‘k’ should be the subscript.\n- Section 6.1, paragraph 1. 
“In appendix E, we further refined the distribution of…”. The choice of word “refined” is misleading, as it seems Appendix E merely collects the percentage of problems over different knowledge numbers?\n- Section 6.2, paragraph 1. “For example, the RP method … improved the accuracy by 22.3% compared to the PAL method …”, “improved” should be “outperforms” as these are two irrelevant methods." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The study reveals some behaviors of LLMs that can potentially motivate future research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the factors that impact the ability of GPT-4 Vision on geometry problem solving (GPS) ability. Experiments are conducted to examine GPT-4’s behavior under various controlled settings. Based on the result, the paper draws the conclusion that (1) the correctness of the demonstrations does not impact the model’s performance; (2) Chain-of-thought outperforms program-of-thought methods, as GPS does not require much computational power from the code-writing; (3) GPT-4V is better at solving problems of shorter description and that concerning simpler shapes, both of which indicate the problem complexity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The study of the GPT-4V under valid or invalid few-shot demonstrations in Section 5 is not straightforward to back up the claim “model’s improvement from few-shot demonstrations is due to input format, output format, and logic and structure of the demonstration” appearing throughout the paper. 
The study only shows that the overall performance does not degrade by using the demo of the specific invalidity mode in the paper, which is a valid solving process for a wrong problem as shown in Appendix A, compared to using valid demonstrations. It is unclear if it is because “the input format, output format, and logic and structure” is learnt. If there are no demonstrations, is the input format, output format, or logic in the output wrong? If the model is provided with demonstrations of wrong logics (e.g. perimeter = AB * BC *AC), will the model still achieve good performance? The paper can clarify those questions by showing zero-shot GPT-4 (no demonstrations) performance in Figure 2 as comparison, qualitatively comparing the model behavior with no/correct/wrong demonstrations, and testing on more invalidity mode of the demonstrations.\n\n2. While the study in Section 6 shows an interesting result that Program-of-thought (PoT) is outperformed by the Chain-of-thought (CoT), meaning that the code-writing is not suitable for the GPS problem, the analysis of the reason is not convincing. Specifically, the claim is that there are two reasons for this phenomenon: (1) PoT is better at CoT in solving complex arithmetic calculation, but the GPS task does not require much computation; (2) Reasoning in language is better at reasoning in the code. In this case, what is the performance gap between the two categories of the methods under different calculation complexity and reasoning complexity measurement on the problem level instead of the dataset level? This result will be a stronger support of the claims." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None." 
}, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Regarding the first research question, is the difference between the findings of this paper and Wang et al. [1] merely a matter of testing problems?\n2. Why does the paper only compare chain-of-thought and program-of-thought? How about tree-of-thought [2] or graph-of-thought [3]?\n3. For the reasoning part of the second question, how did you calculate the domain knowledge accounts for each problem? Was it done manually or automatically?\n4. Why do you classify problems involving more than two domain knowledge accounts as complex reasoning, while stating that the vast majority of problems, which involve less than three-digit arithmetic, require only a small amount of computation? How do you objectively define what is a high demand for reasoning or computation?\n5. What is invalid computation? Could it be that the better performance compared to \"invalid computation\" (as mentioned in Q1) is due to invalid reasoning providing a standard input-output format, rather than an intrinsic difference between reasoning and computation?\n6. For the third research question, what is the significance of analyzing which range of problem lengths yields the optimal answering accuracy?\n7. Could the authors clarify the meaning of \"the problem length is unrelated to the method with or without prompting, but only to the model’s ability to understand semantic information\"?\n\n\n[1] Wang B, Min S, Deng X, et al. Towards understanding chain-of-thought prompting: An empirical study of what matters. \\\n[2] Yao S, Yu D, Zhao J, et al. Tree of thoughts: Deliberate problem solving with large language models. \\\n[3] Besta M, Blach N, Kubicek A, et al. 
Graph of thoughts: Solving elaborate problems with large language models." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Enhancing GPT's ability to solve GPS problems through few-shot prompting is a highly significant topic.\n2. The paper is clear and well-structured. It provides a thorough discussion of three key research questions.\n3. The paper makes intriguing discoveries: (1) The model’s performance improvement is not due to the quality of the demonstration, but rather to the input format, output format, and the logic and structure of the demonstration; (2) GPS tasks emphasize reasoning ability more than computational power; (3) Specialized prompting methods could be designed to enhance the model’s performance. These findings have the potential to inspire new research directions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the impact of few-shot prompting methods on enhancing the performance of GPT-4V on geometry problem-solving tasks, proposing three key research questions.\n\nThe authors first investigate whether valid demonstrations are essential for performance, concluding that prompt structure and logic are more influential than correctness. They then examine whether reasoning (CoT) or computation (PoT) methods are superior for GPS tasks, finding that reasoning-based prompts generally yield better results. Finally, they analyze the influence of various prompting methods across different problem lengths and geometric shapes, all of which yield only minor improvements.\n\nThis study suggests that tailored prompting could further optimize GPS performance, paving the way for future research directions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.
The motivation for studying GPS tasks is not clearly articulated; the authors do not clarify what makes these types of problems uniquely challenging or valuable for research.\n2. If I understand correctly, the first research question in the paper has already been thoroughly discussed in previous work. See my questions below.\n3. The analysis and discussion of some experimental results are not sufficiently clear or rigorous. See my questions below." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. It seems that the samples are of the same magnitude as the original dataset; why not test on the full dataset? It is not clear how the data is sampled from the original dataset. \n\n2. Why not apply the OpenAI-4o model? It is unclear whether the results of this study hold for the newest OpenAI model.\n\n3. How do you measure accuracy given the answers? Are they multiple-choice questions?\n\n4. In Ln 514, why does the use of prompting methods have nothing to do with the improvement of answering accuracy?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper targets a hard and interesting class of mathematical problems: geometry problems.
And it compares two series of SOTA promising methods: chain of thoughts and program of thought. It claims that LLMs often \"draw a dipper with a gourd as a model\" (Ln 194).\n\n2. A wide range of problems are studied, and various analysis experiments are performed.\n\n3. The related works are carefully reviewed, and this analysis paper is well-motivated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the geometry problem. It observes that the model's performance gain is not due to the quality of the demonstration but to the input format, output format, and logic of the demonstration. Moreover, this analysis finds that specialized prompt methods and find-tuning of the model can optimize its performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper does not propose a specific method to overcome the claimed issues of LLMs, and it does not provide much insight into resolving such problems. In fact, the CoT and PoT methods are well known to the community and are already used in daily life. So it's not clear where the novelty is.\n\n2. Invalid demonstrations would definitely deteriorate the performance of GPT, and this analysis is quite unuseful. What's more concerning is whether the few-shot demonstrations are useful. Note that OpenAI-o1 discourages the use of few-shot demonstrations (refer to their official website).\n\n3. Some claims are quite vague, and the findings cannot support the conclusion. For example, Ln433 concludes that the GPS task requires a small amount of computation. Ln420 suggests that \"the method of enhancing reasoning ability is more effective than computation.\" However, it's not sure whether the proposed computation is the optimal choice of the LLMs. 
It is probable that it is the **choice or implementation of CoT / PoT** that hinders the model performance, while increasing computation (e.g., making the LLM larger) should significantly improve the model performance.\n\n4. Figure 4 is quite noisy and no meaningful conclusions can be drawn from this figure. Also, it's expected that increasing problem length would deteriorate the accuracy. This analysis does not lead to significant discoveries." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Geometry Problems Solving Employing {GPT}-4 Vision with Few-Shot Prompting: An Empirical Study of What Matters},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0vKokoPKTo},\nnote={under review}\n}" }, "abstract": { "value": "The few demonstrations (\"few-shot prompting\") can significantly improve the ability of Large Language Models (LLMs) in mathematical reasoning, including geometry problem solving (GPS). \nGPT-4 Vision (GPT-4V), as a leading example of LLMs, also demonstrates significant improvements. \nThis tremendous achievement is mainly attributed to prompting methods like \"Chain-of-Thought\" and \"Program-of-Thought,\" which leverage the in-context learning ability of the model combined with few-shot prompting to solve new problems. \nDespite the success of these prompting methods, it remains poorly understood what the GPT-4V model learns from the demonstrations that leads to improved performance. \nIn this paper, we evaluated the answering accuracy of GPT-4V with 2-shot prompting on five geometric problem datasets and conducted a series of detailed analyses. 
\nFirstly, through ablation experiments with valid and invalid demonstration examples, we found that the model’s performance improvement is not due to the quality of the demonstration, but rather to the input format, output format, and logic and structure of the demonstration. \nSecondly, by analyzing the reasoning and computational requirements of geometric problems, and verifying experimental results, we found that GPS tasks emphasize reasoning ability more than computational power. \nFinally, our analysis of various prompt methods revealed that existing approaches are not effective at improving model performance concerning problem length and geometric shape. \nTherefore, specialized prompt methods could be designed to enhance the model's performance in these aspects, or fine-tuning the model by adding problem data with longer lengths or mixed geometric shapes could optimize its performance. \nOverall, developing an LLM that fully adapts to GPS tasks represents a key research direction. \nThe source code and data will be made available in a GitHub repository." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Mathematical Reasoning", "Geometry Problem Solving", "Prompting Methods" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d23b3b2bcf7391f769fa8901a1b45943d2266415.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Towards Geometry Problems Solving Employing GPT-4 Vision with Few-Shot Prompting: An Empirical Study of What Matters" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0vMLqSdsKW
A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns
main
Active
recommender systems;causality;evaluation;auditing;machine learning
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5
3;3;5;3
2;3;2;3
2;3;2;2
3;2;2;3
4
3.5
2.5
2.25
2.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does the computational complexity scale with the number of users, items and time horizon? What are recommended approaches for large-scale recommender systems?\n\n2. What are the practical implications of assuming static embeddings during gradient computation? How would the results change with full retraining?\n\n3. Could the framework be extended to handle more complex recommendation scenarios like slate recommendations or contextual bandits?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The technical claims and methodology are very well-supported. The causal framework is rigorously developed with clear mathematical formulations. The empirical evaluation is comprehensive, with well-designed ablation studies showing impact of various stochasticity levels, time horizon lengths and model architecture choices.\n\n2. Novel formalization of reachability and stability metrics presented capture both immediate and long-term effects, handle multi-step recommendation dynamics and account for both user and adversary perspectives.\n\n3. The paper is generally well-written and logically structured. The causal framework is presented clearly with helpful examples." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a unified causal framework for auditing recommender systems with focus on user agency. The authors make three main contributions:\n1. A general causal framework that formalizes interventional and counterfactual metrics for auditing recommender systems.\n2. Two novel classes of metrics - reachability and stability, to measure user agency while accounting for recommendation dynamics.\n3. Efficient computational methods for these metrics under different levels of access to the recommender system.\n\nThe framework is evaluated empirically using both matrix factorization and recurrent neural network based recommenders, showcasing interesting trade-offs between stability and reachability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The assumption of static user/item embeddings during gradient computation could be better justified. Additional experiments showing impact of this simplification would be valuable.\n\n2. The empirical evaluation focuses on movie recommendations - testing on other domains (e.g. social media, e-commerce, etc.) would strengthen the framework's generalizability claims.\n\n3. The choice of distance metrics for stability measures (L2 distance) could be better justified. Adding discussion of metric sensitivity to adversarial perturbations and analysis of the relationship between local and global notions of reachability would be useful.\n\n4. The paper presents limited discussion of computational complexity and scalability analysis, particularly for large-scale recommender systems. The paper could analyze how the methods scale with number of users, items and time horizon." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What are the differences and impacts of applying this model to various recommendation models?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The causal approach offers a novel way to address ethical issues, providing a structured method for defining and calculating user-centric metrics.\n\nOffering both gradient-based and black-box methods for metric computation enables broader application" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a unified causal framework for auditing recommender systems, specifically to address ethical concerns such as user agency, stability, and reachability. It categorizes auditing metrics from a causal perspective and introduces two key metrics, past- and future-reachability, and stability, which measure a user’s ability to influence recommendations. The empirical studies evaluate the metrics on different recommender models, highlighting the trade-offs between user influence on recommendations and system stability." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The framework’s reliance on specific causal assumptions and models may reduce its generalizability across diverse recommender systems.\n\nThe paper lacks a discussion about the differences between recommendation systems." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see them in the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1- This paper provides comprehensive details on the background of the problem.\n\nS2- The authors give detailed experiment settings, which improves the reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors pay attention to recommender system auditing from a causal perspective, and point out the lack of metrics for auditing user agency in the recommendation process. Therefore, two metrics are proposed, including future- and past-reachability and stability, which can measure the impact of users on their own and other users. To calculate these metrics, the authors also design a gradient-based and a black-box approach."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1-The motivation of this paper is not quite clear. For example, what’s the actual relationship between user agency and ethical concerns? \n\nW2-The experiments are only conducted on ML-1M, which is insufficient to establish the universality of the conclusions since recommendation scenarios are diverse. Experiments on at least one dataset from other recommendation scenarios are needed.\n\nW3- In Figure 3 for the distribution of past instability values, for MF, Past-5 shows lower proportion of 0.0 than Past-1, but for RRN, Past-5 presents higher proportion of 0.0 than Past-1. Could you please explain the reason for this contrary result?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In addition to **Q1** to **Q5** mentioned in the Weaknesses, I have several other questions:\n\n- **Q6**: What is the relationship between the proposed metrics and recommendation performance? Does a stronger recommendation model perform better according to these metrics?\n\n- **Q7**: The metric comparisons in Figure 3 are described but lack corresponding explanations. 
For instance, why do some items show \"a user’s recommended list is either heavily affected by the actions of an adversary or is minimally affected by them\"?\n\n- **Q8**: Is the time horizon parameter in the experimental parameters equivalent to $k$ in Definitions 4.1 and 4.2? If not, how is $k$ set in the experiments?\n\nI am happy to engage in further discussion, and if these issues are addressed, I am willing to reconsider the score." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Auditing recommender systems is a highly meaningful area of study, and the paper contributes valuable insights.\n- The article is well-written and clearly articulated, making complex concepts accessible.\n- It provides methods for auditing from both white-box and black-box perspectives, catering to different levels of system access." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors adopt a causal perspective on recommender system auditing and present a general method for defining auditing metrics. Within this overarching causal auditing framework, they categorize existing audit metrics. Leveraging their framework, they propose two types of metrics: future-/past-reachability and stability, which respectively measure a user's ability to influence recommendations for themselves and for other users. Additionally, they introduce a gradient-based method and a black-box method for calculating these metrics, allowing auditors to assess them at various levels of system access. Empirically, the authors demonstrate the effectiveness of their proposed metrics and use them to examine the design of recommender systems." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **W1**: **Ambiguity in Definitions**: The definitions in the article lack detailed explanations, which may lead to ambiguity. For example:\n - **Q1-1**: In Definitions 4.1 and 4.2, the authors consider only the intervention on $O_{i,t}$ without accounting for its effect on $A_{i,t+1}$. Why was this setting chosen?\n - **Q1-2**: Are Definitions 4.1 and 4.2 consistent? Specifically, does past-$k$ at time $t+k$ equal future-$k$ at time $t$? It would be helpful if the authors could address this question both intuitively and formally.\n\n- **W2**: **Limited Analysis Scope**: The analysis in Section 5 is confined to $k=1$, representing only a special case of the broader definitions provided.\n - **Q2**: Please describe how the corresponding white-box and black-box methods would operate when $k > 1$.\n\n- **W3**: **Practical Applicability Concerns**: There is a gap between the theoretical propositions and practical scenarios.\n - **Q3**: Proposition 5.1 requires fixing item embeddings, while Proposition 5.2 requires fixing user embeddings. Since these conditions are difficult to meet in real recommender systems, how does this gap affect practical auditing?\n\n- **W4**: **Lack of Experimental Rationale**: Certain experimental setups lack clear justification.\n - **Q4-1**: Section 6.1 mentions different policies for future and past metrics. Why was this setup chosen? Please explain the rationale behind this decision.\n\n- **W5**: **Incomplete Experimental Validation**:\n - **Q5-1**: The use of a single dataset limits the experimental scope and generalizability of the findings.\n - **Q5-2**: The current experiments focus on analyzing existing models within the proposed framework but do not clarify why this framework or these metrics are more valid than existing auditing methods. 
Additional experiments, such as straightforward case studies, are needed to further validate the framework." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0vMLqSdsKW},\nnote={under review}\n}" }, "abstract": { "value": "As recommender systems become widely deployed in different domains, they increasingly influence their users’ beliefs and preferences. Auditing recommender systems is crucial as it not only ensures the improvement of recommendation algorithms but also provides ways to assess and address ethical concerns surrounding them. In this work, we view recommender system auditing from a causal lens and provide a general recipe for defining auditing metrics. Under this general causal auditing framework, we categorize existing auditing metrics and identify gaps in them—notably, the lack of metrics for auditing user agency while accounting for the multi-step dynamics of the recommendation process. We leverage our framework and propose two classes of such metrics: future- and past-reachability and stability, that measure the ability of a user to influence their own and other users’ recommendations, respectively. We provide both a gradient-based and a black-box approach for computing these metrics, allowing the auditor to compute them under different levels of access to the recommender system. Empirically, we demonstrate the efficacy of methods for computing the proposed metrics and inspect the design of recommender systems through these proposed metrics." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "recommender systems", "causality", "evaluation", "auditing", "machine learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b72adb3a7114b3936d6ba44bf74c115ceb2619c2.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/e7074aefe8a9a341f2948af3081f1209000c552e.zip" }, "title": { "value": "A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0vtftmYQGV
SNAP-TTA: Sparse Test-Time Adaptation for Latency-Sensitive Applications
main
Active
Test-Time Adaptation;Unsupervised Domain Adaptation
transfer learning, meta learning, and lifelong learning
3;5;5;6
5;3;4;2
2;3;2;3
2;2;2;2
2;3;2;3
4.75
3.5
2.5
2
2.5
-0.923381
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I have some comments as follows:\n\n1. Some results for latency and performance metrics on mobile or embedded systems would be helpful to further validate the method’s effectiveness and robustness.\n\n\n2. Some in-depth analysis of specific limitations would be helpful, such as how memory overhead might impact performance on resource-constrained devices and how SNAP-TTA handles highly dynamic data distributions in real-world applications. Additionally, there is no discussion on potential trade-offs between latency reduction and accuracy under different conditions.\n\n\n3. The combined CnDRM+IoBMN method performs best, but the contribution of each component is not discussed. A brief explanation of how they work together would improve clarity. Table 5 only shows results at an adaptation rate of 0.1; the authors could mention that the complete data is in the appendix." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper addresses the challenge of achieving high adaptation accuracy while maintaining computational efficiency in Sparse Test-Time Adaptation (STTA), where updates rely on only a small subset of data.\n2. 
SNAP-TTA demonstrates improved classification accuracy across adaptation rates (0.01 to 0.5) compared to baseline TTA methods on CIFAR10-C, CIFAR100-C, and ImageNet-C. At an adaptation rate of 0.1, SNAP-TTA reduces latency by up to 87.5% while mitigating accuracy loss, validating its effectiveness in STTA\n3. IoBMN combines memory statistics from domain-representative samples with current inference batch statistics, using a soft shrinkage function to balance them. This dynamic normalization adjustment during inference effectively addresses domain shift, ensuring model adaptability and performance stability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SNAP-TTA, a sparse Test-Time Adaptation (STTA) framework designed for latency-sensitive applications on resource-constrained edge devices. Traditional TTA methods dynamically adjust models using unlabeled test data to handle distribution shifts, but they often incur high computational costs and latency, making them impractical for real-time edge environments. SNAP-TTA addresses these challenges by introducing two key components: (i) Class and Domain Representative Memory (CnDRM), which selects class-representative and domain-representative samples to enable effective adaptation with minimal data, and (ii) Inference-only Batch-aware Memory Normalization (IoBMN), which corrects feature distribution shifts during inference without additional training. By combining SNAP-TTA with five state-of-the-art TTA algorithms, the paper demonstrates that SNAP-TTA achieves significant latency reductions (up to 87.5%) while maintaining competitive accuracy. Experimental results on benchmarks like CIFAR10-C and ImageNet-C show SNAP-TTA’s superior performance in edge settings, making it suitable for real-world, latency-sensitive applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The reliance on a fixed confidence threshold of CnDRM may limit adaptability across varying data distributions and could lead to suboptimal sampling.\n2. In Table 5, accuracy differences between methods are small, without statistical analysis, making it unclear if these differences are significant (In Detailed comments 4)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I am somewhat confused about the latency differences between Tent, EATA, SAR, and SNAP, all of which are sample selection-based methods. Compared to Tent, EATA does not reduce latency because, in EATA’s code, even filtered samples are still used in back-propagation (due to limitations in PyTorch), despite halving the number of samples involved in adaptation. However, in SNAP, latency is reduced. If this reduction is due to engineering optimizations, the same should ideally apply to EATA and SAR for a fair comparison. If not, the comparison could be seen as unfair.\n\nAnother area of confusion is that, based on my experience, EATA generally outperforms Tent and SAR under standard settings. However, the authors’ results show SAR and Tent performing better than EATA, which contradicts my observations. 
Could the authors provide further clarification on this?\n\nDoes the proposed method reduce latency for a single batch or does it show an average improvement over multiple batches?\n\nLastly, would the proposed method be effective for transformer-based models, such as ViT-base?\n\nI strongly encourage the authors to move Table 1 to the Appendix and provide additional results on ImageNet-C with various adaptation rates in the main paper, as the CIFAR-10 results are less critical and not sufficiently convincing. Currently, Table 1 occupies nearly an entire page, which I feel could be better utilized for more impactful content." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The design of the SNAP method is well-motivated and reasonable from the technical perspective. \n\nThe proposed approach is a plug-and-play module that can be integrated with existing TTA methods to reduce adaptation steps and enhance efficiency. \n\nExperimental results underscore the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of test-time adaptation for out-of-distribution generalization. To reduce the adaptation rate and improve the overall latency of TTA, the authors propose a SNAP framework that selects partial samples for adaptation. Experimental results highlight the potential of the proposed method. However, I still have several concerns as outlined below." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "On edge devices, the most critical factor in determining whether a TTA method is feasible is actually peak memory usage, as highlighted by MECTA [A]. While this work does reduce the number of adaptation steps, it does not decrease peak memory usage. 
In this sense, the primary motivation for applying the proposed method to edge devices may be misplaced.\n\n[A] MECTA: Memory-Economic Continual Test-time Adaptation" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed SNAP-TTA framework addresses the latency-accuracy trade-off issue in existing TTA methods for edge devices in some cases. It reduces latency while achieving competitive accuracy, as demonstrated by extensive experiments on multiple benchmarks and with integration of several existing TTA algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on Test-Time Adaptation (TTA) for edge devices with limited computational capacity. The authors propose SNAP-TTA, a sparse TTA framework with two key components, Class and Domain Representative Memory (CnDRM) and Inference-only Batch-aware Memory Normalization (IoBMN), aiming to reduce model adaptation frequency and data usage while maintaining accuracy." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In the background section, the mention of applications like real-time health monitoring for IoT edge devices may not be entirely appropriate as these devices often have extremely limited memory.\nWith limited memory, backpropagation and gradient descent are difficult, if not impossible, on these devices. In this sense, memory should perhaps be prioritized over latency as the primary concern.\n- It is unclear whether the proposed method reduces the delay per batch or the average delay (adaptation occurs once every several batches as shown in Figure 1). If it is the latter, its effectiveness for latency-sensitive applications may be limited as the inference delay could increase significantly every several batches.\n- The method reduces the cost of backpropagation by filtering samples to decrease the inference latency. However, EATA also uses a similar strategy, but in Figure 2, the delay of EATA is the same as that of Tent, and the delay of SAR is inconsistent with the results reported in its original paper.\n- The paper could compare the inference latency in Tables 1, 2, and 3.\n- In Table 6 for ImageNet-C, only the Tent method is compared, ignoring other methods, which could provide more comprehensive and convincing results.\n- In the experiments, it is not clear how the number of participating samples is controlled to meet the adaptation rate. Is it through adjusting the $\\tau_{conf}$ hyperparameter? Also, it is not described how other compared methods meet the adaptation rate.\n- The description of lines 10-15 of the algorithm in the paper is relatively brief, considering its importance for the proposed method. More detailed explanation in the paper would assist readers in understanding."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What are the lower limits of the proposed approach? For example, would SNAP enable TTA on microcontroller units (MCUs) such as Cortex-M MCUs?\n- How memory intensive is the approach? There seem to be some mechanisms in place to keep memory requirements fixed (line 264 ff), but could memory, i.e. RAM, availability still become a bottleneck of the approach on edge systems?\n- I am a bit confused about the hyperparameter \"adaptation rate\": Is this parameter specifically implemented by SNAP or is it implemented by the underlying TTA algorithms? I was wondering because, for example, in Table 1 the accuracy for the TTA algorithms without SNAP-TTA also decreases at lower adaptation rates." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The method is promising in that, at least on a Raspberry Pi 4 and when used together with STTA, SNAP provides a significant reduction in latency, as shown in Table 4, while being able to maintain accuracy comparable to using STTA alone.\n- The authors show empirically that SNAP works well with a number of different TTA algorithms (TENT, CoTTA, EATA, SAR, RoTTA) and with different adaptation rates for different datasets (CIFAR10-C, CIFAR100-C, ImageNet-C)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a sparse test-time adaptation (TTA) framework, which they call SNAP, that improves the latency-accuracy trade-off of existing TTA algorithms to enable practical use of TTA on ede devices.\nTo this end, the authors propose \"CnDRM\", a method for identifying \"important\" samples for training based on class- and domain-representative sampling, and \"IoBMN\", a method for mitigating the effects of domain shifts on the model's internal feature distributions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The claimed contribution of the paper is that SNAP can make existing TTA algorithms more latency efficient and suitable for edge devices. However, this is only demonstrated in Table 4 for one algorithm (STTA) and one target device (Raspberry Pi 4). All other experiments focus only on accuracy. And while it is an important and valuable contribution to properly demonstrate that SNAP does not reduce the effectiveness of the TTA algorithms it is applied to, I think the evaluation overall fails to adequately demonstrate the claimed contribution of latency reduction across various edge devices." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024snaptta,\ntitle={{SNAP}-{TTA}: Sparse Test-Time Adaptation for Latency-Sensitive Applications},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0vtftmYQGV},\nnote={under review}\n}" }, "abstract": { "value": "Test-Time Adaptation (TTA) methods use unlabeled test data to dynamically adjust models in response to distribution changes. However, existing TTA methods are not tailored for practical use on edge devices with limited computational capacity, resulting in a latency-accuracy trade-off. To address this problem, we propose SNAP-TTA, a sparse TTA framework significantly reducing model adaptation frequency and data usage. It achieves competitive accuracy even with an adaptation rate as low as 0.01, meaning the model adapts infrequently and uses only a small portion of the data relative to full adaptation. Our approach involves (i) Class and Domain Representative Memory (CnDRM), which identifies key samples that are both class-representative and domain-representative to facilitate adaptation with minimal data, and (ii) Inference-only Batch-aware Memory Normalization (IoBMN), which leverages representative samples to adjust normalization layers on-the-fly during inference, aligning the model effectively to changing domains. When combined with five state-of-the-art TTA algorithms, SNAP-TTA maintains the performances of these methods even with much-reduced adaptation rates from 0.01 to 0.5, making it suitable for edge devices serving latency-sensitive applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Test-Time Adaptation", "Unsupervised Domain Adaptation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2e690e8f7a8aa470dcc8ec5b53986b57b6603674.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SNAP-TTA: Sparse Test-Time Adaptation for Latency-Sensitive Applications" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0wQCSXJbwt
Temporal-Difference Variational Continual Learning
main
Active
continual learning;online variational inference;temporal-difference learning
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
3;3;5;6
3;5;4;5
2;2;3;3
1;2;3;2
3;2;3;3
4.25
4.25
2.5
2
2.75
0.406181
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Already mentioned in weaknesses section" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Strengths\n\n1. In VCL or VCL variants, the KL regularization loss is formulated using only the posterior distribution from the previous task. In this paper, by contrast, the scheme of using all n posteriors from the previous n steps has a strong advantage for tackling catastrophic forgetting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors proposed a new version of variational continual learning (VCL) which combines an n-step regularization loss with temporal differences. The n-step loss considers all posteriors and log likelihoods from the previous n steps, and the distribution that minimizes the n-step loss can cover all n tasks. As an improved version, TD($\\lambda$)-VCL uses a weighted sum of the log likelihood and KL regularization, and controls the weights using $\\lambda$. In the experiments, TD($\\lambda$)-VCL achieves better performance than other baselines on variations of the MNIST experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses\n\n1.
To minimize Eq.(8), we should store both the memory buffer and the posterior distributions on previous tasks. However, I think that this scheme requires a large amount of memory and is highly inefficient. Most of the VCL variants (UCL [1] or UCB [2]) store only the posterior distribution of the previous task and still outperform VCL and other baselines. \n\n2. The authors should include other baselines ([1], [2], and other regularization-based CL methods). In the PermutedMNIST or Split MNIST experiment, the overall accuracy is too low. In [1] and [2], they achieve much better performance than the proposed methods without using a large amount of memory. Therefore, I think the contribution of TD($\\lambda$)-VCL is too weak.\n\n3. To strengthen the effectiveness of TD($\\lambda$)-VCL, experiments using a CNN architecture with larger datasets should be carried out. I think algorithms that apply only to small-scale scenarios do not have any advantage these days.\n\n\n\n[1] Ahn et al., Uncertainty-based Continual Learning with Adaptive Regularization, NeurIPS 2019\n\n[2] Ebrahimi et al., Uncertainty-guided Continual Learning with Bayesian Neural Networks, ICLR, 2020" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "To improve the impact of the method, the authors could consider building on more recent models and benchmarks or even integrating connections to neural science, potentially aligning the method more closely with the evolving landscape of continual learning." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written and easily understandable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces TD-VCL, aiming to mitigate Catastrophic Forgetting in continual learning (CL) by using a variational framework inspired by reinforcement learning’s temporal-difference (TD) methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This work builds on an earlier approach to variational continual learning. While applying a temporal modification to the variational objective to mitigate model drift is intuitive, and drawing a connection to reinforcement learning is conceptually interesting, this work and its benchmarks feel largely disconnected from recent advances in continual learning. Had this work been published six years ago, it might have been more impactful, but recent developments have rendered variational models less relevant due to their limitations in scalability and stability.\n\nThe experiments are confined to benchmarks like PermutedMNIST, SplitMNIST, and SplitNotMNIST—datasets that are relatively simple and fall short of reflecting real-world continual learning challenges. 
More recent works typically include larger and more complex datasets such as CIFAR-100 and ImageNet, which would provide a more realistic evaluation of the method.\n\nAdditionally, the paper’s evaluation lacks comparisons to newer, stronger baselines in the field. While standard VCL and its variants are included, recent advanced methods, such as ALTA, DER, and L2P, are absent. This omission raises questions about the practical relevance and competitiveness of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "### Please refer to the weakness" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Addressed a Potential Gap in Current Bayesian Continual Learning**: \n The proposed method effectively addresses the issue of Catastrophic Forgetting by utilizing multiple past posterior estimates, which helps to dilute the impact of individual errors that could compound over time.\n\n2. **Enhanced Learning Objectives**: \n By integrating n-Step KL regularization, the model can leverage a broader context from previous tasks, leading to improved performance in continual learning scenarios compared to standard Variational Continual Learning (VCL) methods.\n\n3. 
**Single-Step Optimization**: \n Unlike some existing methods that require complex two-step optimizations or replay mechanisms, this approach simplifies the learning process by naturally incorporating replay into the learning objective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce Variational Continual Learning (VCL), a Bayesian CL approach where posterior distributions are updated recursively, highlighting the compounding effect in VCL of error accumulation due to the objective depending on the posterior of the immediately preceding task. To address this, the paper proposes two main solutions. First, they introduce an n-step KL regularization objective, which incorporates multiple past posterior estimates. This approach reduces the impact of individual errors and enhances the overall reliability of the model. Additionally, the authors draw parallels between their approach and temporal-difference (TD) methods from reinforcement learning – no experiment in RL though. They suggest that integrating concepts from TD learning can further improve learning outcomes by providing a more robust way to handle updates. The proposed methods were validated through experiments against standard VCL techniques and non-variational baselines, using well-known CL benchmarks. The paper also presents detailed theoretical insights to validate the claims made. The results showed improved performance, effectively mitigating the problem of catastrophic forgetting. This research offers valuable insights into developing more robust continual learning frameworks by combining variational inference with temporal-difference learning mechanisms. It would be more interesting to see results with larger models on complex datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## Key Points of Consideration\n\n### 1.
Dependence on Hyperparameter Tuning\n- **Effectiveness Contingency**: The performance of n-Step KL regularization is heavily dependent on the appropriate setting of its hyperparameters. \n\n### 2. Increased Computational Complexity\n- **Robustness vs. Overhead**: While utilizing multiple past estimates can enhance robustness, it may introduce significant computational overhead, particularly in resource-limited environments.\n- **Training and Inference Time**: It is essential to report training and inference times, as Bayesian models are generally slower compared to deterministic counterparts.\n\n### 3. Assumption of IID Tasks\n- **Real-World Applicability**: The framework operates under the assumption that tasks are independent and identically distributed (IID). This assumption may not hold in many real-world scenarios, potentially limiting the framework's applicability.\n\n### 4. Potential for Bias in Estimates\n- **Impact of Biased Estimates**: If earlier posterior estimates are significantly biased, they could adversely affect the learning target, even with proposed mitigation strategies.\n\n### 5. Scalability of the Bayesian Framework\n- **Applicability Limitations**: Focusing on a Bayesian approach may restrict applicability to other models or frameworks that do not align with Bayesian principles. The framework may struggle with complex datasets exhibiting multiple distribution shifts, such as CIFAR10/100 and ImageNet, especially when utilizing larger architectures like ResNets and ViTs.\n\n### 6. Limited Experiments\n- **Validation Scope**: The framework has only been validated on MNIST and its variations and compared solely with the VCL paper. There are other prominent Bayesian continual learning works based on Mean-Field Variational Inference (MFVI), such as UCB [1], UCL [2], and Bayesian Structural Adaptation [3].
It would be beneficial to evaluate these frameworks after applying dilation techniques.\n- **Lack of Analysis**: The main section claims contributions, but there is a lack of empirical analysis in the results section for RL.\n\n## Contribution to Literature\nDespite its limitations, the work presents a valuable contribution to the existing literature on continual learning.\n\n## Questions for Further Clarification\n1. **Learning Strategy**: For SplitMNIST and SplitNotMNIST, which learning strategy was employed? Was it Task-Incremental Learning (TIL) or Class-Incremental Learning (CIL)?\n2. **Re-weighting Posteriors**: What is the intuition behind re-weighting the posteriors with KL-divergence to mitigate error accumulation? What are the implications when \\( n = t \\)?\n3. **Exemplar-Free Setting**: How does the framework perform in an exemplar-free setting?\n\nI will be happy to increase the score if the authors show empirical validation that the framework is scalable to larger models and complex datasets\n### References\n[1] Ahn, Hongjoon, et al. \"Uncertainty-based continual learning with adaptive regularization.\" Advances in neural information processing systems 32 (2019).\n\n[2] Ebrahimi, Sayna, et al. \"Uncertainty-guided continual learning in Bayesian neural networks–Extended abstract.\" Proc. IEEE Conf. Comput. Vis. Pattern Recognition (CVPR). 2018.\n\n[3] Kumar, Abhishek, Sunabha Chatterjee, and Piyush Rai. \"Bayesian structural adaptation for continual learning.\" International Conference on Machine Learning. PMLR, 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. As most recent works on Bayesian continual learning [1,2] experiment with CIFAR and tiny ImageNet, it would be interesting to see the results when applied to such relatively more complex datasets.\n2. Since the proposed method incorporates a replay buffer, it would be interesting to see how it compares in a class-incremental learning setting against replay-based methods like ER [3].\n\n[1] Kumar, A., Chatterjee, S., Rai, P. (2021). Bayesian Structural Adaptation for Continual Learning. In Proceedings of the 38th International Conference on Machine Learning (pp. 5850–5860). PMLR.\n\n[2] Thapa, J., Li, R. (2024). Bayesian Adaptation of Network Depth and Width for Continual Learning. In Forty-first International Conference on Machine Learning.\n\n[3] Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc’Aurelio Ranzato. Continual learning with tiny episodic memories. arXiv preprint arXiv:1902.10486, 2019." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper makes a significant contribution by drawing on Temporal-Difference methods to mitigate error accumulation in variational continual learning. Thus proposed formulation allows the regularization using past $n$ posteriors and incorporation of a replay buffer for previous $n$ tasks into the principled framework of variational continual learning. \n2. The experiments show a performance boost compared to the baselines. The propositions and their proofs further enhance the strength of this work. \n3. The paper is well-organized and easy to follow. 
The authors provide a thorough analysis of their method on benchmark datasets, along with sensitivity analysis of hyper-parameters." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on mitigating the issue of cumulative error accumulation in variational continual learning due to relying on a single posterior from the past task. The paper formulates n-Step KL-VCL, which allows for regularizing network updates using past n posteriors. In doing so, it formulates the likelihood term to integrate replay samples from past n tasks. Furthermore, it proposes TD($\\lambda$)-VCL, which connects variational continual learning with TD methods from reinforcement learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. One major weakness is that the benchmarks include small-scale MNIST variants (permuted MNIST and single-headed MNIST/not-MNIST tasks) only.\n2. The benchmarks are constrained to the task-incremental learning, where the task identifier is provided during prediction. The paper's claim of effort to raise the standards for evaluating continual learning is not strong, as recent works commonly focus on the more challenging class-incremental learning setting, which doesn't require task identifiers for prediction.\n\nI would be happy to raise the score if these weaknesses and the following questions are addressed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024temporaldifference,\ntitle={Temporal-Difference Variational Continual Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0wQCSXJbwt},\nnote={under review}\n}" }, "abstract": { "value": "A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks. 
This adaptability allows them to respond to potentially inevitable shifts in the data-generating distribution over time. However, in Continual Learning (CL) settings, models often struggle to balance learning new tasks (plasticity) with retaining previous knowledge (memory stability). Consequently, they are susceptible to Catastrophic Forgetting, which degrades performance and undermines the reliability of deployed systems. Variational Continual Learning methods tackle this challenge by employing a learning objective that recursively updates the posterior distribution and enforces it to stay close to the latest posterior estimate. Nonetheless, we argue that these methods may be ineffective due to compounding approximation errors over successive recursions. To mitigate this, we propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations, preventing individual errors from dominating future posterior updates and compounding over time. We reveal insightful connections between these objectives and Temporal-Difference methods, a popular learning mechanism in Reinforcement Learning and Neuroscience. We evaluate the proposed objectives on challenging versions of popular CL benchmarks, demonstrating that they outperform standard Variational CL methods and non-variational baselines, effectively alleviating Catastrophic Forgetting." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "continual learning", "online variational inference", "temporal-difference learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ef9665dbc76c5ceea264c43c13b6615819c07f1c.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Temporal-Difference Variational Continual Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0wfmHoKQX6
Replicate and Quantize: A Plug-and-Play Strategy for Load Balancing in Sparse Mixture-of-Experts LLMs
main
Active
mixture-of-experts;load balance
other topics in machine learning (i.e., none of the above)
3;3;5;5
3;4;4;2
2;2;2;2
2;2;2;2
3;2;3;2
4
3.25
2
2
2.5
-0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Can you provide more detailed implementation details, including the specific quantization techniques and hyperparameters used in your experiments, to facilitate reproducibility?\n\n2) Could you conduct additional ablation studies to demonstrate the individual contributions of the replication and quantization components in your proposed method?\n\n3) How does your method perform under different levels of model sparsity and varying numbers of experts in the SMoE models?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The \"Replicate and Quantize\" strategy is a novel approach that dynamically addresses load imbalance in SMoE models without requiring extensive retraining.\n\n2) The proposed strategy is plug-and-play, making it easy to integrate with existing models and practical for real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel strategy called \"Replicate and Quantize\" for addressing load balancing issues in Sparse Mixture-of-Experts (SMoE) models. The authors systematically analyze the performance and functionality of each expert and introduce a metric to evaluate load balance. 
They propose a dynamic plug-and-play strategy that is both trainingless and near-lossless, effectively resolving load balancing problems by replicating heavily used experts with lower-bit quantized versions and quantizing the least important experts to fit within the memory budget. Empirical results demonstrate that this approach significantly reduces load imbalance with minimal impact on model performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The paper lacks detailed implementation specifics, such as the exact quantization methods and hyperparameters used.\n\n2) There is a need for more extensive ablation studies to isolate and demonstrate the contributions of the replication and quantization components individually." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1. Error on Page 6, line 288: Are the X-axis and Y-axis labels inverted?\n\nQ2. Should R&Q identify heavy-hitter and important experts for each individual task, or can the identified experts be reused across tasks? The motivation behind this question is that heavy-hitters may vary depending on task characteristics. For example, experts 1-3 might be heavy-hitters for task A, while different experts could be heavy-hitters for task B.\n\nQ3. 
While resolving load imbalance could theoretically improve computational efficiency, how does R&Q empirically achieve this efficiency gain? Could it actually increase inference latency due to the quantized experts? I’m asking this because the experiment section lacks an empirical analysis of memory and latency improvements. A strong answer to this question would require empirical results.\n\nQ4. Would R&Q maintain performance on more challenging tasks, such as generation tasks (e.g., perplexity, code generation, MT-Bench, etc.)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1. No retraining is required. \n\nS2. Near-original accuracy (at least on classification / MCQ tasks)\n\nS3. I like how they distinguished between heavy-hitter experts and important experts, which could easily be confused as the same. They also conducted experiments to show that these concepts are distinct, although there is some correlation between them." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a plug-and-play approach (R&Q) for addressing load imbalance in Sparse Mixture-of-Experts models to improve computational efficiency without retraining. R&Q literally replicates heavily used experts in a quantized form and quantizes less important experts to maintain memory efficiency. Minimal impact on performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. Weak presentation of algorithms and figures. Some figures lack any caption or explanation. In Algorithm 1, the choice of variable names is awkward and confusing. For example, l(x), count(expert_chosen), argmax(expert_num), EC, etc., need to be clarified with better names.\n\nW2. Weak baseline.
The baseline experiments were only conducted within their framework (ours vs. random vs. heavy-hitter). They lack comparisons with other techniques that address load balancing. \n\nW3. Weak empirical analysis on computational efficiency gain. While their experiments show that R&Q improves load balancing compared to naive techniques, they don't demonstrate how this improvement directly translates to reduced inference latency. This is critical because the use of quantization could often slow down inference.\n\nW4. Weak empirical analysis on more challenging tasks, such as generation tasks (e.g., perplexity, code generation, MT-Bench, etc.)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Fig. 2 shows that the most important expert is the expert that receives the most tokens. It seems like it is rejecting, instead of confirming the authors' proposition, that the heavy-hitters are not necessarily the most important expert. Wouldn't quantizing expert 3, the heavy hitter in this case lead to performance degradation?\n\n- How does the router adapt to the case where the most important expert is replicated? Will it evenly distribute its tokens to each GPU device?\n\n- How are the expert loaded on the GPU? Are the other experts completely unaffected?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper studies an interesting problem, which is the load imbalance of MoE LLMs in inference scenarios.\n- The author conducted experiments on various tasks and model types, creating a comprehensive overview of the impact of the proposed strategy on the performance of the model." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Replicate and Quantize, an inference-time strategy that aims to mitigate load imbalance on Mixture-of-Expert based Large Language Models. The author claimed that there exist differences between heavy-hitters and important experts in MoE models, and proposed to 1) quantize the least important expert for resource savings, and 2) replicate a quantized version of heavy hitters to reduce load imbalance. Results show that the authors' strategy improved load imbalance without reducing too much performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper's claim on its novelty, that the load imbalance of MoE models has not been studied for inference scenarios, is wrong. Plenty of research have focused on the inference scenario, such as [1], [2], [3], [4] and [5], and many of them have provided a thorough characterization of the workload already. Quantization of the MoE model has also been studied in [6]. The author should conduct **a more thorough review on the existing literature** and discuss the difference of this work with these existing ones.\n\n- The Load Imbalance Score defined at Sec 3.1 is an ill-defined metric. The overall load in a dataset is not directly related to the inference performance of the MoE model. 
It is the load in a certain batch that would have a major impact on the robustness of the model (preventing OOM errors) and latency (of all-to-all communications).\n\n- Algorithm 1 seems to be unnecessary. The search is quite straightforward.\n\n- The authors' proposition, that the heavy-hitters are not necessarily the most important expert, seems to be *refuted* by the presented data in Fig. 2. See questions below.\n\n- The most important metrics, which are the inference performance of the proposed system (**memory consumption, latency, hardware utilization, and so on**), are not studied in this work. These are the most important reasons one would like to reduce the load imbalance.\n\n- Line 216, \"Wanda metric\" has been referenced, but only formally defined on line 237.\n\n- The paper is not well formatted. For example:\n - The citations are not correctly adapted to the ICLR format. e.g. Line 107 -- Jacobs et al. Jacobs et al..\n - Missing spaces \",or\" on line 115.\n - Missing reference on line 120.\n - Missing spaces \".For\" on line 406.\n \n- The authors have not provided any code or reproducibility statement.\n\n\n[1] Huang, Haiyang, et al. \"Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference.\" arXiv preprint arXiv:2303.06182 (2023).\n\n[2] Gale, Trevor, et al. \"Megablocks: Efficient sparse training with mixture-of-experts.\" Proceedings of Machine Learning and Systems 5 (2023): 288-304.\n\n[3] Kong, Rui, et al. \"Serving MoE Models on Resource-constrained Edge Devices via Dynamic Expert Swapping.\" arXiv preprint arXiv:2308.15030 (2023).\n\n[4] Li, Jiamin, et al. \"Accelerating distributed MoE training and inference with lina.\" 2023 USENIX Annual Technical Conference (USENIX ATC 23). 2023.\n\n[5] Hwang, Ranggi, et al. \"Pre-gated moe: An algorithm-system co-design for fast and scalable mixture-of-expert inference.\" 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA).
IEEE, 2024.\n\n[6] Kim, Young Jin, Raffy Fahim, and Hany Hassan Awadalla. \"Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness.\" arXiv preprint arXiv:2310.02410 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refer to the questions in Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1)\tThe proposed idea is simple and sound.\n2)\tOverall, this work is well-organized and easy to follow.\n3)\tThe authors have tested on various MoE base models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work provides a simple yet effective strategy for load balancing in MoE-based LLMs. Specifically, the authors first find the most heavy expert and the less important experts, and (a) replicate and quantize most heavy experts, (b) quantize less important experts. In experiments, the authors have deployed the proposed method on 4 MoE models, achieving comparable results with more balanced load among experts. In conclusion, the proposed model is sound and easy to deploy, while more in-depth evaluations and analyses should be conducted." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1)\tThe central idea of this work is the replicate and quantize strategy. Firstly, an appropriate ablation study should be conducted to verify the effectiveness of both strategies on the most heavy experts and less important experts. Secondly, are the selections of the most heavy experts and less important experts essential? If we further quantize other experts while maintaining the activated parameters, what will be the results?\n2)\tIn Section 3.4, the authors merely give the importance score/load results on one task, PIQA. Does this phenomenon also exist in other tasks and other MoE blocks? The authors are suggested to give more quantitative indicators (e.g., correlation coefficients in different settings) to support their claims.\n3)\tIn Related work, an in-depth analysis of other load balance methods (and why they do not work well in the experiments) should be given.\n4)\tIn experiments, although the authors claimed that “before that, we have tried to use the different tuning strategies to adjust the router mechanism to solve the load imbalance issues, Clearly, it does not work as we expected, and the part of the strategies emplifies the imbalanced distribution among the different experts”, which strategies are used and the corresponding results should be given. Currently, only using the raw setting as the baseline is not sufficient.\n5)\tThe experimental details are insufficient. For instance, the details of adopting the proposed method on DeepSeekMoE should be given. DeepSeekMoE adopts shared and specialized experts; are the shared experts also replicated? Moreover, it also multiplies the number of experts, which shares a similar idea with the “replicate” heavy expert part in this work.\n6)\tThe actual inference speed and cost should be given.
Do all comparisons share the same activated parameters in inference?\n7)\tTypos, e.g., Page2, missing reference in the first paragraph.\n8)\tThe scalability of the proposed method is encouraged to be evaluated or discussed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024replicate,\ntitle={Replicate and Quantize: A Plug-and-Play Strategy for Load Balancing in Sparse Mixture-of-Experts {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0wfmHoKQX6},\nnote={under review}\n}" }, "abstract": { "value": "While the rapid increase in the number of model parameters poses significant benefits to the development of large language models (LLMs), computational costs are also raised. In order to tackle this difficulty, the sparse mixture-of-experts(SMoE) model was introduced to tackle LLM scaling by activating a subset of experts per input. Therefore, how to leverage the knowledge of multiple experts will be an important topic. Normally, in the most extreme scenario, employing a balanced expert allocation system will result in a time-saving of $n$ times compared to utilizing only a single expert. Thus, in this paper we (1) systematically analyzed the performance and functionality of each expert. (2) Introduced a metric to fill the blank of evaluating load balance for the sparse mixture-of-experts(SMoE) model, based on the observation. (3) Proposed a dynamic plug-and-play strategy that is both trainingless and near-lossless, effectively resolving the load balancing problem, in contrast to previous works that focused on training strategies." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "mixture-of-experts;load balance" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b69d7e11da1a869ce7af14ee99afb21f3e46b11b.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Replicate and Quantize: A Plug-and-Play Strategy for Load Balancing in Sparse Mixture-of-Experts LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0whx8MhysK
Influence-Guided Diffusion for Dataset Distillation
main
Active
Dataset Distillation;Dataset Condensation;Diffusion Model;Guided Diffusion Generation
applications to computer vision, audio, language, and other modalities
5;5;6;6;8
4;4;4;4;4
3;3;3;3;3
2;2;3;3;3
3;3;3;3;4
6
4
3
2.6
3.2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As shown in Table 5, the main contributions of the authors include the proposed influence guidance and deviation guidance. What is the relationship between these contributions and the \"train-free\" concept? Notably, even when these components are excluded from Equation 9, good results are still achieved." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is easy to read and understand.\n2. IGD appears to be superior to existing diffusion model-based approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenges of dataset distillation, which aims to create compact yet effective datasets that can substitute for much larger original datasets in training. Existing methods often face limitations when dealing with large, high-resolution datasets due to high resource costs and suboptimal performance, largely due to sample-wise optimizations in the pixel space.
To overcome these challenges, the authors propose framing dataset distillation as a controlled diffusion generation task, leveraging the capabilities of diffusion generative models to learn target dataset distributions and generate high-quality data tailored for training.\n\nThe authors introduce the Influence-Guided Diffusion (IGD) sampling framework, which generates training-effective data without retraining the diffusion models. This is achieved by establishing a connection between the goal of dataset distillation and the trajectory influence function, using this function as an indicator to guide the diffusion process toward promoting data influence and enhancing diversity. The proposed IGD method is shown to significantly improve the training performance of distilled datasets and achieves state-of-the-art results in distilling ImageNet datasets. Notably, the method reaches an impressive performance of 60.3% on ImageNet-1K with IPC (Images Per Class) set to 50." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In the introduction, the authors introduce the concept of Influence-Guided without clearly explaining what \"Influence\" entails or why it is used for guidance. The motivation is not well established. While Figure 1 effectively shows performance, adding an additional subfigure to illustrate the motivation or highlight differences from previous methods might be more valuable.\n\n2. The primary contribution of the authors is the proposal of a train-free diffusion framework for Dataset Distillation. While train-free approaches are common in the AIGC field, how does the proposed method differ from existing ones?\n\n3. The experiments only report results on ImageNet, without including results on classic datasets such as CIFAR-10 and CIFAR-100." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The performance is still very far from the original datasets. What is the least IPC needed to achieve performance similar to full data?\n2. Which experiments show an improvement in diversity? The diversity should be measured in terms of FID/Recall values.\n3. Equation (7) uses cosine similarity instead of the inner product; is this purely due to experimental results or based on some other hypothesis?\n4. How would the method perform on different tasks apart from classification?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The motivation is reasonable.\n2. The paper is well written.\n3. The performance is significant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work proposes a guidance scheme for dataset distillation with two main contributions. The first is to do gradient matching between sampled data and the training data, and the second is to add diversity constraints among samples inside a class. The experimental results show clear improvement over other baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.
The performance is still far from the full dataset.\n2. Lack of diversity measurement experiments.\n3. The design of equation (7) lacks clarification.\n4. The application of the work does not seem flexible. From my understanding, each architecture requires its own distillation. Is it possible to distill once and use that distilled dataset to validate across models? I can see Table 4 for the robustness between models, yet the performance is not the same as for the models used for guidance. This raises a concern about real-world application due to the computational expense." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. IGD is a training-free framework that can be easily used with any pretrained diffusion model.\n2. The problem this paper aims to solve is clear, and the proposed methods address data influence and the diversity constraint theoretically.\n3. The performance improvement of IGD used in DiT and Minimax finetuned DiT is obvious.\n4. The ablation study is adequate, including all proposed methods and hyperparameters."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Influence-Guided Diffusion (IGD) for dataset distillation. IGD solves the problem of poor performance and high resource costs of existing methods at high resolution. IGD proposes a training-free sampling framework which can be used with pretrained diffusion models to generate training-effective data. Extensive experiments show IGD achieves state-of-the-art performance in distilling ImageNet datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In Table 1, comparisons with the latest methods, such as RDED [1] mentioned in Section 4.1, are missing.\n2. Although the proposed method IGD is training-free for diffusion models, it requires training a model to collect the surrogate checkpoints used in Eq. 7. The time consumption should be listed, as the paper emphasizes efficiency.\n3. The model used in Eq. 7 is ConvNet-6. If we change to a bigger model like Swin Transformer, will the performance be better? Or is this model choice relatively insensitive?\n4. Can IGD be used with other efficient diffusion sampling strategies like DPM-Solver [2]?\n5. The generation time should be compared between IGD and other methods like the current SOTA RDED [1].\n\nReference:\n\n[1]. Sun P, Shi B, Yu D, et al. On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 9390-9399.\n\n[2]. Lu C, Zhou Y, Bao F, et al. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps[J]. Advances in Neural Information Processing Systems, 2022, 35: 5775-5787."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to W1 and W2." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of using guided diffusion to generate samples for distillation is an interesting application of diffusion models.\n\n2. The proposed two guidance terms, i.e., increasing the gradient similarity and sample diversity, are well-motivated, simple, and intuitive. \n\n3. The paper is well written and presented in general.\n\n4. In the experiments, the proposed method achieves better performance and shows effectiveness. Comprehensive ablation studies and analyses are provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper works on dataset distillation by generating the distilled dataset using diffusion models guided by an influence function. In the implementation, two guidance terms are used. One is to increase the similarity between the gradient using the generated sample and the average gradient using the original training samples. The other is to decrease the similarity between generated samples. Experiments are conducted on ImageNette and ImageWoof." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There are a lot of writing issues in the math part:\n- It is unclear how the derivation is transferred from stepwise (Eq4) to epochwise (Eq5).\n- C is defined twice, on L096 and L110.\n- In Sec 2.2, some instances of z are bold and some are not.\n- In L132, D is not clearly defined.\n- In Eq5, theta_e and theta_E are not clearly defined.\n\n2. The proposed method seems to have a high computational cost. Both computing and storage costs are high for the gradient calculation in L294. The similarity calculation with respect to all generated samples in Eq8 is also costly. The computing and storage costs should be clearly provided and analysed in all the experiment sections.\n\n3. I doubt the statement that this method is training-free. I agree that it is training-free as commonly understood in the diffusion community. But there are still a lot of training efforts here. It is only training-free given all the checkpoints, stored gradients, and pre-trained diffusion models. I would suggest revising this statement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please address the above weaknesses."
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Strengths:\n\n1.\tThe paper is easy to follow. The motivation of importance-guided synthesis is clear and the method is well presented. \n\n2.\tThe new idea is reasonable and neat, bridging diffusion-based generative models and importance-based sample selection.\n\n3.\tThe results are promising. The performance improvements on challenging datasets are remarkable. Extensive ablation studies, cross-architecture validation and visualization are implemented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a guided diffusion generation method for the dataset distillation problem. The trajectory influence and deviation guidance are introduced to the vanilla diffusion process for generating synthetic samples as efficient training data. The results on ImageNet and its subsets demonstrate the improvements over the baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses:\n\n1.\tThere are several hyper-parameters in the algorithm, such as influence factor, deviation factor, scale, guided range, etc. The sensitivity of performance to these hyper-parameters should be studied. How do the authors search the hyper-parameters?\n\n2.\tI have a concern about whether the new method causes collapse of the distribution of the generated data, even with the deviation guidance. \n\n3.\tSince DiT and VAE pre-trained on a huge dataset are utilized for generating training samples on small datasets, this naturally brings advantages over traditional dataset distillation methods. Hence, more recent methods that also use pre-trained diffusion models should be compared with.
\n\n4.\tThere are also some similar works that improve the efficiency of diffusion-model generated training samples, such as [1], which should also be discussed in the paper.\n\n[1] Real-Fake: Effective Training Data Synthesis Through Distribution Matching, ICLR 2024." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a training-free influence-guided diffusion sampling method as a novel dataset distillation scheme and achieve state-of-the-art performance in distilling full-sized ImageNet datasets." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024influenceguided,\ntitle={Influence-Guided Diffusion for Dataset Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0whx8MhysK},\nnote={under review}\n}" }, "abstract": { "value": "Dataset distillation aims to streamline the training process by creating a compact yet effective dataset for a much larger original dataset. However, existing methods often struggle with distilling large, high-resolution datasets due to prohibitive resource costs and limited performance, primarily stemming from sample-wise optimizations in the pixel space. Motivated by the remarkable capabilities of diffusion generative models in learning target dataset distributions and controllably sampling high-quality data tailored to user needs, we propose framing dataset distillation as a controlled diffusion generation task aimed at generating data specifically tailored for effective training purposes. By establishing a correlation between the overarching objective of dataset distillation and the trajectory influence function, we introduce the Influence-Guided Diffusion (IGD) sampling framework to generate training-effective data without the need to retrain diffusion models. 
An efficient guided function is designed by leveraging the trajectory influence function as an indicator to steer diffusions to produce data with influence promotion and diversity enhancement. Extensive experiments show that the training performance of distilled datasets generated by diffusions can be significantly improved by integrating with our IGD method and achieving state-of-the-art performance in distilling ImageNet datasets. Particularly, an exceptional result is achieved on the ImageNet-1K, reaching 60.3\\% at IPC=50." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Dataset Distillation", "Dataset Condensation", "Diffusion Model", "Guided Diffusion Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/072c5a57beddf2a92422111be73b6097748b8eb9.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/008b3d40ed63394140722aa8a9e2fc60fb21b877.zip" }, "title": { "value": "Influence-Guided Diffusion for Dataset Distillation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0wmfzWPAFu
Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity
main
Active
generalized smoothness;first-order optimization;convex optimization;Polyak stepsizes;gradient clipping;adaptive optimization;acceleration
optimization
5;5;5;6;8
3;4;3;5;4
2;3;3;3;4
2;2;1;3;4
3;3;3;4;4
5.8
3.8
3
2.4
3.4
0.412514
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**Typos and minor suggestions:**\n\nLine 203: Since you cite the conference version rather than the arXiv version of the paper, please refer to it as \"Proposition 1\".\n\nLine 383-387 / Inequality (20-21): I suggest that the authors avoid breaking the entire inequality into two labels, which can cause ambiguity as seen in the second step of (73). Using an underbrace might be a better option.\n\nLine 419: \"$\\varepsilon$-solution\".\n\nLine 1095: a multiplier of $\\exp(\\eta)$ is missing in the last term.\n\n**Questions:**\n\n* As I mentioned in the Weaknesses section, are there any practical examples (either theoretical or experimental) that can justify the importance of the *improved* complexities?\n* For Theorem 3.1, it appears that you integrate the analyses from Li et al. (2024) and Koloskova et al. (2023), substituting the normalization term $\\ell(G)$ in $\\eta$ into something related to $\\|\\nabla f(x^k)\\|$, so that the sequence enjoys the monotonic properties and can be analyzed in two cases. Could you elaborate on the intuition behind this approach?\n* You mention new technical results for $(L_0,L_1)$-smooth functions in Section 1.3. Could you specify these results for reference? \n\n\n\nHaochuan Li, Jian Qian, Yi Tian, Alexander Rakhlin, and Ali Jadbabaie. Convex and non-convex optimization under generalized smoothness. 
In Advances in Neural Information Processing Systems 36, 2024.\n\nAnastasia Koloskova, Hadrien Hendrikx, and Sebastian U Stich. Revisiting gradient clipping: Stochastic bias and tight convergence guarantees. In Proceedings of the 40th International Conference on Machine Learning, 2023." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper covers existing methods for convex $(L_0,L_1)$-smooth optimization, specifically Gradient Descent with Smoothed Clipping and Polyak Stepsizes. It also provides analyses of Similar Triangles Methods and Adaptive Gradient Descent under the setting of generalized smoothness. The results give an overview of existing and adapted methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper analyzes the iteration complexities of several algorithms targeted at convex $(L_0,L_1)$-smooth optimization. In comparison to previous works, the authors focus on a more refined analysis, including the elimination of dependence on the smoothness parameter $L$ (although it is not the dominant term) for variants of Gradient Descent, as well as improvements on the naive adaption of Adaptive Gradient Descent for generalized smoothness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My major concern is about the significance of the results. To be specific, for the first two variants of Gradient Descent, this paper only improves the non-dominant $\\mathcal{O}(\\sqrt{1/\\varepsilon})$ term of iteration complexity, which is inconsequential for small $\\varepsilon$, i.e., when finding a solution with good quality. 
For the adapted versions of Similar Triangles Method and Adaptive Gradient Descent, the iteration complexity is in the form of $\\mathcal{O}(\\sqrt{L_0L_1A\\exp(L_1A)A^2/\\varepsilon^2})$, where $A$ equals either $R_0$ or $D$, which generally aligns with a specification of Li et al. (2024). Thus, the overall contribution in terms of novel theoretical guarantees appears quite limited to me at this stage." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- In the last part of the equation (15), should it be N+1 - T instead of N+1?\n- Why is it so crucial to choose $\\gamma= 1/4$? In Theorem 6.1, to achieve a rate better than in (25)? Can it be relaxed?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. Paper improved existing convergence results for gradient methods with (smoothed) clipping and Polyak stepsizes.\n2. New convergence results for AdGD under $(L_0, L_1)$-smooth assumption.\n3. Proposed $(L_0, L_1)$-STM recovers the best-know convergence rate for accelerated methods without additional knowledge on $R_0$ and $f(x^0) - f^*$." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on convex $(L_0, L_1)$-smooth optimization and answers important open questions in this domain. In particular, new convergence rates are derived for the gradient method with smoothed clipping and Polyak stepsizes, improving existing results. The best-known convergence rate is derived for $(L_0, L_1)$-STM. Also, new results proved for AdGD and a stochastic case under the additional assumption on a shared minimizer. The statements in the paper are clear, and compared with existing results in the literature, the proofs are correct." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is missing the conclusion and experiments sections. However, the experiments are provided in the appendix.\n2. The assumptions for stochastic problem is restrictive; however, the derived results are new and better than previous ones." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Can a version of the method that is somewhat more agnostic of the knowledge of L0 and L1 be designed, since right now this knowledge is quite necessary for selecting the correct step sizes?\n\n* Ultimately, a high-level takeaway of the work (which is likely also a takeaway that follows from the Chen paper if one could quickly bridge their stationarity to function suboptimality here) seems to be that the results are ultimately of a *local nature*. What I mean by that is that due to the unavoidable dependence of the type $\\exp(\\|x-y\\|)$, the terms involving that in the bound will be large, unless $\\|x-y\\|$ is sufficiently small, and hence, in spirit the results can be viewed as showing that the GD methods studied in the paper are good but only locally. This aspect is not a criticism of the paper, but just a comment on how one can typically interpret bounds that involve exponentials.\n\n* Do the results really offer a significant improvement over Li et al. 2024a (_Convex and Non-convex Optimization Under Generalized Smoothness_). The results provided by Li et al. 2024a for GD and NAG also do not rely on the assumption of L-smoothness, and their acceleration results for NAG **do not** have a dependence on exp(L1). The authors claim that the advantage of their results is that they do not require dependencies on $\\Vert \\nabla f(x_0) \\Vert$ and $f(x_0) - f^*$, as in Li et al. 2024a, and they argue that these quantities could potentially be exponentially large in the worst case (Line 408-420)---But in the reviewer's opinion, assuming these initial quantities to be constants is not a very strong assumption, whereas in comparison, the authors' acceleration result depends on exp(L1), which seems to be less favorable." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper makes a technical contribution to the convergence analysis of gradient descent(like) methods, under the general $(L_0,L_1)$ smoothness for differentiable functions. In the reviewer's opinion, the main contribution is a tighter bound on Clip-GD under the generalized smoothness assumption (e.g., The Polyak stepsize is chosen to minimize the upper bound in (59), so the result for GD-PS is not hard to get after having proved (45) for Clip-GD, while STM and AdGD have exponential terms in their upper bounds, and hence only of marginal interest).\n\nStrengths: This article fills a gap in the analysis of convex L0-L1 problems that are differentiable, and claims to do that without having to resort to small stepsizes as in the recent work of Li et al. The work is reasonably clearly written, and is easy to follow. Some aspects that could do with more discussion are the two-phase nature of GD (with the $>, < L_0/L_1$), and what that may mean when compared with other work (does the same happen in the nonconvex case for instance? how does it relate to bounds from the work of Li et al for instance?)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper takes a closer look at $(L_0,L_1)$-smoothness for the setting of convex optimization. There, it derives more fine-grained convergence rate guarantees than existing work, while discussing extensions to accelerated, stochastic, and certain adaptive settings." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* From a quick skim of the literature, it seems that Zhang et al noted in passing that twice differentiability could be dropped at the expense of some more analysis, but the full details of the differentiable case were worked out in subsequent work. This aspect is not clear from how the current work cites related work, and should be fixed.\n\n* While the related work section is overall fairly good, a more precise statement about the results of Chen et al is needed, especially because that work introduced some of the key technical tools too, and studied the nonconvex case; in particular, it would be worth noting what happens if one trivially takes their nonconvex results, and tries to adapt them to the convex case (by boosting stationarity to function suboptimality using the current assumptions). Also, their slightly more general $\\alpha$-version of Assumption 3 could be noted.\n\n* Section 5 on acceleration could be deferred to the appendix or noted in passing, and more discussion given to the exponential terms in the bound, which are tantamount to saying that essentially no practical acceleration happens, even though the technical result itself is interesting to note. Similarly, the bounds arising in Section 6 should be discussed a bit more, because due to the central assumption of the paper, ultimately the pessimistic exponential terms in D arise.\n\n* The authors's motivation for studying AdGD should be as a result further enhanced: it seems inherently unsuitable to use under the generalized smoothness assumptions since AdGD does not utilize clipping. And the result for the stochastic case (Section 7) requires a common minimizer for all the stochastic components (Assumption 4). Although such an assumption is also used in some works, this assumption is not that weak, and renders the results less applicable." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please refer to the weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The analysis in this work does not rely on the L-smooth assumption, which was required in previous works. In this way, the work proves a convergence rate for the Gradient Descent (GD) method with Gradient Clipping and the Gradient Descent method with Polyak Stepsizes, with the dominant part having a smaller constant depending on $L_0$.\n\n2. This work proposes a new variant of the Similar Triangles Method that accelerates GD.\n\n3. This work provides a faster convergence rate for the Adaptive Gradient Descent Method in $(L_0,L_1)$-smoothness settings compared to the locally L-smooth setting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on analyzing optimization methods for convex problems under $(L_0,L_1)$-smoothness settings. It provides improved convergence rates for the Gradient Descent (GD) method with Gradient Clipping and the Gradient Descent method with Polyak Stepsizes. 
It introduces a new accelerated method based on the Similar Triangles Method and provides new convergence rates for the Adaptive Gradient Descent Method. Finally, it extends the analysis to the stochastic case in overparametrized settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A new acceleration of GD is proposed. It would be helpful to add more remarks to highlight the theoretical merits of this acceleration compared with the STM in (Gasnikov & Nesterov, 2016). Additionally, it would be helpful to numerically compare its performance with the STM in (Gasnikov & Nesterov, 2016).\n\n2. Example 1.3 considers a logistic function with L2 regularization. However, $f(x)$ is not related to L2. It would be better to specify where the L2 regularization is.\n\n3. The discussion after Theorem 7.1 (line 521) claims that the probability must be smaller than $\\frac{8nL_1^2\\|x^0-x^*\\|^2}{\\eta\\nu(N+1)}$. It would be clearer to explain why the probability should be smaller than this value." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-writen, and thus easy to follow. The paper has some signification theoretical contributions for the existing algorithms. There are also new algorithms being adapated from the classical smoothness to the new (L0,L1)-smoothness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper present a study for gradient method for solving optimization problems involving (L0,L1)-smooth objective function, which was first introduced in (Zhang et al, 2020b). The analysis in the current paper is devoted fully for the convex and strongly convex case, where the convergence analysis of several methods is proopsed, both in the deterministic and stochastic setting. However, there are some major issues needed to be addressed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The analysis of the current paper is mostly based on Assymetric and Symmetric (L0,L1) smoothness, which can be hold for functions that are not twice differentiable. This is good since this class cover the original class of function in (Zhang et al, 2020b), where twice differentiability is a must. However, which meaningful classes of functions satisfy Assymetric/Symmetric (L0,L1) smoothness while not C2? I found that all the examples presented are C2, and thus it seems that the use of Assymetric and Symmetric (L0,L1) smoothness is not neccesary, which reduces the importance of the current paper much. 
Note that examples of L-smooth functions that are not C2 are diverse, so it is reasonable to use the gradient Lipschitz condition instead of the stronger one, bounded Hessian. I do not think this is the case for (L0,L1) smoothness.\n\n2. Now assume that the function is C2. Lemma 2.1 shows that (L0,L1) smoothness implies (but is not equivalent to) equation (7). In comparison with Lemma 2.2 of (Vankov et al., 2024) below, their equation (2.2) seems to be a sharper inequality and is in fact an equivalent condition. Based on this, I suspect that the results obtained by the paper under review are not as tight as claimed by the authors, especially when compared with (Vankov et al., 2024).\n\n3. The convergence theory of algorithms for (L0,L1)-smooth functions does not explain why they are better than standard gradient descent. For example, can the authors explain why standard GD performs worse than algorithms designed for (L0,L1)-smooth functions in solving logistic regression?\n\nReference.\nD. Vankov, A. Rodomanov, A. Nedich, L. Sankar, S. U. Stich, Optimizing (L0,L1)-Smooth Functions by Gradient Methods, https://arxiv.org/abs/2410.10800" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024methods,\ntitle={Methods for Convex \\$(L\\_0,L\\_1)\\$-Smooth Optimization: Clipping, Acceleration, and Adaptivity},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0wmfzWPAFu},\nnote={under review}\n}" }, "abstract": { "value": "Due to the non-smoothness of optimization problems in Machine Learning, generalized smoothness assumptions have gained much attention in recent years. One of the most popular assumptions of this type is $(L_0, L_1)$-smoothness (Zhang et al., 2020). In this paper, we focus on the class of (strongly) convex $(L_0, L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with (Smoothed) Gradient Clipping and for Gradient Descent with Polyak Stepsizes. In contrast to the existing results, our rates do not rely on the standard smoothness assumption and do not suffer from the exponential dependency from the initial distance to the solution. We also extend these results to the stochastic case under the over-parameterization assumption, propose a new accelerated method for convex $(L_0, L_1)$-smooth optimization, and derive new convergence rates for Adaptive Gradient Descent (Malitsky and Mishchenko, 2020)." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generalized smoothness", "first-order optimization", "convex optimization", "Polyak stepsizes", "gradient clipping", "adaptive optimization", "acceleration" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4f08ea818f16991fd10cc1a441e095c3edb8d42d.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0x8wWloW2O
OracleMamba: A Dynamic Market-Guided and Time State Selection Framework for Robust Stock Prediction
main
Active
deep learning;time series
learning on time series and dynamical systems
1;5;5;5
4;4;4;4
2;2;2;2
1;2;2;2
2;3;2;2
4
4
2
1.75
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weakness for details." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper studies the first work that introduces Mamba into stock price forecasting, which could be promising for the development of this subdomain.\n2. The integration of a market-guided module for short-term forecasting and a SelectiveMamba module for long-term stability represents an novel hybrid approach to stock prediction. \n3. The paper is written with good clarity and thus is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents OracleMamba, a new stock prediction framework designed to integrate both short-term and long-term market dynamics using a dynamic market-guided module and a SelectiveMamba module. The model aims to address the limitations of previous joint forecasting models by effectively balancing short-term market volatility and long-term trends. 
OracleMamba uses a comprehensive market-guided gating mechanism that fuses market sentiment and objective market indicators to enhance prediction accuracy, while the SelectiveMamba module captures spectral and temporal features to reduce noise and extract key signals from market data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A key concern is the lack of clarity regarding whether the additional information used to enhance OracleMamba is also used for baseline comparison. It is also unclear how much these features specifically contribute to OracleMamba's performance. If baseline models do not incorporate the same information, the comparison may be unfair.\n2. The exact data point length in CSI300 and CSI800 for model training is not specified. Is the data daily-based or hourly-based? Given that Mamba-based methods are used, a longer financial data sequence might provide an advantage.\n3. Since the paper adopts a Mamba-based solution, it is crucial to evaluate the computational cost, including memory usage and runtime efficiency." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The market state encoding section lacks sufficient details to justify the use of market state information:\n1. The authors failed to describe the subjective context thoroughly. 
There is only a vague mention of analysts' reports and financial documents scraped from unspecified platforms. It is unclear what these documents are, their market coverage, their frequency, and the volume of data involved. This information is crucial. For example, if only quarterly updated earnings reports from some public companies are used, how do they align with daily updated stock prices? Additionally, how noisy is this data? The current version of the paper is problematic and does not justify the use of market context.\n2. It is unclear how the GPT-O1 model is used to convert these textual data into sentiment. The prompts used are not described. The authors did not specify what the sentiment results look like. Are they presented as sentiment scores, sentiment labels, etc.?\n3. The authors should more clearly specify the role of the experts. Do they mean that the experts are the ones who wrote the documents, or are they directly involved in the analysis?\n4. It is unclear how sectors and regions are processed and embedded in the data or how they are used. The authors seem to only mention these aspects without integrating them into their analysis.\n\nThe TSSS structure has several issues:\n1. Why are B and C input-independent? This is more like a basic SSM structure that is time-invariant, while Mamba is designed to be an improvement on such structures.\n2. What is $s$ in the calculation of DSE?\n3. The design of DTE and DSE lacks explanation, and it is unclear how they reflect the benefits claimed by the authors in the relevant section.\n\nExperiment Section:\n1. The setup for the comparison methods is unclear. Are these methods using the same data as the proposed model, or are they only using stock price data?\n2. Many SOTA time series forecasting models are not included in the comparison, such as DLinear, NLinear, Autoformer, Fedformer, PatchTST, etc.\n3. 
Why was the vanilla Transformer model used for comparison when there are many Transformer-based models specifically designed for time series tasks?\n4. Comparisons between the proposed model and Mamba-based models (or Mamba itself) are needed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors proposed a novel 3D scanning mechanism for analyzing financial market information\n2. The problem to be solved is well formulated" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a Mamba-based framework for stock return prediction by leveraging financial market data, such as stock prices, market indices, and market sentiment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The data used in the paper is not clearly specified\n2. The motivation behind the SSM design is not well explained\n3. The experiment lacks comparisons with some important baselines" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the \"Weaknesses\" for your response, especially the first three points." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.The article introduces Mamba into stock price forecasting.\n\n2.The article proposes a 3D scan method to capture interactions across the dimensions of time, stock, and market state.\n\n3.The method proposed in the article performs well in experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The article includes a dynamic market-guided module and the SelectiveMamba module, effectively addressing the challenges posed by noisy data. It introduces Mamba into stock price forecasting and employs a 3D scan method. By integrating market sentiment, the article incorporates subjective factors. Experiments demonstrate its performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The article lacks novelty and sufficient discussion on related work:\n\n a) While using GPT to process market sentiment is indeed innovative, there have been many previous works that utilized text data, including the use of pre-trained language models as in [1].\n\n b) It is difficult to see the relationship between the Time-Spectral method in the article and stock data. It appears to merely apply a time series method to stock data without analyzing how the Time-Spectral method aids stock prediction or offers any special improvements for it. Additionally, frequency methods are also common in time series analysis, as in [2].\n\n c) Inter-stock correlations have been used in many previous articles, as in [3].\n\n2. The article claims that the model can handle noisy data, but I do not see how or why noisy data can be addressed, or what improvements have been made compared to previous methods. Why could previous deep learning methods not handle noisy data while the current one can? 
Furthermore, noise in stock data is often caused by random Brownian motion, which the article does not analyze or explain theoretically.\n\n3. The article mentions the application of the Mamba model, but the method section does not clearly indicate where it is used for those unfamiliar with the Mamba model. Nor does it explain how the Mamba model contributes to stock prediction.\n\n4. The baselines are relatively weak, with only two methods specifically tailored for stock data.\n\n5. The article does not provide the code.\n\n\n[1] Yang, Linyi, et al. \"Numhtml: Numeric-oriented hierarchical transformer model for multi-task financial forecasting.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 10. 2022.\n\n[2] Liu, Shengzhong, et al. \"FOCAL: Contrastive learning for multimodal time-series sensing signals in factorized orthogonal latent space.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[3] Cheng, Rui, and Qing Li. \"Modeling the momentum spillover effect for stock prediction via attribute-driven graph attention networks.\" Proceedings of the AAAI Conference on artificial intelligence. Vol. 35. No. 1. 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Many studies have already combined sentiment analysis with stock price forecasting, so why is there no mention of these studies in this paper?\n- The approach of combining short- and long-term contexts has also been widely studied within the context of stock price forecasting and has been presented at major ML/AI conferences. Why wasn’t this mentioned, and what advantages does this study have compared to those prior works?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "According to the authors, this is the first study to apply Mamba to stock price forecasting (although this claim is questionable due to the lack of a detailed literature review)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is the first to introduce Mamba for stock price forecasting, focusing on efficiently combining various short- and long-term factors necessary for stock prediction. The study aims to efficiently integrate time-series data and text data, such as analyst reports, through the dynamic market-guided module, while combining short- and long-term contexts through the SelectiveMamba module. The model was tested on Chinese stock market data, with results showing that it outperforms baseline models. However, the paper lacks a thorough literature review on stock price forecasting. Only three baseline models are referenced for comparison, two of which are studies published at least three years ago. 
Additionally, while the authors state they used scraped analyst reports, they fail to specify the sources of the reports, the authors of these reports, or the number of reports used, which raises concerns about the credibility of their experiment and data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors claim that incorporating sentiment analysis, such as analyst reports, into stock price forecasting is novel, but many studies have already pursued this approach. Moreover, multiple studies within stock price forecasting have also explored combining short- and long-term contexts. Therefore, the two main contributions claimed by the authors are not particularly new, and the lack of a comprehensive search and mention of previous studies is a significant oversight. \n\nFurthermore, the fact that testing was limited to the Chinese market decreases the reliability of the experimental results, and the lack of detailed information on the analyst reports, which are crucial data, also presents serious issues with the study’s credibility and reproducibility." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024oraclemamba,\ntitle={OracleMamba: A Dynamic Market-Guided and Time State Selection Framework for Robust Stock Prediction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0x8wWloW2O},\nnote={under review}\n}" }, "abstract": { "value": "Stock price prediction is a complex challenge due to the inherent volatility of financial markets and the influence of diverse factors such as macroeconomic conditions, capital flows, and market sentiment. Recent joint stock forecasting models focus on extracting temporal patterns from individual stock price series and combining them to model stock correlations. 
However, these models face two critical limitations: first, in long-term predictions, they retain both informative and excessive states, amplifying noise and increasing complexity; second, in short-term predictions, they prioritize market indices and technical indicators, neglecting the real-time influence of market sentiment, which can drive price movements independent of traditional indicators. While state space models (SSMs) like Mamba improve efficiency and capture long-distance relationships, they still underperform compared to Transformer-based models.\nTo address these challenges, we propose OracleMamba, a novel framework that integrates a dynamic market-guided module for short-term forecasting and a SelectiveMamba module for long-term forecasting. The dynamic market-guided module fuses objective market data and subjective sentiment analysis to enhance short-term prediction accuracy. The SelectiveMamba module efficiently captures both spectral and temporal features using a 3D scan mechanism, which extracts and filters key signals from the time-series data. By integrating spectral features to identify market rhythms and temporal features to track price movements over time, the SelectiveMamba module reduces noise and preserves critical information for long-term forecasts. This framework significantly improves both model efficiency and accuracy, outperforming existing approaches across real-world stock prediction tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "deep learning", "time series" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8b67f605220c50dda3f4af8b04c6306f848bed41.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "OracleMamba: A Dynamic Market-Guided and Time State Selection Framework for Robust Stock Prediction" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0xUEBQV54B
Large Language Monkeys: Scaling Inference Compute with Repeated Sampling
main
Active
Inference-Time Compute;Large Language Models
foundation or frontier models, including LLMs
3;3;5;5
4;4;4;4
2;2;3;3
1;2;2;1
3;4;2;3
4
4
2.5
1.5
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Maybe I missed discussion on this but is there any attempt to also parameterize the coverage scaling law in terms of the model size?\n- Can the authors compare their work with parallel work from Snell et. al. 2024 (Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters)? In particular it would be nice to see discussion on whether the exponentiated power law is also a reasonable to fit scaling curves generated by forms of test-time compute methods, like sequential corrections explored by Snell et al." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents an extensive analysis of coverage on math and coding tasks and plots empirical scaling laws (and fitted ones) on a multiple model families like Gemma, Pythia and Llama. One of the most interesting trends to note is that models from the same family only affected the offset of the log-linear plot, and not the slope. 
This is different from typical scaling laws for pre-training loss seen in prior works.\n- The authors also extend analysis to inference time compute, and show that sometimes drawing multiple samples from a smaller (and less capable model) is more compute-efficient (in terms of FLOPs), compared to a single sample from a larger and more capable model. This analysis can be immediately used to reduce the inference time for models deployed in practice, without any loss in performance, at least when a reliable verifier is available. \n- The analysis in Figure 6,7 is particularly compelling to put more effort in improving verification, since it suggests that most test-time methods for reasoning/safety etc., are not failing due to lack of coverage, but more due to the inaccuracy of the verifier." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies scaling laws for a new axis of compute for LLMs: inference time compute. They empirically find that coverage (pass@N) improves with the number of inference time samples log-linearly and can be modeled with an exponentiated power law. \nIn domains like coding, where automatic verifiers exist, i.e., verification is much easier than generation, the performance improves from 15.9% with one sample to 56% with 250 samples, outperforming the single-sample state-of-the-art of 43%. In domains lacking automatic verifiers, the authors observe that typical approaches for selecting from a set of IID sampled responses, such as majority voting and reward models, reach a performance plateau after several hundred samples and do not effectively scale with an increasing sample budget, and thus the performance on these problems is bottlenecked by the accuracy of the verifier." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the coverage analysis expands on different models and tasks, it does not model the affect of other common parameters of interest like temperature, top-K etc. \n- Some recent works like (Snell et al. 2024, Setlur et al. 2024) show that beam search against an automated verifier improves compute efficiency, with a fixed beam size. Having analysis on this direction of compute use would make the paper stronger.\n- While the authors fit scaling laws for coverage, it would be much more useful to see if scaling laws can also be fit for the setting with trained verifiers, that may not be perfect, accounting for the error in the verifier. Currently it is unclear how the size/data used for the trained verifier affects the compute scaling laws for best-of-n inference. \n- In general, the paper presents interesting results for best-of-N inference, but they are quite narrow and expected. Broadening the laws to other forms of compute usage, or identifying how the \"learnability\" of a verifier affects these laws can help to make the work more complete.\n\n[1] Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (C. Snell, J. Lee, K. Xu, A. Kumar)\n[2] Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning (A. Setlur, C. Nagpal, A. Fisch, X. Geng, J. Eisenstein, R. Agarwal, A. Agarwal, J. Berant, A. Kumar)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "(See above)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* While it isn't surprising that repeated sampling improves performance, the authors have quantified this phenomena in a precise way. In particular, they find a relatively clean scaling relation between the coverage and number of samples.\n* There is good diversity in the models and datasets used, from more \"agentic\" tasks like SWE-bench to standard math/code reasoning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work considers the question of scaling inference compute through repeated sampling. The authors find that repeated sampling can greatly improve the performance on reasoning/code tasks, particularly when there is an external verifier that can be used to check the result. At a high level, this work targets an interesting research question and the methodology looks sound. However, parts of the paper are poorly written/confusing (particularly in explicating the experimental setup + results with/without the verifier), and could benefit from some major revisions. I am willing to raise my score if the authors address my questions/concerns." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The authors should include a few more details about the experimental setup or streamline the existing writing. 
For example, one may have chosen a couple of different approaches on how to incorporate the verifier: the first is an iid setup, where the verifier is just used to pick the final answer out of several attempts. The second is where the model is provided some verifier signal (e.g., is the answer right/wrong?) between attempts, and asked to review its reasoning trace. Both scenarios are interesting, but there were some experimental choices made in the paper that bear some justification. \n* The improved performance with repeated sampling is not meaningful in and of itself; if you have enough attempts, the fraction that you will get right will of course go up. However, it may be useful for the reader to show an example of repeated sampling where the model gets the right answer after a small number of samples -- which part of the reasoning trace gets \"fixed\" between samples typically? Can we characterize the mistakes that the models make better?\n* I am a bit unclear on the relevant baseline here. How should we compare spending the inference compute on sampling (versus, say, doing X-of-thought approaches)?\n* A common assumption in the literature is that verification is easier than generation. Interestingly, this work seems to show that while non-oracle verification can provide some wins, the improvement is overall quite small. It would be interesting to dig into this question a bit more, and study some explicit examples where the reward model (or majority vote) fails. How much of the lack of improvement is due to the inherent noisiness of the reward model? Can the authors characterize the failure mode here a bit more precisely (is non-oracle verification almost as hard as generation in this case)?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper provides a clear, rigorous study of an important and basic topic in LLM inference. The paper tests many different benchmarks and base models and finds that the main results to be robust. A lot of these may exist in prior work, but not in a coherent and systematic way as presented here. \n\n2. The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a systematic evaluation of repeated sampling for LLM generation across models and benchmark problems considering both oracle verifiers (pass@k) and reward models to select answers. The paper finds smooth scaling behavior with number of samples across all settings with oracle verifiers and proposes a parametric for the inference time scaling laws, even showing that repeated sampling from smaller models can outperform FLOP-matched sampling from larger models. Finally, the paper finds that there remains a large gap between the performance of oracle verifiers and reward models or other mechanisms like majority voting." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper needs to be careful about claims of novelty. The paper does discuss some related work in the intro, but could be more clear that existing work has made many of these observations. The main difference is that this paper gives a more systematic analysis. For example, AlphaCode [1] and others have figures that look almost identical to the scaling figures here (Fig 6 in the AlphaCode paper), and [2] has similar figures about scaling samples with reward models. \n\n2. I think the scaling laws presentation is missing the somewhat basic discussion that this is totally the expected behavior for computing pass@k for independent Bernoulli samples, no LLMs needed. For example, try running this numpy code and you will reproduce the same kinds of scaling law curves on a log scale:\n```\nimport numpy as np\n\np = 0.001\nT = 10000\nn_trials = 1000\n\nsamples = np.random.random((n_trials, T)) < p\npasses = np.cumsum(samples, axis=1) >= 1\npass_rate = np.mean(passes, axis=0)\n```\nOf course this is a simplification since there should be a different value of p for each problem in the test set, but the basic idea is that when you take independent samples of a Bernoulli variable, this is the expected behavior. There is likely a closed form for this simple process. It would probably make sense to model this directly rather than fitting a heuristic scaling law.\n\n3. It is not clear how the presented scaling laws could be useful. The usual usefulness of pre-training scaling laws is to predict the optimal model size at a FLOP budget beyond those used for training. This kind of extrapolation is not tested in the paper. Moreover, the scaling laws are different for each specific model/task combination and seem cheap to estimate, so it is not clear how prescriptive they are.\n\n4.
The stated conclusions about verifiers seem to be too strong given that the paper only uses one reward model that is not particularly tailored to the problems studied. The off-the-shelf RM is chosen for performance on reward bench reasoning, but to my understanding this benchmark is mostly about humaneval-style coding and then the RM is being applied on math reasoning problems. For example, [2] trains task-specific RMs and does not observe the same sort of saturation.\n\n5. Some discussion of related work is missing when discussing how to improve verifiers in the future work directions. For example, [3] proposes an objective for reward modeling to allow LMs to evaluate themselves with a linear model on their representations and [4] uses models to evaluate themselves.\n\n[1] Li, Yujia, et al. \"Competition-level code generation with alphacode.\" Science 378.6624 (2022): 1092-1097.\n\n[2] Lightman, Hunter, et al. \"Let's verify step by step.\" arXiv preprint arXiv:2305.20050 (2023).\n\n[3] Li, Kenneth, et al. \"Q-Probe: A Lightweight Approach to Reward Maximization for Language Models.\" arXiv preprint arXiv:2402.14688 (2024).\n\n[4] Yuan, Weizhe, et al. \"Self-rewarding language models.\" arXiv preprint arXiv:2401.10020 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the **weakness** part." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The research problem is interesting and well-motivated. Scaling up training compute has led to remarkable success in deep learning. It is important to consider scaling inference compute.\n- The paper is easy to read.\n- The evaluation covers 5 different benchmark datasets and show consistent trends." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the potential of scaling inference compute of LLMs. The authors show that, by generating repeated samples from language models, we can increase the “coverage”, i.e., the fraction of problems that are solved by any generated sample. The improved coverage directly translates to better performances when an oracle verifier is available. The author also conduct experiments in the setting without oracle verifiers and find that the performances plateau quickly given repeated samples." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- One main claim, **scaling inference compute through repeated sampling leads to large improvements in coverage, seems trivial**. I believe that this fact is generally known in the community. Empirically, it has been observed in prior works like [1,2]. Mathematically, it is a simple consequence of equation 1. It is easy to prove that pass@$k$ monotonically increases with $k$ as long as there exists some $C_i>0$. The scaling curves are simply numerical calculation of equation 1 (correct me if this is not the case!). Thus, the novelty of this analysis seems limited.\n\n- The paper **lacks in-depth analysis on scaling laws**. In Section 3, the proposed formula (equation 2) is simply adopted from the GPT-4 technical report. 
Then the “curve fitting” directly fit the power law to the curve generated by a _known formula_ (i.e., equation 1). I believe that meaningful scaling laws should be distilled from experimental observations. It’s not clear to me what is the insight of fitting some curves when the underlying formula is already known.\n\n- I like the analysis on domains without automatic verifiers since it is a more realistic setting than the experiments based on oracle verifiers. However, this section is very short and **lacks a deeper exploration of verifiers**. The conclusions here are based on experiments with a single existing 8B verifier. Strengthening this analysis with verifiers of varying sizes and other well-known verification approaches (e.g., process supervision [3]) would significantly enrich this section.\n\n- Minor issues: Line 394: $k$ should be italic.\n\n[1] Program Synthesis with Large Language Models\n\n[2] Competition-level code generation with AlphaCode\n\n[3] Let’s Verify Step by Step" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We investigate sampling many solutions from an LLM when solving a problem, showing that this simple approach can be scalable and effective." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024large,\ntitle={Large Language Monkeys: Scaling Inference Compute with Repeated Sampling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0xUEBQV54B},\nnote={under review}\n}" }, "abstract": { "value": "Scaling the amount of compute used to train language models has dramatically improved their capabilities. However, when it comes to inference, we often limit the amount of compute to only one attempt per problem. Here, we explore inference compute as another axis for scaling, using the simple technique of repeatedly sampling candidate solutions from a model. 
Across multiple tasks and models, we observe that coverage – the fraction of problems that are solved by any generated sample – scales with the number of samples over four orders of magnitude. Interestingly, the relationship between coverage and the number of samples is often log-linear and can be modelled with an exponentiated power law, suggesting the existence of inference-time scaling laws. In domains like coding and formal proofs, where answers can be automatically verified, these increases in coverage directly translate into improved performance. When we apply repeated sampling to SWE-bench Lite, the fraction of issues solved with DeepSeek-Coder-V2-Instruct increases from 15.9% with one sample to 56% with 250 samples, outperforming the single-sample state-of-the-art of 43%. In domains without automatic verifiers, we find that common methods for picking from a sample collection (majority voting and reward models) plateau beyond several hundred samples and fail to fully scale with the sample budget." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Inference-Time Compute", "Large Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5bd20618dcc31a447735c3a1795f7adb1fddcf23.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/d700bd8c98f016464214aa41d97389e18837cf67.zip" }, "title": { "value": "Large Language Monkeys: Scaling Inference Compute with Repeated Sampling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0y3hGn1wOk
Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset
main
Active
Machine Unlearning;Vision Language Model;Privacy
alignment, fairness, safety, privacy, and societal considerations
5;5;5;6;6
4;5;4;5;3
2;3;2;3;3
2;3;2;3;3
3;3;3;4;3
5.4
4.2
2.6
2.6
3.2
-0.218218
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please reply my concerns mentioned in the Weaknesses part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper addresses an important ethics problem of AI, i.e., to fulfill the right to be forgotten for VLM, which is underexplored relatively. To my understanding, few works have been done in the literature. \n\n2. The benchmark, together with the evaluation metrics, is validated, especially via the assessment of four baseline unlearning algorithms. The results imply that existing unlearning algorithms are far from being mature when considering both model utility and forget quality. \n\n3. The benchmark is good to foster the community’s further research on developing better unlearning methods for VLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an unlearning benchmark for Vision Language Models (VLMs) under the Right to be Forgotten setting. After defining the VLM unlearning tasks, this benchmark assigns a two-stage evaluation pipeline with a newly proposed Fictitious Facial Identity VQA dataset. 
The proposed benchmark offers a comprehensive evaluation by computing both forget quality and model utility, with further assessment under membership inference attack and adversarial privacy extraction. Another contribution of the work is its evaluation of four baseline unlearning algorithms, which indicates that none of them achieve good unlearning performance considering both model utility and forget quality. In addition, the divergent performance of Preference Optimization with and without membership inference attacks underscores the importance of privacy attacks for robust evaluations. This benchmark is well suited to foster the community’s further research on developing better unlearning methods for VLMs under the setting of Right to be Forgotten." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am confused by the constructed dataset. As described in Line 165~172, 400 faces are sampled, which are then divided into 400 clusters by using the K-means algorithm. How can 400 faces be clustered into 400 clusters? \n\nAnd, to me, a dataset with 400 faces is relatively small for evaluating the unlearning problem. I am also not convinced why only synthetic faces are used for this evaluation. Is there any difference between real faces and synthetic faces for this evaluation purpose?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My major concerns lie in the effectiveness of the proposed benchmark and the experiments. If you can well address these problems, I am happy to improve my rating." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tUnlike unlearning in LLMs, which primarily focuses on forgetting sensitive text information, unlearning in VLMs extends to both images and text. This paper formalizes VLM unlearning as the task of unlearning private image and text-paired information.\n2.\tTo study privacy under the Right to be Forgotten scenario, a two-stage evaluation pipeline with Fictitious Facial Identity VQA dataset is proposed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Facial Identity Unlearning Benchmark (FIUBENCH), a VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms under the Right to be Forgotten setting. Moreover, FIUBENCH further incorporates membership inference attacks and adversarial privacy extraction to robustly evaluate unlearning performance, testing whether the private information is unlearned even under attacks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThis paper proposes FIUBENCH, a VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms under the Right to be Forgotten setting, which is interesting. 
However, the effectiveness of this benchmark is unclear.\n2.\tSince the faces are generated by StyleGAN2, it is necessary to evaluate the distance between the generated face distribution and the real one. From Figure 1, the synthetic face images seem different from real faces. Will this hurt the evaluations of the Vision Language Models?\n3.\tFor the experiments, it would be better to involve more Vision Language Models in the evaluations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please refer to the Strengths and Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper systematically examines forgetting in Vision Language Models and introduces FIUBENCH, a new benchmark for robust evaluation of unlearning algorithms.\nThe paper is well-written and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces FIUBENCH, a benchmark designed to evaluate unlearning algorithms for Vision Language Models (VLMs) under the Right to be Forgotten setting. FIUBENCH includes a Fictitious Facial Identity VQA dataset and a two-stage evaluation pipeline to control information exposure levels.
To handle VLMs’ ability to process semantically similar queries, FIUBENCH incorporates robust evaluation metrics, including membership inference and adversarial privacy attacks. Initial results on four baseline algorithms show limitations in unlearning performance, with trade-offs between model utility and forget accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed benchmark uses a forget set and a retain set to assess the forgetting quality and model utility of unlearning algorithms. However, is this setting appropriate? In my view, the privacy concerns in Vision Language Models are more about forgetting specific sensitive information, such as identity or email, rather than simply forgetting individual samples.\nThe forget set is limited to 5% of the total dataset, comprising only 20 images. Could you explain the rationale behind selecting this specific proportion? How was this number determined?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As mentioned in weakness." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "A novel dataset, the Facial Identity Unlearning Benchmark (FIUBENCH), is constructed, which can support the evaluation of the ‘Right to be Forgotten’ in Vision Language Models. Possible privacy risks are avoided through fictitious facial images. This submission is well written and easy to follow. The protocol provides settings to support different kinds of evaluations, including membership inference attack and adversarial privacy attack, etc." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This submission touches on an important privacy-related topic in vision language models - the forgetting of specified content, e.g. unlearning. To evaluate the performance of unlearning, the authors construct a Facial Identity Unlearning Benchmark (FIUBENCH), with a protocol and several methods as baselines. This is an interesting work. From the reviewer's point of view, this is still a preliminary work considering the dataset size and the methods of evaluation and face generation. However, it is worthwhile to continue to advance it to become a consensus for the community." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The database size is too small to support evaluating the performance of unlearning in the wild. Although the fictitious facial images avoid the privacy risk, the synthetic images bring some flaws, such as artifacts in style, and these could become an unexpected feature for recognition. In addition, the database has taken the action of ‘Filtering out similar faces with K-means’, which leads to an imposed environment for face recognition and makes the evaluation far from a real-world case."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. My first question is whether it is necessary to store individual private information within VLMs in real-world use cases. In practical applications, the best approach is to separate individual private information from the VLM and use retrieval-augmented generation (RAG) techniques to provide appropriate responses. Under such techniques, the Right to be Forgotten can be easily ensured by deleting individual private information from the relevant databases. The authors need to further elaborate on the motivation for their research.\n2. In line 290, it is described that the score range for GPT-Eval is 0-1, but in Table 1 and Table 2, there are scores that exceed this range. This appears to be an error.\n3. Line 366 mentions that “early stopping based on training loss” is employed. Are the results reported in Table 2 based on this early stopping setting? What criteria are used for early stopping? I would like to know more details.\n4. Unlearning algorithms involve a trade-off between model utility and forget quality. It might be necessary to compare the forget quality scores at fixed model utility levels, or vice versa. Alternatively, plotting a Model Utility-Forget Quality curve could be more informative. 
In fact, in Figure 3, representing the effectiveness of the unlearning algorithm with a single point is also unreasonable; a Model Utility-Forget Quality curve would likely be a more appropriate choice.\n5. On certain metrics, the VLM category significantly affects the model utility and forget quality of an unlearning algorithm. Why is this the case? Comparing the performance of the unlearning algorithm with the Retain Model reveals many such instances: (1) The difference between the Retain Model and GA for LLaVA-Phi-mini-3B is 42.4 (93.7 v.s. 50.6), whereas, for LLama-3.2-Vision-Instruct-11B, the difference is 84.5 (88.8 v.s. 4.30). (2) The difference between the Retain Model and KL for LLaVA-Phi-mini-3B is -29.8 (12.3 v.s. 42.1), whereas, for LLama-3.2-Vision-Instruct-11B, the difference is -1.5 (12.2 v.s. 13.7). The significant impact of the VLM category on certain metrics raises the question of whether these metrics can provide robust testing results. Please provide a more detailed discussion on this matter.\n6. In line 464, it is stated that you \"finally decided to fine-tune the parameters from LLM.\" However, from Table 3, it is evident that the MIA of $E_{x_3}$ is the highest among the four fine-tuning strategies. This choice seems to lack sufficient justification." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Based on the \"Right to be Forgotten\" setting, this paper defines a new VLM unlearning scenario closer to real-world use cases. \nThe writing is clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper, based on the Right to be Forgotten setting, defines a VLM unlearning scenario that is closer to real-world use cases. The primary contributions are as follows:\n1. Formalizing the VLM unlearning tasks. 
It emphasizes that VLM unlearning algorithms should focus on sensitive information linked to images rather than the visual attributes themselves.\n2. Defining a two-stage evaluation pipeline with the Fictitious Facial Identity VQA dataset. In the first stage, personal information is injected into the VLM, and in the second stage, the unlearning algorithm is performed.\n3. Providing various metrics for robust evaluation in terms of forget quality and model utility." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The rationale behind the research motivation requires further substantiation (Q1).\n2. Some experimental details are not clearly described or potentially contain errors. (Q2, Q3, Q6). \n3. The analysis of the experimental results does not sufficiently consider the characteristics of unlearning algorithms, namely, the trade-off of model utility and forget quality for unlearning algorithms (Q4). \n4. Some metrics appear to lack robustness when the VLM category changes (Q5)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A new benchmark to robustly evaluate vision language model unlearning under the Right to be Forgotten setting." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024benchmarking,\ntitle={Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0y3hGn1wOk},\nnote={under review}\n}" }, "abstract": { "value": "Machine unlearning has emerged as an effective strategy for forgetting specific information in the training data. However, with the increasing integration of visual data, privacy concerns in Vision Language Models (VLMs) remain underexplored. 
To address this, we introduce Facial Identity Unlearning Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms under the Right to be Forgotten setting. Specifically, we formulate the VLM unlearning task via constructing the Fictitious Facial Identity VQA dataset and apply a two-stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels. In terms of evaluation, since VLM supports various forms of ways to ask questions with the same semantic meaning, we also provide robust evaluation metrics including membership inference attacks and carefully designed adversarial privacy attacks to evaluate the performance of algorithms. Through the evaluation of four baseline VLM unlearning algorithms within FIUBench, we find that all methods remain limited in their unlearning performance, with significant trade-offs between model utility and forget quality. Furthermore, our findings also highlight the importance of privacy attacks for robust evaluations. We hope FIUBench will drive progress in developing more effective VLM unlearning algorithms." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Machine Unlearning", "Vision Language Model", "Privacy" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c898d570441d98718a30a45e987e6079a81c568a.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0yTf37PXcH
Improving Multi-modal Large Language Model through Boosting Vision Capabilities
main
Active
Multi-modal Large Language Model;Boosting Vision Capabilities;Multi-modal Lora;Ladder Adapter
applications to computer vision, audio, language, and other modalities
3;5;5;5;8
4;5;4;4;5
2;3;2;3;3
2;3;1;2;3
2;4;3;3;4
5.2
4.4
2.6
2.2
3.2
0.663403
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In terms of motivation, the paper aims to resolve MLLM visual perception issues such as color recognition, object counting, small object understanding, and spatial location. However, the structural designs of QLadder and MM-LoRA do not seem specifically tailored to address these problems, leading to the impression that performance improvements may stem from data rather than a well-targeted structural design, which makes the explanation of the results appear somewhat forced." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written and structured" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes Arcana, a multi-modal large language model (MLLM) designed to improve visual perception capabilities. Arcana introduces two key techniques: MM-LoRA and QLadder. MM-LoRA enables separate vision and language pathways, reducing modality interference, while QLadder enhances the visual encoder's ability to capture fine-grained visual details.
Extensive experimentation across benchmarks like VQAv2 and TextVQA demonstrates Arcana’s improvement over existing MLLMs in both zero-shot and fine-tuning scenarios, highlighting its capacity for accurate visual reasoning and multi-modal alignment" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The structural innovations of MM-LoRA and QLadder are not sufficiently solid, as the design does not appear to specifically address identified issues such as color recognition, object counting, small object understanding, and spatial location." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What was the main problem that was being addressed? Was it limited data adaptation, was it visual capabilities? If it was just visual capabilities, how does LORA or a few-learnable tokens based adaptation compare against scaling up?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The identified problem of the lack of strong visual capabilities (e.g. detection, localization, color-comprehension etc.) 
in current vision language models is interesting and worth studying\n- It's also interesting to see the need for modality-specific adaptation \n- The paper is easy to comprehend and well supported by various block diagrams" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper seeks to improve the visual understanding capability of vision language models. It introduces two components to enhance the capacity of the VLM: Multimodal (MM)-LoRA and Query Ladder. The MM-LoRA increases the capacity of the decoder by introducing low-rank adaptable matrices separately for the vision and language modalities. The QLadder increases the capacity of the encoder by incorporating learnable tokens at the input. Overall, this approach shows benefits of individual components and also competitive performance across MM/Language benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The summary section (line 519-520) mentions "achieving notable performance improvements even with limited data resources". However, the problem of limited data sources is not convincing. For instance, given that LLMs and Visual Encoders are trained with web-scale data, it's not clear how and why data would be limited. Perhaps the authors want to focus on specific domains (say, medical) where curating data might be difficult due to privacy concerns. But for the kind of problems mentioned in the paper (detection, localization, color-comprehension), it's not clear why data is limited.\n* The paper lacks explanations for why components like LORA and QLadder should improve visual capabilities like detection, localization, color-comprehension. While the attention visualization (line 469-482) demonstrates the effect of these components on visual-token attention, it's not clear why that itself should improve performance.
Further, some of the statements like “promotes cooperation between different modalities” (line 478) and “enriches the visual information” (line 481) are not corroborated with any intuition or experiments. \n* The contributions of the proposed components are unclear. For instance, the benefit of LORA for limited-data adaptation has been well studied in the past (e.g. [1]). The importance of introducing additional visual tokens to visual encoders has also been shown in [2]. In light of the prior works, the paper should more clearly distinguish its technical contributions. \n* Are the benefits of Qladder/MM-LORA consistent across scales? If we increase the scale of LLM and Visual Encoder, will Qladder/MM-LORA still show benefits?\n* Miscellaneous\n * Is the beta-gamma ratio study consistent across a range of LORA ranks (say 64-1024)? Here it was set to 256.\n * Why was LORA applied only to linear layers?\n * In qualitative evaluations (Fig. 5), comparisons should be made with other models to clearly show qualitative gains from using Qladder/MM-LORA\n\n[1] https://arxiv.org/abs/2106.09685\n[2] https://arxiv.org/abs/2309.16588" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "It is well known that MLLMs often exhibit limitations in their visual capabilities, and this work addresses this important issue. Additionally, the paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work aims to enhance the visual capabilities of MLLMs through two main contributions: (1) the introduction of a MM-LoRA for the LLM decoder, and (2) the development of a Query module for visual encoders." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed method leverages additional learning parameters to enhance the visual capabilities of MLLMs. Recent studies (e.g., LLaVA-Next, VILA-1.5, Qwen-VL-2) have shown that simply improving image resolution using various (*any resolution*) techniques is a straightforward and effective way to address this issue. I am skeptical that the proposed method will achieve performance comparable to these AnyRes approaches, particularly on tasks requiring high resolution. The proposed method appears limited by the visual encoder, despite the incorporation of additional LoRA modules.\n- The focus of this study is on the visual capability of MLLMs. However, only one ViT is examined, and there are no ablations on different ViTs. 
This raises doubts about the generalizability of the proposed approach.\n- The improvements from the proposed method should be evaluated based on the ablation studies, rather than relying on Tables 1 and 2, as the model Arcana reported in Tables 1 and 2 is trained on a combination of large datasets (compared to LLaVA-1.5 presented in Tables 1 and 2). However, it is important to note that only a limited selection of four benchmarks is presented in ablations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Was the visual encoder tuning in Table 7 conducted at the pre-training or instruction fine-tuning stage?\n\n2. Have you tried adding LoRA to the visual encoder as well?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The presentation and writing are clear and easy to follow. Figure 1 in the introduction effectively illustrates the background, motivation, and main results of this paper.\n\n2. Tables 1 and 2 show that Arcana achieves better performance than previous MLLM baselines (e.g., LLaVA-1.5, mPLUG-Owl2, etc.) on visual question answering and multi-modal conversation benchmarks.\n\n3. The ablation studies in Tables 4 and 5 clearly validate the effectiveness of MM-LoRA and QLadder.\n\n4.
The ablation study demonstrates that QLadder significantly improves MMVP performance, which requires robust visual capabilities. In Table 6, adding QLadder boosts MMVP performance by 3.6%." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel Multi-modal Large Language Model (MLLM) called Arcana, designed to enhance visual understanding capabilities. It introduces two key components: Multimodal LoRA (MM-LoRA) and the Query Ladder adapter (QLadder). MM-LoRA consists of two parallel LoRAs (one for vision and one for language) to disentangle the modalities and enhance their specialized capabilities. QLadder aggregates intermediate representations from the visual encoder, further boosting the visual abilities of the MLLM. Experimental results demonstrate that Arcana outperforms previous MLLM baselines (e.g., LLaVA-1.5, mPLUG-Owl2, etc.) on visual question answering and multi-modal conversation benchmarks. Notably, the ablation study shows that QLadder significantly improves MMVP performance, which requires strong vision capabilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is a lack of comparison with the latest open-source VLMs: LLaVA-OneVision, Qwen2-VL, InternVL2, etc. While these methods may use higher-quality training data and achieve stronger results, it is essential for readers to be aware of the current SoTAs. You may also explain why direct comparisons may not be feasible. It is acceptable for a research paper to fall short of SoTA results due to data quality differences, but these results should still be presented for context.\n\n2. MMVP is crucial for demonstrating visual capability, but only QLadder is ablated on the MMVP benchmark. Why not conduct an ablation of MM-LoRA on MMVP as well? This would provide stronger support for the claims." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No need for this." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Compared with Q-Former, why does the proposed Q-Ladder not require an additional stage for alignment with the vision encoder?\n\n2. Is X_q in Q-Ladder a set of learnable tokens? Why not use instruction tokens for initialization, as done in Q-Former?\n\n3. In the visualizations, it’s difficult to conclude that (b) demonstrates more attention on vision tokens compared to (a). But interestingly, it mainly appears that (b) has more sink tokens [1]. \n\n4. In Table 4, why are the Q-Ladder results on the 13B model absent?\n\n\n[1] Xiao et al. Efficient Streaming Language Models with Attention Sinks. ICLR, 2024." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is quite easy to follow. People can quickly grasp the core design and the underlying motivation of the proposed two improvements. \n2. The presentation is quite ok for me.\n3. The proposed method has little impact on the efficiency and the memory cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new MLLM named Arcana, mainly offering two improvements for boosting model comprehension of visual information. 
The first one is MM-LoRA, which learns two separate sets of LoRA parameters for vision and text tokens respectively, aiming to decouple the learning spaces of different modalities and better integrate the multi-modal knowledge. The other one is Q-Ladder: compared with Q-Former, it selects the vision features of different layers in ViT as the key/value vectors for different layers of Q-Ladder, instead of only using the last-layer vision token features. The experiments include the evaluation on VQA benchmarks, multi-modal benchmarks, and language benchmarks, with some ablation studies and further explorations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I think the proposed MM-LoRA is greatly inspired by some previous works like P-LoRA [1] in InternLM-XComposer2, visual-only modules in mPLUG-Owl2 [2] and CogVLM [3], which somewhat reduces the novelty of MM-LoRA. The authors should explain the differences between MM-LoRA and these methods, along with some experiments on effectiveness and efficiency to prove the necessity of MM-LoRA. \n\n2. The baselines listed in Tables 1 and 2 are relatively old. I notice Arcana adopts ShareGPT4V data for training, but its benchmark performance seems not as good as the ShareGPT4V 7B model. So it is recommended to include some more advanced baseline MLLMs. \n\n3. It seems that the hyper-parameters introduced by MM-LoRA and Q-Ladder are not so robust and can easily affect the model performance. The authors choose the best hyper-parameters according to the ablation results. So do these hyper-parameters still work for different base LLMs or architectures?\n\n\n[1] Dong et al. InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Models. Arxiv preprint 2401.16420, 2024.\n\n[2] Ye et al. mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration. CVPR, 2024.\n\n[3] Wang et al. 
CogVLM: Visual Expert for Pretrained Language Models. Arxiv preprint 2311.03079, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024improving,\ntitle={Improving Multi-modal Large Language Model through Boosting Vision Capabilities},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0yTf37PXcH},\nnote={under review}\n}" }, "abstract": { "value": "We focus on improving the visual understanding capability for boosting the vision-language models. We propose \\textbf{Arcana}, a multiModal language model, which introduces two crucial techniques. First, we present Multimodal LoRA (MM-LoRA), a module designed to enhance the decoder. Unlike traditional language-driven decoders, MM-LoRA consists of two parallel LoRAs -- one for vision and one for language -- each with its own parameters. This disentangled parameters design allows for more specialized learning in each modality and better integration of multimodal information. Second, we introduce the Query Ladder adapter (QLadder) to improve the visual encoder. QLadder employs a learnable ``\\textit{ladder}'' structure to deeply aggregates the intermediate representations from the frozen pretrained visual encoder (e.g., CLIP image encoder). This enables the model to learn new and informative visual features, as well as remaining the powerful capabilities of the pretrained visual encoder. These techniques collectively enhance Arcana's visual perception power, enabling it to leverage improved visual information for more accurate and contextually relevant outputs across various multimodal scenarios. Extensive experiments and ablation studies demonstrate the effectiveness and generalization capability of our Arcana." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multi-modal Large Language Model", "Boosting Vision Capabilities", "Multi-modal Lora", "Ladder Adapter" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d321b58d3cb09a4db18dd2239034ead871e62561.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/158acdd917037e05b9dba66db4f359f8f4b1424f.pdf" }, "title": { "value": "Improving Multi-modal Large Language Model through Boosting Vision Capabilities" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
0yVP49SDg0
Mamba-HMIL: Hierarchical Multiple Instance Learning via State Space Model for Whole Slide Image Diagnosis
main
Active
Whole Slide Images;Hierarchical Multiple Instance Learning;State Space Model.
applications to physical sciences (physics, chemistry, biology, etc.)
1;3;3;6
5;3;5;5
1;2;1;3
1;2;1;2
1;2;2;3
3.25
4.5
1.75
1.5
2
0.080845
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Overall, the experimental design is comprehensive and thorough. Mamba-HMIL is compared against many, if not all, relevant baselines in the literature, from simple permutation-invariant pooling baselines (ABMIL, CLAM-MB), to Transformer MIL architectures that learn token dependencies (TransMIL, HIPT), to direct extensions of Mamba applied to MIL architectures (Mamba+ABMIL, Mamba+DSMIL, Mamba+CLAM-MB) as well as Mamba-MIL. Evaluation on survival prediction is also appreciated and validates the strength of Mamba-HMIL in learning context-aware features for understanding the tumor microenvironment. Good attention to detail in the survival prediction baselines, evaluating recent SOTA early multimodal fusion architectures like MCAT and PIBD. 
This study also presents good depth of experiments, in not only ablating the components of Mamba-HMIL (MoE, SF/AF), but also validating other components in MIL including different pretrained encoders (ResNet-50 vs UNI) and hierarchical feature extraction (10X, 20X, 10X+20X).\n- While direct extensions of Mamba are not a technical novelty, Mamba-HMIL has good performance gains and consistently achieves the best performance across all tasks. These performance gains are on top of comparisons to MIL architectures with Mamba extensions, which suggests that the architecture modifications are not ad hoc.\n- Interestingly, unimodal Mamba-HMIL outperforms many multimodal fusion comparisons, including Mamba+CLAM-SB, PORPOISE, and MCAT. This is a good finding that should be highlighted more." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents Mamba-HMIL, a hierarchical MIL method leveraging the state space modeling for weakly-supervised tasks in computational pathology. MAMBA-HMIL includes several components: (1) state-space modeling (Mamba), (2) Mixture of Experts (MoE) blocks, and (3) sequence fusion / adaptive fusion blocks. Mamba-HMIL is evaluated on cancer subtyping and survival prediction tasks, and compared with relevant baselines in the literature (ABMIL, CLAM, DSMIL, HIPT, and Mamba extensions to MIL). Ablation experiments for each component is performed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Though method seems strong, many of the components of Mamba-HMIL itself are either not novel or not studied enough to demonstrate why we see strong improvement in performance. I would like to understand how these components \"stabilize SSM training\". It is not clear how SSM training is unstable when direct extension of Mamba works quite well for all MIL architectures across all tasks.\n- I cannot review the code for this submission. 
As many of the contributions are empirical, it would be valuable to validate the contributions of this work empirically before acceptance.\n- Is it possible to visualize token-to-token interactions by Mamba-HMIL, besides attention weights from global attention pooling? How do token interactions change across different Mamba layers?\n- There are several outstanding typos in this work. There is missing-or-extra spacing after Mamba-HMIL on many lines. There are many misspellings like \"Propoise\". In the data description of survival prediction on Line 327, TCGA-LUSC instead of TCGA-BLCA is written. Many citations are missing and marked as ?." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "No questions. The paper needs to be improved significantly." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "This paper addresses an important problem of WSI diagnosis" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents hierarchical multiple instance learning using state space model for whole slide image diagnosis. The method propose to use several components such as hierarchical feature extractor, the state space model, and mixture of experts. 
The experiments are performed on two datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper is poorly written, has no contribution, and is just a combination of different components without any clear motivation and reasoning. I would request the authors to clearly explain the reason behind choosing each component of the approach and re-write the paper for better clarity." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can the authors clarify the motivation behind this work, given that multiple studies have already applied Mamba to WSI analysis?\n\n2. The performance of Mamba-MIL and HIPT on NSCLC is inconsistent with the results reported in the original papers, where both achieved an AUC above 0.95. I did not verify all baseline methods in this paper, but the authors should thoroughly review the experimental results and explain why the baseline methods underperformed significantly compared to the original studies.\n\n3. In Section 4.4, the authors compare Mamba-HMIL to ABMIL with an ImageNet-pretrained encoder to evaluate the effectiveness of the Mamba Block. Is this a fair comparison?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "No notable strengths were identified in this work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present a state-space model-based hierarchical multiple instance learning (Mamba-HMIL) approach for cancer diagnosis using whole slide images (WSI), which involves three stages. In the first stage, hierarchical encoders are used to extract multiscale features. In the second stage, a state-space model (SSM) aggregates features to assess correlations among instances. The third stage introduces an adaptive selection module that filters out disease-negative patches prior to classification. The proposed method was evaluated on four public datasets for subtype classification and survival prediction, where it was benchmarked against existing approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of novelty**: This work appears to be a straightforward combination of existing methods. The hierarchical encoder is similar to DSMIL [1], and the Mamba architecture and mixture of experts (MoE) module are identical to previously established designs. The adaptive selection (AS) module consists only of an MLP layer and a Sigmoid function. Additionally, this is not the first application of Mamba for WSI analysis. Overall, this approach lacks substantial innovation.\n\n2. **Insufficient comparison with existing SSM methods**: The paper does not provide a thorough comparison with similar state-space model-based approaches, such as Vim4Path [2] and MamMIL [3]. \n\n3. **Writing quality**: The manuscript appears to lack careful proofreading. For example, there are confusing question marks on Lines 67 and 297.\n\n[1] Li, Bin, Yin Li, and Kevin W. Eliceiri. 
\"Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning.\" In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 14318-14328. 2021.\n\n[2] Nasiri-Sarvi, Ali, Vincent Quoc-Huy Trinh, Hassan Rivaz, and Mahdi S. Hosseini. \"Vim4Path: Self-Supervised Vision Mamba for Histopathology Images.\" In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 6894-6903. 2024.\n\n[3] Fang, Zijie, Yifeng Wang, Ye Zhang, Zhi Wang, Jian Zhang, Xiangyang Ji, and Yongbing Zhang. \"Mammil: Multiple instance learning for whole slide images with state space models.\" *arXiv preprint arXiv:2403.05160* (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- For the hierarchical feature extractor:\n - How are the features from different resolutions aggregated? Is it addition or concatenation?\n - How does the number of parameters change with the number of resolutions? Is the number of model parameters controlled for when comparing performance?\nGiven this is one of the key contributions, the lack of discussion or detail around this makes it hard to put results in context.\n\n- How are the different sequences of instances (s, s2, .. sn) generated for input to Mamba?\n\n- How is the visualization in Figure 3 generated? 
There is no reference to the figure anywhere in the paper.\n\n- It's unclear what GAS aggregation is; there is no reference or explanation of it." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper discusses the use of state space models for MIL with applications in pathology. While there is some prior work (MambaMIL) that discusses state space models for MIL, the authors expand on it by incorporating multi-resolution feature aggregation, MoE and adaptive selection of sequences.\nThe authors compare against several existing MIL models to evaluate the approach across two common WSI-level benchmark problems: subtyping and survival prediction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes MambaHMIL, a variant of MambaMIL which uses hierarchical feature extraction, Mixture of Experts and adaptive fusion of sequences to improve performance over existing MIL baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper doesn't provide good motivation on why the specific additions/design choices made are relevant in the context of the problems and why they help. It primarily feels like the authors did extensive hyper-parameter selection on the datasets, and it's unclear how these parameters or choices generalize. There isn't much discussion into why such parameters could be optimal for the dataset or the problem, which makes it challenging to come away with clear take-aways.\nFor example, how does Mamba help with performance and how does it compare with attention-based aggregation? 
The authors do show some comparison against HIPT and TransMIL here, but it's hard to compare and put these in context without discussing #params.\nSimilarly, there are some ablations adding Mamba to existing MIL models, but there isn't much discussion on how it's helping and whether the improvements are just due to additional parameters.\nHow does multi-resolution help, and why does adding more resolutions hamper performance?\nWhat are the different sequences, and what is the relevance of their fusion/aggregations?\n\nIt also doesn't give clear details on how some of these are implemented. Some of these are added in the questions section below.\nThe authors mention MoE was added to stabilize training, but it's unclear what the instability was and how it improved stability.\nIt's also unclear how much MoE helped, as there are no ablations with/without MoE.\n\nThe details around the SSM, mixture of experts, and adaptive selection are also not described clearly, with no clear equations to describe the formulation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mambahmil,\ntitle={Mamba-{HMIL}: Hierarchical Multiple Instance Learning via State Space Model for Whole Slide Image Diagnosis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0yVP49SDg0},\nnote={under review}\n}" }, "abstract": { "value": "Multiple instance learning (MIL) has been widely employed for gigapixel whole slide image (WSI) classification. Existing MIL methods, however, fall short of aligning with the clinical practice of pathologists, who typically scrutinize WSIs at varied scales and compare local regions from a global perspective. 
Given that WSIs usually boast immense dimensions peppered with large regions not pertinent to diagnosis, we propose a novel hierarchical multiple instance learning method based on the state space model, called Mamba-HMIL, for WSI classification. Mamba-HMIL consists of three primary modules to enhance the performance of MIL. First, the hierarchical feature extractor harvests features across diverse scales. Second, for capturing the correlation among patches, the state space model demonstrates robust modeling capabilities. A Mixture of Experts (MoE) module is used for stable SSM training. Third, the adaptive selection model strives to reduce redundancies by focusing on disease-positive regions. We evaluate Mamba-HMIL on two WSI subtype datasets (TCGA-NSCLC and TCGA-RCC) and two WSI survival datasets (TCGA-BRCA and TCGA-BLCA). Our results suggest that Mamba-HMIL outperforms existing MIL methods on both WSI tasks. Our code will be made publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b9c0ba4bff85871877abf382a53613dc757358f5.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Mamba-HMIL: Hierarchical Multiple Instance Learning via State Space Model for Whole Slide Image Diagnosis" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]