arxiv_id | title | abstract | link | authors | updated | published |
---|---|---|---|---|---|---|
2312.09597 | Deep Generative Models for Detector Signature Simulation: A Taxonomic
Review | In modern collider experiments, the quest to explore fundamental interactions between elementary particles has reached unparalleled levels of precision. Signatures from particle physics detectors are low-level objects (such as energy depositions or tracks) encoding the physics of collisions (the final state particles of hard scattering interactions). Their complete simulation in a detector is a compute- and storage-intensive task. To address this computational bottleneck in particle physics, alternative approaches have been developed that introduce additional assumptions and trade accuracy for speed. The field has seen a surge of interest in surrogate modeling of detector simulation, fueled by advances in deep generative models. These models aim to generate responses that are statistically identical to the observed data. In this paper, we conduct a comprehensive and exhaustive taxonomic review of the existing literature on the simulation of detector signatures from both methodological and application perspectives. Initially, we formulate the problem of detector signature simulation and discuss its different variations that can be unified. Next, we classify the state-of-the-art methods into five distinct categories based on their underlying model architectures, summarizing their respective generation strategies. Finally, we shed light on the challenges and opportunities that lie ahead in detector signature simulation, setting the stage for future research and development. | http://arxiv.org/abs/2312.09597v2 | [
"Baran Hashemi",
"Claudius Krause"
] | 2024-07-12T22:11:43Z | 2023-12-15T08:27:39Z |
2407.09702 | Investigating the Interplay of Prioritized Replay and Generalization | Experience replay is ubiquitous in reinforcement learning, to reuse past data and improve sample efficiency. Though a variety of smart sampling schemes have been introduced to improve performance, uniform sampling by far remains the most common approach. One exception is Prioritized Experience Replay (PER), where sampling is done proportionally to TD errors, inspired by the success of prioritized sweeping in dynamic programming. The original work on PER showed improvements in Atari, but follow-up results are mixed. In this paper, we investigate several variations on PER, to attempt to understand where and when PER may be useful. Our findings in prediction tasks reveal that while PER can improve value propagation in tabular settings, behavior is significantly different when combined with neural networks. Certain mitigations -- like delaying target network updates to control generalization and using estimates of expected TD errors in PER to avoid chasing stochasticity -- can avoid large spikes in error with PER and neural networks, but nonetheless generally do not outperform uniform replay. In control tasks, none of the prioritized variants consistently outperform uniform replay. | http://arxiv.org/pdf/2407.09702v1 | [
"Parham Mohammad Panahi",
"Andrew Patterson",
"Martha White",
"Adam White"
] | 2024-07-12T21:56:24Z | 2024-07-12T21:56:24Z |
2406.11779 | Compact Proofs of Model Performance via Mechanistic Interpretability | We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving lower bounds on the accuracy of 151 small transformers trained on a Max-of-$K$ task. We create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless noise as a key challenge for using mechanistic interpretability to generate compact proofs on model performance. | http://arxiv.org/pdf/2406.11779v8 | [
"Jason Gross",
"Rajashree Agrawal",
"Thomas Kwa",
"Euan Ong",
"Chun Hei Yip",
"Alex Gibson",
"Soufiane Noubir",
"Lawrence Chan"
] | 2024-07-12T21:51:34Z | 2024-06-17T17:34:25Z |
2407.09698 | RIO-CPD: A Riemannian Geometric Method for Correlation-aware Online
Change Point Detection | The objective of change point detection is to identify abrupt changes at potentially multiple points within a data sequence. This task is particularly challenging in the online setting where various types of changes can occur, including shifts in both the marginal and joint distributions of the data. This paper tackles these challenges by sequentially tracking correlation matrices on the Riemannian geometry, where the geodesic distances accurately capture the development of correlations. We propose Rio-CPD, a non-parametric correlation-aware online change point detection framework that combines the Riemannian geometry of the manifold of symmetric positive definite matrices and the cumulative sum statistic (CUSUM) for detecting change points. Rio-CPD enhances CUSUM by computing the geodesic distance from present observations to the Fréchet mean of previous observations. With careful choice of metrics equipped to the Riemannian geometry, Rio-CPD is simple and computationally efficient. Experimental results on both synthetic and real-world datasets demonstrate that Rio-CPD outperforms existing methods in detection accuracy and efficiency. | http://arxiv.org/pdf/2407.09698v1 | [
"Chengyuan Deng",
"Zhengzhang Chen",
"Xujiang Zhao",
"Haoyu Wang",
"Junxiang Wang",
"Haifeng Chen",
"Jie Gao"
] | 2024-07-12T21:42:51Z | 2024-07-12T21:42:51Z |
2407.09693 | A Mathematical Framework, a Taxonomy of Modeling Paradigms, and a Suite
of Learning Techniques for Neural-Symbolic Systems | The field of Neural-Symbolic (NeSy) systems is growing rapidly. Proposed approaches show great promise in achieving symbiotic unions of neural and symbolic methods. However, each NeSy system differs in fundamental ways. There is a pressing need for a unifying theory to illuminate the commonalities and differences in approaches and enable further progress. In this paper, we introduce Neural-Symbolic Energy-Based Models (NeSy-EBMs), a unifying mathematical framework for discriminative and generative modeling with probabilistic and non-probabilistic NeSy approaches. We utilize NeSy-EBMs to develop a taxonomy of modeling paradigms focusing on a system's neural-symbolic interface and reasoning capabilities. Additionally, we introduce a suite of learning techniques for NeSy-EBMs. Importantly, NeSy-EBMs allow the derivation of general expressions for gradients of prominent learning losses, and we provide four learning approaches that leverage methods from multiple domains, including bilevel and stochastic policy optimization. Finally, we present Neural Probabilistic Soft Logic (NeuPSL), an open-source NeSy-EBM library designed for scalability and expressivity, facilitating real-world application of NeSy systems. Through extensive empirical analysis across multiple datasets, we demonstrate the practical advantages of NeSy-EBMs in various tasks, including image classification, graph node labeling, autonomous vehicle situation awareness, and question answering. | http://arxiv.org/pdf/2407.09693v1 | [
"Charles Dickens",
"Connor Pryor",
"Changyu Gao",
"Alon Albalak",
"Eriq Augustine",
"William Wang",
"Stephen Wright",
"Lise Getoor"
] | 2024-07-12T21:26:21Z | 2024-07-12T21:26:21Z |
2402.15429 | ProTIP: Probabilistic Robustness Verification on Text-to-Image Diffusion
Models against Stochastic Perturbation | Text-to-Image (T2I) Diffusion Models (DMs) have shown impressive abilities in generating high-quality images based on simple text descriptions. However, as is common with many Deep Learning (DL) models, DMs are subject to a lack of robustness. While there are attempts to evaluate the robustness of T2I DMs as a binary or worst-case problem, they cannot answer how robust in general the model is whenever an adversarial example (AE) can be found. In this study, we first introduce a probabilistic notion of T2I DMs' robustness; and then establish an efficient framework, ProTIP, to evaluate it with statistical guarantees. The main challenges stem from: i) the high computational cost of the generation process; and ii) determining if a perturbed input is an AE involves comparing two output distributions, which is fundamentally harder compared to other DL tasks like classification where an AE is identified upon misprediction of labels. To tackle the challenges, we employ sequential analysis with efficacy and futility early stopping rules in the statistical testing for identifying AEs, and adaptive concentration inequalities to dynamically determine the "just-right" number of stochastic perturbations whenever the verification target is met. Empirical experiments validate the effectiveness and efficiency of ProTIP over common T2I DMs. Finally, we demonstrate an application of ProTIP to rank commonly used defence methods. | http://arxiv.org/pdf/2402.15429v2 | [
"Yi Zhang",
"Yun Tang",
"Wenjie Ruan",
"Xiaowei Huang",
"Siddartha Khastgir",
"Paul Jennings",
"Xingyu Zhao"
] | 2024-07-12T21:25:42Z | 2024-02-23T16:48:56Z |
2407.09691 | EVOLVE: Predicting User Evolution and Network Dynamics in Social Media
Using Fine-Tuned GPT-like Model | Social media platforms are extensively used for sharing personal emotions, daily activities, and various life events, keeping people updated with the latest happenings. From the moment a user creates an account, they continually expand their network of friends or followers, freely interacting with others by posting, commenting, and sharing content. Over time, user behavior evolves based on demographic attributes and the networks they establish. In this research, we propose a predictive method to understand how a user evolves on social media throughout their life and to forecast the next stage of their evolution. We fine-tune a GPT-like decoder-only model (we named it E-GPT: Evolution-GPT) to predict the future stages of a user's evolution in online social media. We evaluate the performance of these models and demonstrate how user attributes influence changes within their network by predicting future connections and shifts in user activities on social media, which also addresses other social media challenges such as recommendation systems. | http://arxiv.org/pdf/2407.09691v1 | [
"Ismail Hossain",
"Md Jahangir Alam",
"Sai Puppala",
"Sajedul Talukder"
] | 2024-07-12T21:20:57Z | 2024-07-12T21:20:57Z |
2407.09690 | Private Heterogeneous Federated Learning Without a Trusted Server
Revisited: Error-Optimal and Communication-Efficient Algorithms for Convex
Losses | We revisit the problem of federated learning (FL) with private data from people who do not trust the server or other silos/clients. In this context, every silo (e.g. hospital) has data from several people (e.g. patients) and needs to protect the privacy of each person's data (e.g. health records), even if the server and/or other silos try to uncover this data. Inter-Silo Record-Level Differential Privacy (ISRL-DP) prevents each silo's data from being leaked, by requiring that silo $i$'s communications satisfy item-level differential privacy. Prior work arXiv:2203.06735 characterized the optimal excess risk bounds for ISRL-DP algorithms with homogeneous (i.i.d.) silo data and convex loss functions. However, two important questions were left open: (1) Can the same excess risk bounds be achieved with heterogeneous (non-i.i.d.) silo data? (2) Can the optimal risk bounds be achieved with fewer communication rounds? In this paper, we give positive answers to both questions. We provide novel ISRL-DP FL algorithms that achieve the optimal excess risk bounds in the presence of heterogeneous silo data. Moreover, our algorithms are more communication-efficient than the prior state-of-the-art. For smooth loss functions, our algorithm achieves the optimal excess risk bound and has communication complexity that matches the non-private lower bound. Additionally, our algorithms are more computationally efficient than the previous state-of-the-art. | http://arxiv.org/pdf/2407.09690v1 | [
"Changyu Gao",
"Andrew Lowy",
"Xingyu Zhou",
"Stephen J. Wright"
] | 2024-07-12T21:20:44Z | 2024-07-12T21:20:44Z |
2312.09187 | Vision-Language Models as a Source of Rewards | Building generalist agents that can accomplish many goals in rich open-ended environments is one of the research frontiers for reinforcement learning. A key limiting factor for building generalist agents with RL has been the need for a large number of reward functions for achieving different goals. We investigate the feasibility of using off-the-shelf vision-language models, or VLMs, as sources of rewards for reinforcement learning agents. We show how rewards for visual achievement of a variety of language goals can be derived from the CLIP family of models, and used to train RL agents that can achieve a variety of language goals. We showcase this approach in two distinct visual domains and present a scaling trend showing how larger VLMs lead to more accurate rewards for visual goal achievement, which in turn produces more capable RL agents. | http://arxiv.org/pdf/2312.09187v3 | [
"Kate Baumli",
"Satinder Baveja",
"Feryal Behbahani",
"Harris Chan",
"Gheorghe Comanici",
"Sebastian Flennerhag",
"Maxime Gazeau",
"Kristian Holsheimer",
"Dan Horgan",
"Michael Laskin",
"Clare Lyle",
"Hussain Masoom",
"Kay McKinney",
"Volodymyr Mnih",
"Alexander Neitz",
"Dmitry Nikulin",
"Fabio Pardo",
"Jack Parker-Holder",
"John Quan",
"Tim Rocktäschel",
"Himanshu Sahni",
"Tom Schaul",
"Yannick Schroecker",
"Stephen Spencer",
"Richie Steigerwald",
"Luyu Wang",
"Lei Zhang"
] | 2024-07-12T21:14:32Z | 2023-12-14T18:06:17Z |
2407.09685 | Accelerating the inference of string generation-based chemical reaction
models for industrial applications | Template-free SMILES-to-SMILES translation models for reaction prediction and single-step retrosynthesis are of interest for industrial applications in computer-aided synthesis planning systems due to their state-of-the-art accuracy. However, they suffer from slow inference speed. We present a method to accelerate inference in autoregressive SMILES generators through speculative decoding by copying query string subsequences into target strings in the right places. We apply our method to the molecular transformer implemented in Pytorch Lightning and achieve over 3X faster inference in reaction prediction and single-step retrosynthesis, with no loss in accuracy. | http://arxiv.org/pdf/2407.09685v1 | [
"Mikhail Andronov",
"Natalia Andronova",
"Michael Wand",
"Jürgen Schmidhuber",
"Djork-Arnè Clevert"
] | 2024-07-12T20:55:59Z | 2024-07-12T20:55:59Z |
2309.00770 | Bias and Fairness in Large Language Models: A Survey | Rapid advancements of large language models (LLMs) have enabled the processing, understanding, and generation of human-like text, with increasing integration into systems that touch our social sphere. Despite this success, these models can learn, perpetuate, and amplify harmful social biases. In this paper, we present a comprehensive survey of bias evaluation and mitigation techniques for LLMs. We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing, defining distinct facets of harm and introducing several desiderata to operationalize fairness for LLMs. We then unify the literature by proposing three intuitive taxonomies, two for bias evaluation, namely metrics and datasets, and one for mitigation. Our first taxonomy of metrics for bias evaluation disambiguates the relationship between metrics and evaluation datasets, and organizes metrics by the different levels at which they operate in a model: embeddings, probabilities, and generated text. Our second taxonomy of datasets for bias evaluation categorizes datasets by their structure as counterfactual inputs or prompts, and identifies the targeted harms and social groups; we also release a consolidation of publicly-available datasets for improved access. Our third taxonomy of techniques for bias mitigation classifies methods by their intervention during pre-processing, in-training, intra-processing, and post-processing, with granular subcategories that elucidate research trends. Finally, we identify open problems and challenges for future work. Synthesizing a wide range of recent research, we aim to provide a clear guide of the existing literature that empowers researchers and practitioners to better understand and prevent the propagation of bias in LLMs. | http://arxiv.org/pdf/2309.00770v3 | [
"Isabel O. Gallegos",
"Ryan A. Rossi",
"Joe Barrow",
"Md Mehrab Tanjim",
"Sungchul Kim",
"Franck Dernoncourt",
"Tong Yu",
"Ruiyi Zhang",
"Nesreen K. Ahmed"
] | 2024-07-12T20:29:57Z | 2023-09-02T00:32:55Z |
2403.17993 | Mixing Artificial and Natural Intelligence: From Statistical Mechanics
to AI and Back to Turbulence | The paper reflects on the future role of AI in scientific research, with a special focus on turbulence studies, and examines the evolution of AI, particularly through Diffusion Models rooted in non-equilibrium statistical mechanics. It underscores the significant impact of AI on advancing reduced, Lagrangian models of turbulence through innovative use of deep neural networks. Additionally, the paper reviews various other AI applications in turbulence research and outlines potential challenges and opportunities in the concurrent advancement of AI and statistical hydrodynamics. This discussion sets the stage for a future where AI and turbulence research are intricately intertwined, leading to more profound insights and advancements in both fields. | http://arxiv.org/pdf/2403.17993v3 | [
"Michael Chertkov"
] | 2024-07-12T20:25:55Z | 2024-03-26T12:45:52Z |
2407.09679 | Physics-Informed Learning of Characteristic Trajectories for Smoke
Reconstruction | We delve into the physics-informed neural reconstruction of smoke and obstacles through sparse-view RGB videos, tackling challenges arising from limited observation of complex dynamics. Existing physics-informed neural networks often emphasize short-term physics constraints, leaving the proper preservation of long-term conservation less explored. We introduce Neural Characteristic Trajectory Fields, a novel representation utilizing Eulerian neural fields to implicitly model Lagrangian fluid trajectories. This topology-free, auto-differentiable representation facilitates efficient flow map calculations between arbitrary frames as well as efficient velocity extraction via auto-differentiation. Consequently, it enables end-to-end supervision covering long-term conservation and short-term physics priors. Building on the representation, we propose physics-informed trajectory learning and integration into NeRF-based scene reconstruction. We enable advanced obstacle handling through self-supervised scene decomposition and seamless integrated boundary constraints. Our results showcase the ability to overcome challenges like occlusion uncertainty, density-color ambiguity, and static-dynamic entanglements. Code and sample tests are at https://github.com/19reborn/PICT_smoke. | http://arxiv.org/pdf/2407.09679v1 | [
"Yiming Wang",
"Siyu Tang",
"Mengyu Chu"
] | 2024-07-12T20:19:41Z | 2024-07-12T20:19:41Z |
2401.11694 | Parametric Matrix Models | We present a general class of machine learning algorithms called parametric matrix models. In contrast with most existing machine learning models that imitate the biology of neurons, parametric matrix models use matrix equations that emulate the physics of quantum systems. Similar to how physics problems are usually solved, parametric matrix models learn the governing equations that lead to the desired outputs. Parametric matrix models can be efficiently trained from empirical data, and the equations may use algebraic, differential, or integral relations. While originally designed for scientific computing, we prove that parametric matrix models are universal function approximators that can be applied to general machine learning problems. After introducing the underlying theory, we apply parametric matrix models to a series of different challenges that show their performance for a wide range of problems. For all the challenges tested here, parametric matrix models produce accurate results within an efficient and interpretable computational framework that allows for input feature extrapolation. | http://arxiv.org/pdf/2401.11694v4 | [
"Patrick Cook",
"Danny Jammooa",
"Morten Hjorth-Jensen",
"Daniel D. Lee",
"Dean Lee"
] | 2024-07-12T20:08:17Z | 2024-01-22T05:26:18Z |
2407.09658 | BoBa: Boosting Backdoor Detection through Data Distribution Inference in
Federated Learning | Federated learning, while being a promising approach for collaborative model training, is susceptible to poisoning attacks due to its decentralized nature. Backdoor attacks, in particular, have shown remarkable stealthiness, as they selectively compromise predictions for inputs containing triggers. Previous endeavors to detect and mitigate such attacks are based on the Independent and Identically Distributed (IID) data assumption where benign model updates exhibit high-level similarity in multiple feature spaces due to IID data. Thus, outliers are detected as backdoor attacks. Nevertheless, non-IID data presents substantial challenges in backdoor attack detection, as the data variety introduces variance among benign models, making outlier detection-based mechanisms less effective. We propose a novel distribution-aware anomaly detection mechanism, BoBa, to address this problem. In order to differentiate outliers arising from data variety versus backdoor attack, we propose to break down the problem into two steps: clustering clients utilizing their data distribution followed by a voting-based detection. Based on the intuition that clustering and subsequent backdoor detection can drastically benefit from knowing client data distributions, we propose a novel data distribution inference mechanism. To improve detection robustness, we introduce an overlapping clustering method, where each client is associated with multiple clusters, ensuring that the trustworthiness of a model update is assessed collectively by multiple clusters rather than a single cluster. Through extensive evaluations, we demonstrate that BoBa can reduce the attack success rate to lower than 0.001 while maintaining high main task accuracy across various attack strategies and experimental settings. | http://arxiv.org/pdf/2407.09658v1 | [
"Ning Wang",
"Shanghao Shi",
"Yang Xiao",
"Yimin Chen",
"Y. Thomas Hou",
"Wenjing Lou"
] | 2024-07-12T19:38:42Z | 2024-07-12T19:38:42Z |
2407.08693 | Robotic Control via Embodied Chain-of-Thought Reasoning | A key limitation of learned robot control policies is their inability to generalize outside their training data. Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models as the backbone of learned robot policies can substantially improve their robustness and generalization ability. Yet, one of the most exciting capabilities of large vision-language models in other domains is their ability to reason iteratively through complex problems. Can that same capability be brought into robotics to allow policies to improve performance by reasoning about a given task before acting? Naive use of "chain-of-thought" (CoT) style prompting is significantly less effective with standard VLAs because of the relatively simple training examples that are available to them. Additionally, purely semantic reasoning about sub-tasks, as is common in regular CoT, is insufficient for robot policies that need to ground their reasoning in sensory observations and the robot state. To this end, we introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we train VLAs to perform multiple steps of reasoning about plans, sub-tasks, motions, and visually grounded features like object bounding boxes and end effector positions, before predicting the robot action. We design a scalable pipeline for generating synthetic training data for ECoT on large robot datasets. We demonstrate that ECoT increases the absolute success rate of OpenVLA, the current strongest open-source VLA policy, by 28% across challenging generalization tasks, without any additional robot training data. Additionally, ECoT makes it easier for humans to interpret a policy's failures and correct its behavior using natural language. | http://arxiv.org/pdf/2407.08693v2 | [
"Michał Zawalski",
"William Chen",
"Karl Pertsch",
"Oier Mees",
"Chelsea Finn",
"Sergey Levine"
] | 2024-07-12T19:19:34Z | 2024-07-11T17:31:01Z |
2405.18267 | CT-based brain ventricle segmentation via diffusion Schrödinger Bridge
without target domain ground truths | Efficient and accurate brain ventricle segmentation from clinical CT scans is critical for emergency surgeries like ventriculostomy. With the challenges in poor soft tissue contrast and a scarcity of well-annotated databases for clinical brain CTs, we introduce a novel uncertainty-aware ventricle segmentation technique without the need of CT segmentation ground truths by leveraging diffusion-model-based domain adaptation. Specifically, our method employs the diffusion Schrödinger Bridge and an attention recurrent residual U-Net to capitalize on unpaired CT and MRI scans to derive automatic CT segmentation from those of the MRIs, which are more accessible. Importantly, we propose an end-to-end, joint training framework of image translation and segmentation tasks, and demonstrate its benefit over training individual tasks separately. By comparing the proposed method against similar setups using two different GAN models for domain adaptation (CycleGAN and CUT), we also reveal the advantage of diffusion models towards improved segmentation and image translation quality. With a Dice score of 0.78±0.27, our proposed method outperformed the compared methods, including SynSeg-Net, while providing intuitive uncertainty measures to further facilitate quality control of the automatic segmentation outcomes. The implementation of our proposed method is available at: https://github.com/HealthX-Lab/DiffusionSynCTSeg. | http://arxiv.org/pdf/2405.18267v2 | [
"Reihaneh Teimouri",
"Marta Kersten-Oertel",
"Yiming Xiao"
] | 2024-07-12T19:17:42Z | 2024-05-28T15:17:58Z |
2407.09645 | Hamilton-Jacobi Reachability in Reinforcement Learning: A Survey | Recent literature has proposed approaches that learn control policies with high performance while maintaining safety guarantees. Synthesizing Hamilton-Jacobi (HJ) reachable sets has become an effective tool for verifying safety and supervising the training of reinforcement learning-based control policies for complex, high-dimensional systems. Previously, HJ reachability was limited to verifying low-dimensional dynamical systems -- this is because the computational complexity of the dynamic programming approach it relied on grows exponentially with the number of system states. To address this limitation, in recent years, there have been methods that compute the reachability value function simultaneously with learning control policies to scale HJ reachability analysis while still maintaining a reliable estimate of the true reachable set. These HJ reachability approximations are used to improve the safety, and even reward performance, of learned control policies and can solve challenging tasks such as those with dynamic obstacles and/or with lidar-based or vision-based observations. In this survey paper, we review the recent developments in the field of HJ reachability estimation in reinforcement learning that would provide a foundational basis for further research into reliability in high-dimensional systems. | http://arxiv.org/pdf/2407.09645v1 | [
"Milan Ganai",
"Sicun Gao",
"Sylvia Herbert"
] | 2024-07-12T19:04:39Z | 2024-07-12T19:04:39Z |
2407.09642 | Seq-to-Final: A Benchmark for Tuning from Sequential Distributions to a
Final Time Point | Distribution shift over time occurs in many settings. Leveraging historical data is necessary to learn a model for the last time point when limited data is available in the final period, yet few methods have been developed specifically for this purpose. In this work, we construct a benchmark with different sequences of synthetic shifts to evaluate the effectiveness of 3 classes of methods that 1) learn from all data without adapting to the final period, 2) learn from historical data with no regard to the sequential nature and then adapt to the final period, and 3) leverage the sequential nature of historical data when tailoring a model to the final period. We call this benchmark Seq-to-Final to highlight the focus on using a sequence of time periods to learn a model for the final time point. Our synthetic benchmark allows users to construct sequences with different types of shift and compare different methods. We focus on image classification tasks using CIFAR-10 and CIFAR-100 as the base images for the synthetic sequences. We also evaluate the same methods on the Portraits dataset to explore the relevance to real-world shifts over time. Finally, we create a visualization to contrast the initializations and updates from different methods at the final time step. Our results suggest that, for the sequences in our benchmark, methods that disregard the sequential structure and adapt to the final time point tend to perform well. The approaches we evaluate that leverage the sequential nature do not offer any improvement. We hope that this benchmark will inspire the development of new algorithms that are better at leveraging sequential historical data or a deeper understanding of why methods that disregard the sequential nature are able to perform well. | http://arxiv.org/pdf/2407.09642v1 | [
"Christina X Ji",
"Ahmed M Alaa",
"David Sontag"
] | 2024-07-12T19:03:42Z | 2024-07-12T19:03:42Z |
2407.09632 | Granger Causality in Extremes | We introduce a rigorous mathematical framework for Granger causality in extremes, designed to identify causal links from extreme events in time series. Granger causality plays a pivotal role in uncovering directional relationships among time-varying variables. While this notion gains heightened importance during extreme and highly volatile periods, state-of-the-art methods primarily focus on causality within the body of the distribution, often overlooking causal mechanisms that manifest only during extreme events. Our framework is designed to infer causality mainly from extreme events by leveraging the causal tail coefficient. We establish equivalences between causality in extremes and other causal concepts, including (classical) Granger causality, Sims causality, and structural causality. We prove other key properties of Granger causality in extremes and show that the framework is especially helpful under the presence of hidden confounders. We also propose a novel inference method for detecting the presence of Granger causality in extremes from data. Our method is model-free, can handle non-linear and high-dimensional time series, outperforms current state-of-the-art methods in all considered setups, both in performance and speed, and was found to uncover coherent effects when applied to financial and extreme weather observations. | http://arxiv.org/pdf/2407.09632v1 | [
"Juraj Bodik",
"Olivier Pasche"
] | 2024-07-12T18:41:07Z | 2024-07-12T18:41:07Z |
2407.09628 | Accelerating Electron Dynamics Simulations through Machine Learned Time
Propagators | Time-dependent density functional theory (TDDFT) is a widely used method to investigate electron dynamics under various external perturbations such as laser fields. In this work, we present a novel approach to accelerate real time TDDFT based electron dynamics simulations using autoregressive neural operators as time-propagators for the electron density. By leveraging physics-informed constraints and high-resolution training data, our model achieves superior accuracy and computational speed compared to traditional numerical solvers. We demonstrate the effectiveness of our model on a class of one-dimensional diatomic molecules. This method has potential in enabling real-time, on-the-fly modeling of laser-irradiated molecules and materials with varying experimental parameters. | http://arxiv.org/pdf/2407.09628v1 | [
"Karan Shah",
"Attila Cangi"
] | 2024-07-12T18:29:48Z | 2024-07-12T18:29:48Z |
2406.18120 | ArzEn-LLM: Code-Switched Egyptian Arabic-English Translation and Speech
Recognition Using LLMs | Motivated by the widespread increase in the phenomenon of code-switching between Egyptian Arabic and English in recent times, this paper explores the intricacies of machine translation (MT) and automatic speech recognition (ASR) systems, focusing on translating code-switched Egyptian Arabic-English to either English or Egyptian Arabic. Our goal is to present the methodologies employed in developing these systems, utilizing large language models such as LLama and Gemma. In the field of ASR, we explore the utilization of the Whisper model for code-switched Egyptian Arabic recognition, detailing our experimental procedures including data preprocessing and training techniques. Through the implementation of a consecutive speech-to-text translation system that integrates ASR with MT, we aim to overcome challenges posed by limited resources and the unique characteristics of the Egyptian Arabic dialect. Evaluation against established metrics showcases promising results, with our methodologies yielding a significant improvement of 56% in English translation over the state-of-the-art and 9.3% in Arabic translation. Since code-switching is deeply inherent in spoken languages, it is crucial that ASR systems can effectively handle this phenomenon. This capability is crucial for enabling seamless interaction in various domains, including business negotiations, cultural exchanges, and academic discourse. Our models and code are available as open-source resources. Code: http://github.com/ahmedheakl/arazn-llm, Models: http://huggingface.co/collections/ahmedheakl/arazn-llm-662ceaf12777656607b9524e. | http://arxiv.org/pdf/2406.18120v2 | [
"Ahmed Heakl",
"Youssef Zaghloul",
"Mennatullah Ali",
"Rania Hossam",
"Walid Gomaa"
] | 2024-07-12T18:22:26Z | 2024-06-26T07:19:51Z |
2406.18125 | ResumeAtlas: Revisiting Resume Classification with Large-Scale Datasets
and Large Language Models | The increasing reliance on online recruitment platforms coupled with the adoption of AI technologies has highlighted the critical need for efficient resume classification methods. However, challenges such as small datasets, lack of standardized resume templates, and privacy concerns hinder the accuracy and effectiveness of existing classification models. In this work, we address these challenges by presenting a comprehensive approach to resume classification. We curated a large-scale dataset of 13,389 resumes from diverse sources and employed Large Language Models (LLMs) such as BERT and Gemma1.1 2B for classification. Our results demonstrate significant improvements over traditional machine learning approaches, with our best model achieving a top-1 accuracy of 92% and a top-5 accuracy of 97.5%. These findings underscore the importance of dataset quality and advanced model architectures in enhancing the accuracy and robustness of resume classification systems, thus advancing the field of online recruitment practices. | http://arxiv.org/pdf/2406.18125v2 | [
"Ahmed Heakl",
"Youssef Mohamed",
"Noran Mohamed",
"Aly Elsharkawy",
"Ahmed Zaky"
] | 2024-07-12T18:19:28Z | 2024-06-26T07:25:18Z |
2407.09618 | The Heterophilic Graph Learning Handbook: Benchmarks, Models,
Theoretical Analysis, Applications and Challenges | Homophily principle, i.e., nodes with the same labels or similar attributes are more likely to be connected, has been commonly believed to be the main reason for the superiority of Graph Neural Networks (GNNs) over traditional Neural Networks (NNs) on graph-structured data, especially on node-level tasks. However, recent work has identified a non-trivial set of datasets where GNN's performance compared to the NN's is not satisfactory. Heterophily, i.e. low homophily, has been considered the main cause of this empirical observation. People have begun to revisit and re-evaluate most existing graph models, including graph transformer and its variants, in the heterophily scenario across various kinds of graphs, e.g. heterogeneous graphs, temporal graphs and hypergraphs. Moreover, numerous graph-related applications are found to be closely related to the heterophily problem. In the past few years, considerable effort has been devoted to studying and addressing the heterophily issue. In this survey, we provide a comprehensive review of the latest progress on heterophilic graph learning, including an extensive summary of benchmark datasets and evaluation of homophily metrics on synthetic graphs, meticulous classification of the most updated supervised and unsupervised learning methods, thorough digestion of the theoretical analysis on homophily/heterophily, and broad exploration of the heterophily-related applications. Notably, through detailed experiments, we are the first to categorize benchmark heterophilic datasets into three sub-categories: malignant, benign and ambiguous heterophily. Malignant and ambiguous datasets are identified as the real challenging datasets to test the effectiveness of new models on the heterophily challenge. Finally, we propose several challenges and future directions for heterophilic graph representation learning. | http://arxiv.org/pdf/2407.09618v1 | [
"Sitao Luan",
"Chenqing Hua",
"Qincheng Lu",
"Liheng Ma",
"Lirong Wu",
"Xinyu Wang",
"Minkai Xu",
"Xiao-Wen Chang",
"Doina Precup",
"Rex Ying",
"Stan Z. Li",
"Jian Tang",
"Guy Wolf",
"Stefanie Jegelka"
] | 2024-07-12T18:04:32Z | 2024-07-12T18:04:32Z |
2309.15091 | VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided
Planning | Recent text-to-video (T2V) generation methods have seen significant advancements. However, the majority of these works focus on producing short video clips of a single event (i.e., single-scene videos). Meanwhile, recent large language models (LLMs) have demonstrated their capability in generating layouts and programs to control downstream visual modules. This prompts an important question: can we leverage the knowledge embedded in these LLMs for temporally consistent long video generation? In this paper, we propose VideoDirectorGPT, a novel framework for consistent multi-scene video generation that uses the knowledge of LLMs for video content planning and grounded video generation. Specifically, given a single text prompt, we first ask our video planner LLM (GPT-4) to expand it into a 'video plan', which includes the scene descriptions, the entities with their respective layouts, the background for each scene, and consistency groupings of the entities. Next, guided by this video plan, our video generator, named Layout2Vid, has explicit control over spatial layouts and can maintain temporal consistency of entities across multiple scenes, while being trained only with image-level annotations. Our experiments demonstrate that our proposed VideoDirectorGPT framework substantially improves layout and movement control in both single- and multi-scene video generation and can generate multi-scene videos with consistency, while achieving competitive performance with SOTAs in open-domain single-scene T2V generation. Detailed ablation studies, including dynamic adjustment of layout control strength with an LLM and video generation with user-provided images, confirm the effectiveness of each component of our framework and its future potential. | http://arxiv.org/pdf/2309.15091v2 | [
"Han Lin",
"Abhay Zala",
"Jaemin Cho",
"Mohit Bansal"
] | 2024-07-12T18:03:29Z | 2023-09-26T17:36:26Z |
2310.15047 | Implicit meta-learning may lead language models to trust more reliable
sources | We demonstrate that LLMs may learn indicators of document usefulness and modulate their updates accordingly. We introduce random strings ("tags") as indicators of usefulness in a synthetic fine-tuning dataset. Fine-tuning on this dataset leads to implicit meta-learning (IML): in further fine-tuning, the model updates to make more use of text that is tagged as useful. We perform a thorough empirical investigation of this phenomenon, finding (among other things) that (i) it occurs in both pretrained LLMs and those trained from scratch, as well as on a vision task, and (ii) larger models and smaller batch sizes tend to give more IML. We also use probing to examine how IML changes the way models store knowledge in their parameters. Finally, we reflect on what our results might imply about capabilities, risks, and controllability of future AI systems. Our code can be found at https://github.com/krasheninnikov/internalization. | http://arxiv.org/pdf/2310.15047v4 | [
"Dmitrii Krasheninnikov",
"Egor Krasheninnikov",
"Bruno Mlodozeniec",
"Tegan Maharaj",
"David Krueger"
] | 2024-07-12T18:03:25Z | 2023-10-23T15:50:08Z |
2407.09602 | Real-time gravitational-wave inference for binary neutron stars using
machine learning | Mergers of binary neutron stars (BNSs) emit signals in both the gravitational-wave (GW) and electromagnetic (EM) spectra. Famously, the 2017 multi-messenger observation of GW170817 led to scientific discoveries across cosmology, nuclear physics, and gravity. Central to these results were the sky localization and distance obtained from GW data, which, in the case of GW170817, helped to identify the associated EM transient, AT 2017gfo, 11 hours after the GW signal. Fast analysis of GW data is critical for directing time-sensitive EM observations; however, due to challenges arising from the length and complexity of signals, it is often necessary to make approximations that sacrifice accuracy. Here, we develop a machine learning approach that performs complete BNS inference in just one second without making any such approximations. This is enabled by a new method for explicit integration of physical domain knowledge into neural networks. Our approach enhances multi-messenger observations by providing (i) accurate localization even before the merger; (ii) improved localization precision by ~30% compared to approximate low-latency methods; and (iii) detailed information on luminosity distance, inclination, and masses, which can be used to prioritize expensive telescope time. Additionally, the flexibility and reduced cost of our method open new opportunities for equation-of-state and waveform systematics studies. Finally, we demonstrate that our method scales to extremely long signals, up to an hour in length, thus serving as a blueprint for data analysis for next-generation ground- and space-based detectors. | http://arxiv.org/pdf/2407.09602v1 | [
"Maximilian Dax",
"Stephen R. Green",
"Jonathan Gair",
"Nihar Gupte",
"Michael Pürrer",
"Vivien Raymond",
"Jonas Wildberger",
"Jakob H. Macke",
"Alessandra Buonanno",
"Bernhard Schölkopf"
] | 2024-07-12T18:00:02Z | 2024-07-12T18:00:02Z |
2407.09475 | Adaptive Prediction Ensemble: Improving Out-of-Distribution
Generalization of Motion Forecasting | Deep learning-based trajectory prediction models for autonomous driving often struggle with generalization to out-of-distribution (OOD) scenarios, sometimes performing worse than simple rule-based models. To address this limitation, we propose a novel framework, Adaptive Prediction Ensemble (APE), which integrates deep learning and rule-based prediction experts. A learned routing function, trained concurrently with the deep learning model, dynamically selects the most reliable prediction based on the input scenario. Our experiments on large-scale datasets, including Waymo Open Motion Dataset (WOMD) and Argoverse, demonstrate improvement in zero-shot generalization across datasets. We show that our method outperforms individual prediction models and other variants, particularly in long-horizon prediction and scenarios with a high proportion of OOD data. This work highlights the potential of hybrid approaches for robust and generalizable motion prediction in autonomous driving. | http://arxiv.org/pdf/2407.09475v1 | [
"Jinning Li",
"Jiachen Li",
"Sangjae Bae",
"David Isele"
] | 2024-07-12T17:57:00Z | 2024-07-12T17:57:00Z |
2407.09468 | Beyond Euclid: An Illustrated Guide to Modern Machine Learning with
Geometric, Topological, and Algebraic Structures | The enduring legacy of Euclidean geometry underpins classical machine learning, which, for decades, has been primarily developed for data lying in Euclidean space. Yet, modern machine learning increasingly encounters richly structured data that is inherently non-Euclidean. This data can exhibit intricate geometric, topological and algebraic structure: from the geometry of the curvature of space-time, to topologically complex interactions between neurons in the brain, to the algebraic transformations describing symmetries of physical systems. Extracting knowledge from such non-Euclidean data necessitates a broader mathematical perspective. Echoing the 19th-century revolutions that gave rise to non-Euclidean geometry, an emerging line of research is redefining modern machine learning with non-Euclidean structures. Its goal: generalizing classical methods to unconventional data types with geometry, topology, and algebra. In this review, we provide an accessible gateway to this fast-growing field and propose a graphical taxonomy that integrates recent advances into an intuitive unified framework. We subsequently extract insights into current challenges and highlight exciting opportunities for future development in this field. | http://arxiv.org/pdf/2407.09468v1 | [
"Sophia Sanborn",
"Johan Mathe",
"Mathilde Papillon",
"Domas Buracas",
"Hansen J Lillemark",
"Christian Shewmake",
"Abby Bertics",
"Xavier Pennec",
"Nina Miolane"
] | 2024-07-12T17:48:36Z | 2024-07-12T17:48:36Z |
2403.12014 | EnvGen: Generating and Adapting Environments via LLMs for Training
Embodied Agents | Recent SOTA approaches for embodied learning via interaction directly employ large language models (LLMs) as agents to determine the next steps in an environment. Due to their world knowledge and reasoning capabilities, LLM agents achieve stronger performance than previous smaller agents based on reinforcement learning (RL); however, frequently calling LLMs is slow and expensive. Instead of directly employing LLMs as agents, can we use LLMs' reasoning capabilities to adaptively create training environments to help smaller RL agents learn useful skills that they are weak at? We propose EnvGen, a novel framework to address this question. We first prompt an LLM to generate training environments by giving it the task description and simulator objectives that the agents should learn and then asking it to generate a set of environment configurations (e.g., different terrains, items initially given to agents, etc.). Next, we train a small RL agent in a mixture of the original and LLM-generated environments. Then, we enable the LLM to continuously adapt the generated environments to progressively improve the skills that the agent is weak at, by providing feedback to the LLM in the form of the agent's performance. We demonstrate the usefulness of EnvGen with comprehensive experiments in Crafter and Heist environments. We find that a small RL agent trained with EnvGen can outperform SOTA methods, including a GPT-4 agent, and learns long-horizon tasks significantly faster. We also show that using an LLM to adapt environments dynamically outperforms curriculum learning approaches and how the environments are adapted to help improve RL agents' weaker skills over time. Additionally, EnvGen is substantially more efficient as it only uses a small number of LLM calls (e.g., 4 in total), whereas LLM agents require thousands of calls. Lastly, we present detailed ablation studies for EnvGen design choices. | http://arxiv.org/pdf/2403.12014v2 | [
"Abhay Zala",
"Jaemin Cho",
"Han Lin",
"Jaehong Yoon",
"Mohit Bansal"
] | 2024-07-12T17:39:19Z | 2024-03-18T17:51:16Z |
2407.09453 | Weight Block Sparsity: Training, Compilation, and AI Engine Accelerators | Nowadays, increasingly larger Deep Neural Networks (DNNs) are being developed, trained, and utilized. These networks require significant computational resources, putting a strain on both advanced and limited devices. Our solution is to implement weight block sparsity, which is a structured sparsity that is friendly to hardware. By zeroing certain sections of the convolution and fully connected layer parameters of pre-trained DNN models, we can efficiently speed up the DNN's inference process. This results in a smaller memory footprint, faster communication, and fewer operations. Our work presents a vertical system that allows for the training of convolution and matrix multiplication weights to exploit 8x8 block sparsity on a single GPU within a reasonable amount of time. Compilers recognize this sparsity and use it for both data compaction and computation splitting into threads. Blocks like these take full advantage of both spatial and temporal locality, paving the way for fast vector operations and memory reuse. By using this system on a Resnet50 model, we were able to reduce the weight by half with minimal accuracy loss, resulting in a two-times faster inference speed. We will present performance estimates using accurate and complete code generation for AIE2 configuration sets (AMD Versal FPGAs) with Resnet50, Inception V3, and VGG16 to demonstrate the necessary synergy between hardware overlay designs and software stacks for compiling and executing machine learning applications. | http://arxiv.org/pdf/2407.09453v1 | [
"Paolo D'Alberto",
"Taehee Jeong",
"Akshai Jain",
"Shreyas Manjunath",
"Mrinal Sarmah",
"Samuel Hsu",
"Yaswanth Raparti",
"Nitesh Pipralia"
] | 2024-07-12T17:37:49Z | 2024-07-12T17:37:49Z |
2311.09184 | Benchmarking Generation and Evaluation Capabilities of Large Language
Models for Instruction Controllable Summarization | While large language models (LLMs) can already achieve strong performance on standard generic summarization benchmarks, their performance on more complex summarization task settings is less studied. Therefore, we benchmark LLMs on instruction controllable text summarization, where the model input consists of both a source article and a natural language requirement for desired summary characteristics. To this end, we curate an evaluation-only dataset for this task setting and conduct human evaluations of five LLM-based systems to assess their instruction-following capabilities in controllable summarization. We then benchmark LLM-based automatic evaluation for this task with 4 different evaluation protocols and 11 LLMs, resulting in 40 evaluation methods. Our study reveals that instruction controllable text summarization remains a challenging task for LLMs, since (1) all LLMs evaluated still make factual and other types of errors in their summaries; (2) no LLM-based evaluation methods can achieve a strong alignment with human annotators when judging the quality of candidate summaries; (3) different LLMs show large performance gaps in summary generation and evaluation capabilities. We make our collected benchmark InstruSum publicly available to facilitate future research in this direction. | http://arxiv.org/pdf/2311.09184v2 | [
"Yixin Liu",
"Alexander R. Fabbri",
"Jiawen Chen",
"Yilun Zhao",
"Simeng Han",
"Shafiq Joty",
"Pengfei Liu",
"Dragomir Radev",
"Chien-Sheng Wu",
"Arman Cohan"
] | 2024-07-12T17:35:18Z | 2023-11-15T18:25:26Z |
2407.09450 | Human-like Episodic Memory for Infinite Context LLMs | Large language models (LLMs) have shown remarkable capabilities, but still struggle with processing extensive contexts, limiting their ability to maintain coherence and accuracy over long sequences. In contrast, the human brain excels at organising and retrieving episodic experiences across vast temporal scales, spanning a lifetime. In this work, we introduce EM-LLM, a novel approach that integrates key aspects of human episodic memory and event cognition into LLMs, enabling them to effectively handle practically infinite context lengths while maintaining computational efficiency. EM-LLM organises sequences of tokens into coherent episodic events using a combination of Bayesian surprise and graph-theoretic boundary refinement in an on-line fashion. When needed, these events are retrieved through a two-stage memory process, combining similarity-based and temporally contiguous retrieval for efficient and human-like access to relevant information. Experiments on the LongBench dataset demonstrate EM-LLM's superior performance, outperforming the state-of-the-art InfLLM model with an overall relative improvement of 4.3% across various tasks, including a 33% improvement on the PassageRetrieval task. Furthermore, our analysis reveals strong correlations between EM-LLM's event segmentation and human-perceived events, suggesting a bridge between this artificial system and its biological counterpart. This work not only advances LLM capabilities in processing extended contexts but also provides a computational framework for exploring human memory mechanisms, opening new avenues for interdisciplinary research in AI and cognitive science. | http://arxiv.org/pdf/2407.09450v1 | [
"Zafeirios Fountas",
"Martin A Benfeghoul",
"Adnan Oomerjee",
"Fenia Christopoulou",
"Gerasimos Lampouras",
"Haitham Bou-Ammar",
"Jun Wang"
] | 2024-07-12T17:34:03Z | 2024-07-12T17:34:03Z |
2407.09441 | The $\mu\mathcal{G}$ Language for Programming Graph Neural Networks | Graph neural networks form a class of deep learning architectures specifically designed to work with graph-structured data. As such, they share the inherent limitations and problems of deep learning, especially regarding the issues of explainability and trustworthiness. We propose $\mu\mathcal{G}$, an original domain-specific language for the specification of graph neural networks that aims to overcome these issues. The language's syntax is introduced, and its meaning is rigorously defined by a denotational semantics. An equivalent characterization in the form of an operational semantics is also provided and, together with a type system, is used to prove the type soundness of $\mu\mathcal{G}$. We show how $\mu\mathcal{G}$ programs can be represented in a more user-friendly graphical visualization, and provide examples of its generality by showing how it can be used to define some of the most popular graph neural network models, or to develop any custom graph processing application. | http://arxiv.org/pdf/2407.09441v1 | [
"Matteo Belenchia",
"Flavio Corradini",
"Michela Quadrini",
"Michele Loreti"
] | 2024-07-12T17:27:43Z | 2024-07-12T17:27:43Z |
2407.09590 | Diversifying the Expert Knowledge for Task-Agnostic Pruning in Sparse
Mixture-of-Experts | By increasing model parameters but activating them sparsely when performing a task, the use of Mixture-of-Experts (MoE) architecture significantly improves the performance of Large Language Models (LLMs) without increasing the inference cost. However, the memory consumption due to the growing number of experts presents a challenge to the deployment of these models in many real-world settings. Our empirical study reveals that some experts encode redundant knowledge during pre-training. We thus propose a method of grouping and pruning similar experts to improve the model's parameter efficiency. We validate the effectiveness of our method by pruning two state-of-the-art MoE models, Mixtral-8x7B and Mixtral-8x22B. Evaluation shows that our method outperforms other model pruning methods on a range of natural language tasks. To facilitate future research, we will release our code and the pruned MoE models. | http://arxiv.org/pdf/2407.09590v1 | [
"Zeliang Zhang",
"Xiaodong Liu",
"Hao Cheng",
"Chenliang Xu",
"Jianfeng Gao"
] | 2024-07-12T17:25:02Z | 2024-07-12T17:25:02Z |
2407.01437 | Needle in the Haystack for Memory Based Large Language Models | Current large language models (LLMs) often perform poorly on simple fact retrieval tasks. Here we investigate if coupling a dynamically adaptable external memory to a LLM can alleviate this problem. For this purpose, we test Larimar, a recently proposed language model architecture which uses an external associative memory, on long-context recall tasks including passkey and needle-in-the-haystack tests. We demonstrate that the external memory of Larimar, which allows fast write and read of an episode of text samples, can be used at test time to handle contexts much longer than those seen during training. We further show that the latent readouts from the memory (to which long contexts are written) control the decoder towards generating correct outputs, with the memory stored off of the GPU. Compared to existing transformer-based LLM architectures for long-context recall tasks that use larger parameter counts or modified attention mechanisms, a relatively smaller size Larimar is able to maintain strong performance without any task-specific training or training on longer contexts. | http://arxiv.org/pdf/2407.01437v2 | [
"Elliot Nelson",
"Georgios Kollias",
"Payel Das",
"Subhajit Chaudhury",
"Soham Dan"
] | 2024-07-12T17:20:34Z | 2024-07-01T16:32:16Z |
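A toy key-value associative memory illustrating the fast write/read interface the abstract describes; this is a generic sketch, not Larimar's actual encoder/decoder-coupled architecture:

```python
# Toy episodic memory: one-shot least-squares write, soft-addressed read.
import numpy as np

class EpisodicMemory:
    def __init__(self, key_dim, val_dim, slots=256, seed=0):
        rng = np.random.default_rng(seed)
        self.K = rng.standard_normal((slots, key_dim)) / np.sqrt(key_dim)
        self.M = np.zeros((slots, val_dim))

    def write(self, keys, values):
        # solve for slot contents M such that the addressed readout matches values
        A = keys @ self.K.T                      # (n, slots) addressing weights
        self.M, *_ = np.linalg.lstsq(A, values, rcond=None)

    def read(self, keys):
        return (keys @ self.K.T) @ self.M        # soft-addressed readout
```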
2407.04268 | NeuFair: Neural Network Fairness Repair with Dropout | This paper investigates neuron dropout as a post-processing bias mitigation for deep neural networks (DNNs). Neural-driven software solutions are increasingly applied in socially critical domains with significant fairness implications. While neural networks are exceptionally good at finding statistical patterns from data, they may encode and amplify existing biases from the historical data. Existing bias mitigation algorithms often require modifying the input dataset or the learning algorithms. We posit that the prevalent dropout methods that prevent over-fitting during training by randomly dropping neurons may be an effective and less intrusive approach to improve the fairness of pre-trained DNNs. However, finding the ideal set of neurons to drop is a combinatorial problem. We propose NeuFair, a family of post-processing randomized algorithms that mitigate unfairness in pre-trained DNNs via dropouts during inference after training. Our randomized search is guided by an objective to minimize discrimination while maintaining the model's utility. We show that our design of randomized algorithms is effective and efficient in improving fairness (up to 69%) with minimal or no model performance degradation. We provide intuitive explanations of these phenomena and carefully examine the influence of various hyperparameters of search algorithms on the results. Finally, we empirically and conceptually compare NeuFair to different state-of-the-art bias mitigators. | http://arxiv.org/pdf/2407.04268v2 | [
"Vishnu Asutosh Dasu",
"Ashish Kumar",
"Saeid Tizpaz-Niari",
"Gang Tan"
] | 2024-07-12T17:10:14Z | 2024-07-05T05:45:34Z |
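A hedged sketch of the inference-time dropout search: randomly sample neuron masks and keep the fairest mask that preserves utility. `predict_fn`, the fairness metric, and the acceptance threshold are illustrative assumptions, not NeuFair's exact algorithm:

```python
# Random-search sketch of post-processing dropout for fairness repair.
import numpy as np

def statistical_parity_diff(preds, group):
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

def search_dropout_mask(predict_fn, X, group, y, n_neurons, trials=200,
                        drop_p=0.05, min_acc=0.9, seed=0):
    rng = np.random.default_rng(seed)
    best_mask, best_spd = None, np.inf
    for _ in range(trials):
        mask = rng.random(n_neurons) >= drop_p   # True = keep neuron
        preds = predict_fn(X, mask)              # assumed hook: model w/ masked neurons
        acc = (preds == y).mean()
        spd = statistical_parity_diff(preds, group)
        if acc >= min_acc and spd < best_spd:    # fairness gain, utility kept
            best_mask, best_spd = mask, spd
    return best_mask, best_spd
```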
2407.09434 | A Perspective on Foundation Models for the Electric Power Grid | Foundation models (FMs) currently dominate news headlines. They employ advanced deep learning architectures to extract structural information autonomously from vast datasets through self-supervision. The resulting rich representations of complex systems and dynamics can be applied to many downstream applications. Therefore, FMs can find uses in electric power grids, challenged by the energy transition and climate change. In this paper, we call for the development of, and state why we believe in, the potential of FMs for electric grids. We highlight their strengths and weaknesses amidst the challenges of a changing grid. We argue that an FM learning from diverse grid data and topologies could unlock transformative capabilities, pioneering a new approach in leveraging AI to redefine how we manage complexity and uncertainty in the electric grid. Finally, we discuss a power grid FM concept, namely GridFM, based on graph neural networks and show how different downstream tasks benefit. | http://arxiv.org/pdf/2407.09434v1 | [
"Hendrik F. Hamann",
"Thomas Brunschwiler",
"Blazhe Gjorgiev",
"Leonardo S. A. Martins",
"Alban Puech",
"Anna Varbella",
"Jonas Weiss",
"Juan Bernabe-Moreno",
"Alexandre Blondin Massé",
"Seong Choi",
"Ian Foster",
"Bri-Mathias Hodge",
"Rishabh Jain",
"Kibaek Kim",
"Vincent Mai",
"François Mirallès",
"Martin De Montigny",
"Octavio Ramos-Leaños",
"Hussein Suprême",
"Le Xie",
"El-Nasser S. Youssef",
"Arnaud Zinflou",
"Alexander J. Belvi",
"Ricardo J. Bessa",
"Bishnu Prasad Bhattari",
"Johannes Schmude",
"Stanislav Sobolevsky"
] | 2024-07-12T17:09:47Z | 2024-07-12T17:09:47Z |
2403.19629 | Metric Learning from Limited Pairwise Preference Comparisons | We study metric learning from preference comparisons under the ideal point model, in which a user prefers an item over another if it is closer to their latent ideal item. These items are embedded into $\mathbb{R}^d$ equipped with an unknown Mahalanobis distance shared across users. While recent work shows that it is possible to simultaneously recover the metric and ideal items given $\mathcal{O}(d)$ pairwise comparisons per user, in practice we often have a limited budget of $o(d)$ comparisons. We study whether the metric can still be recovered, even though it is known that learning individual ideal items is now no longer possible. We show that in general, $o(d)$ comparisons reveal no information about the metric, even with infinitely many users. However, when comparisons are made over items that exhibit low-dimensional structure, each user can contribute to learning the metric restricted to a low-dimensional subspace so that the metric can be jointly identified. We present a divide-and-conquer approach that achieves this, and provide theoretical recovery guarantees and empirical validation. | http://arxiv.org/pdf/2403.19629v2 | [
"Zhi Wang",
"Geelon So",
"Ramya Korlakai Vinayak"
] | 2024-07-12T16:56:18Z | 2024-03-28T17:46:25Z |
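For concreteness, the ideal point model's preference rule can be written as follows (notation assumed here, consistent with the abstract):

```latex
% Ideal point model: user u with latent ideal point \mathbf{u} \in \mathbb{R}^d
% prefers item i over item j exactly when i is closer under the shared metric M:
\[
(\mathbf{x}_i-\mathbf{u})^{\top} M \,(\mathbf{x}_i-\mathbf{u})
\;<\;
(\mathbf{x}_j-\mathbf{u})^{\top} M \,(\mathbf{x}_j-\mathbf{u}),
\qquad M \succ 0 .
\]
```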
2407.09427 | Flow-Based Generative Emulation of Grids of Stellar Evolutionary Models | We present a flow-based generative approach to emulate grids of stellar evolutionary models. By interpreting the input parameters and output properties of these models as multi-dimensional probability distributions, we train conditional normalizing flows to learn and predict the complex relationships between grid inputs and outputs in the form of conditional joint distributions. Leveraging the expressive power and versatility of these flows, we showcase their ability to emulate a variety of evolutionary tracks and isochrones across a continuous range of input parameters. In addition, we describe a simple Bayesian approach for estimating stellar parameters using these flows and demonstrate its application to asteroseismic datasets of red giants observed by the Kepler mission. By applying this approach to red giants in open clusters NGC 6791 and NGC 6819, we illustrate how large age uncertainties can arise when fitting only to global asteroseismic and spectroscopic parameters without prior information on initial helium abundances and mixing length parameter values. We also conduct inference using the flow at a large scale by determining revised estimates of masses and radii for 15,388 field red giants. These estimates show improved agreement with results from existing grid-based modelling, reveal distinct population-level features in the red clump, and suggest that the masses of Kepler red giants previously determined using the corrected asteroseismic scaling relations have been overestimated by 5-10%. | http://arxiv.org/pdf/2407.09427v1 | [
"Marc Hon",
"Yaguang Li",
"Joel Ong"
] | 2024-07-12T16:54:17Z | 2024-07-12T16:54:17Z |
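A minimal conditional-flow sketch in PyTorch: a single conditional affine transform with tractable log-density. Real emulators of this kind stack many coupling or autoregressive layers; names and layer sizes here are illustrative:

```python
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * dim),          # predicts shift and log-scale
        )

    def log_prob(self, y, cond):
        shift, log_scale = self.net(cond).chunk(2, dim=-1)
        z = (y - shift) * torch.exp(-log_scale)  # invertible map y -> z
        base = -0.5 * (z ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(-1)
        return base - log_scale.sum(-1)          # change-of-variables correction

    def sample(self, cond):
        shift, log_scale = self.net(cond).chunk(2, dim=-1)
        return shift + torch.exp(log_scale) * torch.randn_like(shift)
```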
2406.04313 | Improving Alignment and Robustness with Circuit Breakers | AI systems can take harmful actions and are highly vulnerable to adversarial attacks. We present an approach, inspired by recent advances in representation engineering, that interrupts the models as they respond with harmful outputs with "circuit breakers." Existing techniques aimed at improving alignment, such as refusal training, are often bypassed. Techniques such as adversarial training try to plug these holes by countering specific attacks. As an alternative to refusal training and adversarial training, circuit-breaking directly controls the representations that are responsible for harmful outputs in the first place. Our technique can be applied to both text-only and multimodal language models to prevent the generation of harmful outputs without sacrificing utility -- even in the presence of powerful unseen attacks. Notably, while adversarial robustness in standalone image recognition remains an open challenge, circuit breakers allow the larger multimodal system to reliably withstand image "hijacks" that aim to produce harmful content. Finally, we extend our approach to AI agents, demonstrating considerable reductions in the rate of harmful actions when they are under attack. Our approach represents a significant step forward in the development of reliable safeguards to harmful behavior and adversarial attacks. | http://arxiv.org/pdf/2406.04313v4 | [
"Andy Zou",
"Long Phan",
"Justin Wang",
"Derek Duenas",
"Maxwell Lin",
"Maksym Andriushchenko",
"Rowan Wang",
"Zico Kolter",
"Matt Fredrikson",
"Dan Hendrycks"
] | 2024-07-12T16:51:07Z | 2024-06-06T17:57:04Z |
2306.15552 | A Survey on Deep Learning Hardware Accelerators for Heterogeneous HPC
Platforms | Recent trends in deep learning (DL) imposed hardware accelerators as the most viable solution for several classes of high-performance computing (HPC) applications such as image classification, computer vision, and speech recognition. This survey summarizes and classifies the most recent advances in designing DL accelerators suitable to reach the performance requirements of HPC applications. In particular, it highlights the most advanced approaches to support deep learning accelerations including not only GPU and TPU-based accelerators but also design-specific hardware accelerators such as FPGA-based and ASIC-based accelerators, Neural Processing Units, open hardware RISC-V-based accelerators and co-processors. The survey also describes accelerators based on emerging memory technologies and computing paradigms, such as 3D-stacked Processor-In-Memory, non-volatile memories (mainly, Resistive RAM and Phase Change Memories) to implement in-memory computing, Neuromorphic Processing Units, and accelerators based on Multi-Chip Modules. Among emerging technologies, we also include some insights into quantum-based accelerators and photonics. To conclude, the survey classifies the most influential architectures and technologies proposed in the last years, with the purpose of offering the reader a comprehensive perspective in the rapidly evolving field of deep learning. | http://arxiv.org/pdf/2306.15552v2 | [
"Cristina Silvano",
"Daniele Ielmini",
"Fabrizio Ferrandi",
"Leandro Fiorin",
"Serena Curzel",
"Luca Benini",
"Francesco Conti",
"Angelo Garofalo",
"Cristian Zambelli",
"Enrico Calore",
"Sebastiano Fabio Schifano",
"Maurizio Palesi",
"Giuseppe Ascia",
"Davide Patti",
"Nicola Petra",
"Davide De Caro",
"Luciano Lavagno",
"Teodoro Urso",
"Valeria Cardellini",
"Gian Carlo Cardarilli",
"Robert Birke",
"Stefania Perri"
] | 2024-07-12T16:50:59Z | 2023-06-27T15:24:24Z |
2407.09415 | A Benchmark Environment for Offline Reinforcement Learning in Racing
Games | Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL) by eliminating the need for continuous environmental interactions. ORL exploits a dataset of pre-collected transitions and thus expands the range of application of RL to tasks in which the excessive environment queries increase training time and decrease efficiency, such as in modern AAA games. This paper introduces OfflineMania, a novel environment for ORL research. It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine. The environment simulates a single-agent racing game in which the objective is to complete the track through optimal navigation. We provide a variety of datasets to assess ORL performance. These datasets, created from policies of varying ability and in different sizes, aim to offer a challenging testbed for algorithm development and evaluation. We further establish a set of baselines for a range of Online RL, ORL, and hybrid Offline to Online RL approaches using our environment. | http://arxiv.org/pdf/2407.09415v1 | [
"Girolamo Macaluso",
"Alessandro Sestini",
"Andrew D. Bagdanov"
] | 2024-07-12T16:44:03Z | 2024-07-12T16:44:03Z |
2407.04622 | On scalable oversight with weak LLMs judging strong LLMs | Scalable oversight protocols aim to enable humans to accurately supervise superhuman AI. In this paper we study debate, where two AIs compete to convince a judge; consultancy, where a single AI tries to convince a judge that asks questions; and compare to a baseline of direct question-answering, where the judge just answers outright without the AI. We use large language models (LLMs) as both AI agents and as stand-ins for human judges, taking the judge models to be weaker than agent models. We benchmark on a diverse range of asymmetries between judges and agents, extending previous work on a single extractive QA task with information asymmetry, to also include mathematics, coding, logic and multimodal reasoning asymmetries. We find that debate outperforms consultancy across all tasks when the consultant is randomly assigned to argue for the correct/incorrect answer. Comparing debate to direct question answering, the results depend on the type of task: in extractive QA tasks with information asymmetry debate outperforms direct question answering, but in other tasks without information asymmetry the results are mixed. Previous work assigned debaters/consultants an answer to argue for. When we allow them to instead choose which answer to argue for, we find judges are less frequently convinced by the wrong answer in debate than in consultancy. Further, we find that stronger debater models increase judge accuracy, though more modestly than in previous studies. | http://arxiv.org/pdf/2407.04622v2 | [
"Zachary Kenton",
"Noah Y. Siegel",
"János Kramár",
"Jonah Brown-Cohen",
"Samuel Albanie",
"Jannis Bulian",
"Rishabh Agarwal",
"David Lindner",
"Yunhao Tang",
"Noah D. Goodman",
"Rohin Shah"
] | 2024-07-12T16:38:12Z | 2024-07-05T16:29:15Z |
2403.09302 | StainFuser: Controlling Diffusion for Faster Neural Style Transfer in
Multi-Gigapixel Histology Images | Stain normalization algorithms aim to transform the color and intensity characteristics of a source multi-gigapixel histology image to match those of a target image, mitigating inconsistencies in the appearance of stains used to highlight cellular components in the images. We propose a new approach, StainFuser, which treats this problem as a style transfer task using a novel Conditional Latent Diffusion architecture, eliminating the need for handcrafted color components. With this method, we curate SPI-2M the largest stain normalization dataset to date of over 2 million histology images with neural style transfer for high-quality transformations. Trained on this data, StainFuser outperforms current state-of-the-art deep learning and handcrafted methods in terms of the quality of normalized images and in terms of downstream model performance on the CoNIC dataset. | http://arxiv.org/pdf/2403.09302v2 | [
"Robert Jewsbury",
"Ruoyu Wang",
"Abhir Bhalerao",
"Nasir Rajpoot",
"Quoc Dang Vu"
] | 2024-07-12T16:27:06Z | 2024-03-14T11:49:43Z |
2407.09387 | Meta-Analysis with Untrusted Data | [See paper for full abstract] Meta-analysis is a crucial tool for answering scientific questions. It is usually conducted on a relatively small amount of ``trusted'' data -- ideally from randomized, controlled trials -- which allow causal effects to be reliably estimated with minimal assumptions. We show how to answer causal questions much more precisely by making two changes. First, we incorporate untrusted data drawn from large observational databases, related scientific literature and practical experience -- without sacrificing rigor or introducing strong assumptions. Second, we train richer models capable of handling heterogeneous trials, addressing a long-standing challenge in meta-analysis. Our approach is based on conformal prediction, which fundamentally produces rigorous prediction intervals, but doesn't handle indirect observations: in meta-analysis, we observe only noisy effects due to the limited number of participants in each trial. To handle noise, we develop a simple, efficient version of fully-conformal kernel ridge regression, based on a novel condition called idiocentricity. We introduce noise-correcting terms in the residuals and analyze their interaction with a ``variance shaving'' technique. In multiple experiments on healthcare datasets, our algorithms deliver tighter, sounder intervals than traditional ones. This paper charts a new course for meta-analysis and evidence-based medicine, where heterogeneity and untrusted data are embraced for more nuanced and precise predictions. | http://arxiv.org/pdf/2407.09387v1 | [
"Shiva Kaul",
"Geoffrey J. Gordon"
] | 2024-07-12T16:07:53Z | 2024-07-12T16:07:53Z |
2309.06212 | Long-term drought prediction using deep neural networks based on
geospatial weather data | The problem of high-quality drought forecasting up to a year in advance is critical for agriculture planning and insurance. Yet, it is still unsolved with reasonable accuracy due to data complexity and aridity stochasticity. We tackle drought data by introducing an end-to-end approach that adopts a spatio-temporal neural network model with accessible open monthly climate data as the input. Our systematic research employs diverse proposed models and five distinct environmental regions as a testbed to evaluate the efficacy of the Palmer Drought Severity Index (PDSI) prediction. Key aggregated findings are the exceptional performance of a Transformer model, EarthFormer, in making accurate short-term (up to six months) forecasts. At the same time, the Convolutional LSTM excels in longer-term forecasting. | http://arxiv.org/pdf/2309.06212v6 | [
"Alexander Marusov",
"Vsevolod Grabar",
"Yury Maximov",
"Nazar Sotiriadi",
"Alexander Bulkin",
"Alexey Zaytsev"
] | 2024-07-12T16:05:50Z | 2023-09-12T13:28:06Z |
2407.09381 | The Effectiveness of Curvature-Based Rewiring and the Role of
Hyperparameters in GNNs Revisited | Message passing is the dominant paradigm in Graph Neural Networks (GNNs). The efficiency of message passing, however, can be limited by the topology of the graph. This happens when information is lost during propagation due to being oversquashed when travelling through bottlenecks. To remedy this, recent efforts have focused on graph rewiring techniques, which disconnect the input graph originating from the data and the computational graph, on which message passing is performed. A prominent approach for this is to use discrete graph curvature measures, of which several variants have been proposed, to identify and rewire around bottlenecks, facilitating information propagation. While oversquashing has been demonstrated in synthetic datasets, in this work we reevaluate the performance gains that curvature-based rewiring brings to real-world datasets. We show that in these datasets, edges selected during the rewiring process are not in line with theoretical criteria identifying bottlenecks. This implies they do not necessarily oversquash information during message passing. Subsequently, we demonstrate that SOTA accuracies on these datasets are outliers originating from sweeps of hyperparameters -- both the ones for training and dedicated ones related to the rewiring algorithm -- instead of consistent performance gains. In conclusion, our analysis nuances the effectiveness of curvature-based rewiring in real-world datasets and brings a new perspective on the methods to evaluate GNN accuracy improvements. | http://arxiv.org/pdf/2407.09381v1 | [
"Floriano Tori",
"Vincent Holst",
"Vincent Ginis"
] | 2024-07-12T16:03:58Z | 2024-07-12T16:03:58Z |
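For intuition, a sketch of curvature-based rewiring using Forman curvature, one of the simplest discrete curvature variants in this literature; the paper evaluates several curvature measures and rewiring schemes that this toy version does not reproduce:

```python
import networkx as nx

def forman_curvature(G, u, v):
    # simplest edge-level Forman curvature for unweighted graphs
    return 4 - G.degree(u) - G.degree(v)

def rewire_most_negative(G, n_edges=1):
    G = G.copy()
    for _ in range(n_edges):
        u, v = min(G.edges, key=lambda e: forman_curvature(G, *e))
        # support the bottleneck edge by connecting neighbourhoods around it
        candidates = [(a, b) for a in G[u] for b in G[v]
                      if a != b and not G.has_edge(a, b)]
        if candidates:
            G.add_edge(*candidates[0])
    return G
```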
2407.09378 | Graph Neural Network Causal Explanation via Neural Causal Models | Graph neural network (GNN) explainers identify the important subgraph that ensures the prediction for a given graph. Until now, almost all GNN explainers are based on association, which is prone to spurious correlations. We propose {name}, a GNN causal explainer via causal inference. Our explainer is based on the observation that a graph often consists of a causal underlying subgraph. {name} includes three main steps: 1) It builds causal structure and the corresponding structural causal model (SCM) for a graph, which enables the cause-effect calculation among nodes. 2) Directly calculating the cause-effect in real-world graphs is computationally challenging. It is then enlightened by the recent neural causal model (NCM), a special type of SCM that is trainable, and design customized NCMs for GNNs. By training these GNN NCMs, the cause-effect can be easily calculated. 3) It uncovers the subgraph that causally explains the GNN predictions via the optimized GNN-NCMs. Evaluation results on multiple synthetic and real-world graphs validate that {name} significantly outperforms existing GNN explainers in exact groundtruth explanation identification | http://arxiv.org/pdf/2407.09378v1 | [
"Arman Behnam",
"Binghui Wang"
] | 2024-07-12T15:56:33Z | 2024-07-12T15:56:33Z |
2407.09375 | HiPPO-Prophecy: State-Space Models can Provably Learn Dynamical Systems
in Context | This work explores the in-context learning capabilities of State Space Models (SSMs) and presents, to the best of our knowledge, the first theoretical explanation of a possible underlying mechanism. We introduce a novel weight construction for SSMs, enabling them to predict the next state of any dynamical system after observing previous states without parameter fine-tuning. This is accomplished by extending the HiPPO framework to demonstrate that continuous SSMs can approximate the derivative of any input signal. Specifically, we find an explicit weight construction for continuous SSMs and provide an asymptotic error bound on the derivative approximation. The discretization of this continuous SSM subsequently yields a discrete SSM that predicts the next state. Finally, we demonstrate the effectiveness of our parameterization empirically. This work should be an initial step toward understanding how sequence models based on SSMs learn in context. | http://arxiv.org/pdf/2407.09375v1 | [
"Federico Arangath Joseph",
"Kilian Haefeli",
"Noah Liniger",
"Caglar Gulcehre"
] | 2024-07-12T15:56:11Z | 2024-07-12T15:56:11Z |
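The generic continuous-time SSM underlying this construction, as a LaTeX sketch (the paper's contribution is the explicit weight construction, which is not shown here):

```latex
% Generic continuous-time SSM; the paper constructs explicit (A, B) such that
% the state approximates the input's derivative in a HiPPO basis:
\[
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t),
\]
% so that a discretized model can predict the next observation via, roughly,
\[
u(t+\Delta) \;\approx\; u(t) + \Delta\,\widehat{u'}(t).
\]
```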
2407.09373 | Towards Personalised Patient Risk Prediction Using Temporal Hospital
Data Trajectories | Quantifying a patient's health status provides clinicians with insight into patient risk, and the ability to better triage and manage resources. Early Warning Scores (EWS) are widely deployed to measure overall health status, and risk of adverse outcomes, in hospital patients. However, current EWS are limited both by their lack of personalisation and use of static observations. We propose a pipeline that groups intensive care unit patients by the trajectories of observations data throughout their stay as a basis for the development of personalised risk predictions. Feature importance is considered to provide model explainability. Using the MIMIC-IV dataset, six clusters were identified, capturing differences in disease codes, observations, lengths of admissions and outcomes. Applying the pipeline to data from just the first four hours of each ICU stay assigns the majority of patients to the same cluster as when the entire stay duration is considered. In-hospital mortality prediction models trained on individual clusters had higher F1 score performance in five of the six clusters when compared against the unclustered patient cohort. The pipeline could form the basis of a clinical decision support tool, working to improve the clinical characterisation of risk groups and the early detection of patient deterioration. | http://arxiv.org/pdf/2407.09373v1 | [
"Thea Barnes",
"Enrico Werner",
"Jeffrey N. Clark",
"Raul Santos-Rodriguez"
] | 2024-07-12T15:53:26Z | 2024-07-12T15:53:26Z |
2407.09370 | Learning High-Frequency Functions Made Easy with Sinusoidal Positional
Encoding | Fourier features based positional encoding (PE) is commonly used in machine learning tasks that involve learning high-frequency features from low-dimensional inputs, such as 3D view synthesis and time series regression with neural tangent kernels. Despite their effectiveness, existing PEs require manual, empirical adjustment of crucial hyperparameters, specifically the Fourier features, tailored to each unique task. Further, PEs face challenges in efficiently learning high-frequency functions, particularly in tasks with limited data. In this paper, we introduce sinusoidal PE (SPE), designed to efficiently learn adaptive frequency features closely aligned with the true underlying function. Our experiments demonstrate that SPE, without hyperparameter tuning, consistently achieves enhanced fidelity and faster training across various tasks, including 3D view synthesis, Text-to-Speech generation, and 1D regression. SPE is implemented as a direct replacement for existing PEs. Its plug-and-play nature lets numerous tasks easily adopt and benefit from SPE. | http://arxiv.org/pdf/2407.09370v1 | [
"Chuanhao Sun",
"Zhihang Yuan",
"Kai Xu",
"Luo Mai",
"Siddharth N",
"Shuo Chen",
"Mahesh K. Marina"
] | 2024-07-12T15:51:53Z | 2024-07-12T15:51:53Z |
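A sketch of a positional encoding with learnable frequencies, capturing the adaptive-frequency idea; SPE's exact parameterization in the paper may differ:

```python
import torch
import torch.nn as nn

class LearnableSinusoidalPE(nn.Module):
    def __init__(self, in_dim, n_freqs=16):
        super().__init__()
        # frequencies are trained jointly with the downstream network,
        # instead of being fixed by hand as in standard Fourier-feature PE
        self.freqs = nn.Parameter(torch.randn(in_dim, n_freqs))

    def forward(self, x):                         # x: (batch, in_dim)
        proj = 2 * torch.pi * x.unsqueeze(-1) * self.freqs  # (batch, in_dim, n_freqs)
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return feats.flatten(start_dim=1)         # (batch, in_dim * 2 * n_freqs)
```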
2303.07814 | MS-TCRNet: Multi-Stage Temporal Convolutional Recurrent Networks for
Action Segmentation Using Sensor-Augmented Kinematics | Action segmentation is a challenging task in high-level process analysis, typically performed on video or kinematic data obtained from various sensors. This work presents two contributions related to action segmentation on kinematic data. Firstly, we introduce two versions of Multi-Stage Temporal Convolutional Recurrent Networks (MS-TCRNet), specifically designed for kinematic data. The architectures consist of a prediction generator with intra-stage regularization and Bidirectional LSTM or GRU-based refinement stages. Secondly, we propose two new data augmentation techniques, World Frame Rotation and Hand Inversion, which utilize the strong geometric structure of kinematic data to improve algorithm performance and robustness. We evaluate our models on three datasets of surgical suturing tasks: the Variable Tissue Simulation (VTS) Dataset and the newly introduced Bowel Repair Simulation (BRS) Dataset, both of which are open surgery simulation datasets collected by us, as well as the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a well-known benchmark in robotic surgery. Our methods achieved state-of-the-art performance. | http://arxiv.org/pdf/2303.07814v2 | [
"Adam Goldbraikh",
"Omer Shubi",
"Or Rubin",
"Carla M Pugh",
"Shlomi Laufer"
] | 2024-07-12T15:48:09Z | 2023-03-14T11:44:58Z |
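A sketch of the World Frame Rotation augmentation on raw 3-D positions, exploiting the geometric structure of kinematic data; how the paper handles orientations and velocities is not reproduced here:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def world_frame_rotation(positions, rng=None):
    """positions: (T, n_markers, 3) trajectory of 3-D points."""
    R = Rotation.random(random_state=rng).as_matrix()  # one random SO(3) element
    return positions @ R.T                             # same rotation, every frame
```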
2402.09497 | Instruction Tuning for Secure Code Generation | Modern language models (LMs) have gained widespread acceptance in everyday and professional contexts, particularly in programming. An essential procedure enabling this adoption is instruction tuning, which substantially enhances LMs' practical utility by training them to follow user instructions and human preferences. However, existing instruction tuning schemes overlook a crucial aspect: the security of generated code. As a result, even the state-of-the-art instruction-tuned LMs frequently produce unsafe code, posing significant security risks. In this work, we introduce SafeCoder to address this gap. SafeCoder performs security-centric fine-tuning using a diverse and high-quality dataset that we collected using an automated pipeline. We integrate the security fine-tuning with standard instruction tuning, to facilitate a joint optimization of both security and utility. Despite its simplicity, we show that SafeCoder is effective across a variety of popular LMs and datasets. It is able to drastically improve security (by about 30%), while preserving utility. | http://arxiv.org/pdf/2402.09497v2 | [
"Jingxuan He",
"Mark Vero",
"Gabriela Krasnopolska",
"Martin Vechev"
] | 2024-07-12T15:45:57Z | 2024-02-14T15:47:46Z |
2402.14012 | Chasing Convex Functions with Long-term Constraints | We introduce and study a family of online metric problems with long-term constraints. In these problems, an online player makes decisions $\mathbf{x}_t$ in a metric space $(X,d)$ to simultaneously minimize their hitting cost $f_t(\mathbf{x}_t)$ and switching cost as determined by the metric. Over the time horizon $T$, the player must satisfy a long-term demand constraint $\sum_{t} c(\mathbf{x}_t) \geq 1$, where $c(\mathbf{x}_t)$ denotes the fraction of demand satisfied at time $t$. Such problems can find a wide array of applications to online resource allocation in sustainable energy/computing systems. We devise optimal competitive and learning-augmented algorithms for the case of bounded hitting cost gradients and weighted $\ell_1$ metrics, and further show that our proposed algorithms perform well in numerical experiments. | http://arxiv.org/pdf/2402.14012v2 | [
"Adam Lechowicz",
"Nicolas Christianson",
"Bo Sun",
"Noman Bashir",
"Mohammad Hajiesmaili",
"Adam Wierman",
"Prashant Shenoy"
] | 2024-07-12T15:44:38Z | 2024-02-21T18:51:42Z |
2407.09360 | Novel clustered federated learning based on local loss | This paper proposes LCFL, a novel clustering metric for evaluating clients' data distributions in federated learning. LCFL aligns with federated learning requirements, accurately assessing client-to-client variations in data distribution. It offers advantages over existing clustered federated learning methods, addressing privacy concerns, improving applicability to non-convex models, and providing more accurate classification results. LCFL does not require prior knowledge of clients' data distributions. We provide a rigorous mathematical analysis, demonstrating the correctness and feasibility of our framework. Numerical experiments with neural network instances highlight the superior performance of LCFL over baselines on several clustered federated learning benchmarks. | http://arxiv.org/pdf/2407.09360v1 | [
"Endong Gu",
"Yongxin Chen",
"Hao Wen",
"Xingju Cai",
"Deren Han"
] | 2024-07-12T15:37:05Z | 2024-07-12T15:37:05Z |
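A generic local-loss cluster-assignment step in the spirit of clustered federated learning (cf. IFCA); LCFL's actual clustering metric is defined in the paper and differs from this sketch:

```python
import numpy as np

def assign_clients(client_losses):
    """client_losses[i][k] = loss of cluster-k model on client i's local data."""
    L = np.asarray(client_losses)
    return L.argmin(axis=1)            # each client joins its best-fit cluster

# usage: assign_clients([[0.9, 0.2], [0.3, 0.8]]) -> array([1, 0])
```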
2211.05667 | What Makes a Good Explanation?: A Harmonized View of Properties of
Explanations | Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empower human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to determine if an early cardiac arrest warning system is ready to be integrated into a care setting is very different from the type of explanation required for a loan applicant to help determine the actions they might need to take to make their application successful. Unfortunately, there is a lack of standardization when it comes to properties of explanations: different papers may use the same term to mean different quantities, and different terms to mean the same quantity. This lack of a standardized terminology and categorization of the properties of ML explanations prevents us from both rigorously comparing interpretable machine learning methods and identifying what properties are needed in what contexts. In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties. In doing so, we enable more informed selection of task-appropriate formulations of explanation properties as well as standardization for future work in interpretable machine learning. | http://arxiv.org/pdf/2211.05667v3 | [
"Zixi Chen",
"Varshini Subhash",
"Marton Havasi",
"Weiwei Pan",
"Finale Doshi-Velez"
] | 2024-07-12T15:34:29Z | 2022-11-10T16:04:28Z |
2210.06112 | Efficient Bayesian Updates for Deep Learning via Laplace Approximations | Since training deep neural networks takes significant computational resources, extending the training dataset with new data is difficult, as it typically requires complete retraining. Moreover, specific applications do not allow costly retraining due to time or computational constraints. We address this issue by proposing a novel Bayesian update method for deep neural networks by using a last-layer Laplace approximation. Concretely, we leverage second-order optimization techniques on the Gaussian posterior distribution of a Laplace approximation, computing the inverse Hessian matrix in closed form. This way, our method allows for fast and effective updates upon the arrival of new data in a stationary setting. A large-scale evaluation study across different data modalities confirms that our updates are a fast and competitive alternative to costly retraining. Furthermore, we demonstrate its applicability in a deep active learning scenario by using our update to improve existing selection strategies. | http://arxiv.org/pdf/2210.06112v2 | [
"Denis Huseljic",
"Marek Herde",
"Lukas Rauch",
"Paul Hahn",
"Zhixin Huang",
"Daniel Kottke",
"Stephan Vogt",
"Bernhard Sick"
] | 2024-07-12T15:23:28Z | 2022-10-12T12:16:46Z |
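A linear-Gaussian analogue of the last-layer update: with a Gaussian posterior over last-layer weights, new data updates the posterior in closed form without retraining earlier layers. Variable names are illustrative:

```python
import numpy as np

def bayesian_update(mu, Sigma, Phi, y, noise_var=1.0):
    """Closed-form update of a Gaussian posterior N(mu, Sigma) over weights,
    given new last-layer features Phi (n, d) and targets y (n,)."""
    prec = np.linalg.inv(Sigma)                    # prior precision
    prec_new = prec + Phi.T @ Phi / noise_var      # add data curvature (Hessian)
    Sigma_new = np.linalg.inv(prec_new)
    mu_new = Sigma_new @ (prec @ mu + Phi.T @ y / noise_var)
    return mu_new, Sigma_new
```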
2407.07933 | Identification and Estimation of the Bi-Directional MR with Some Invalid
Instruments | We consider the challenging problem of estimating causal effects from purely observational data in the bi-directional Mendelian randomization (MR), where some invalid instruments, as well as unmeasured confounding, usually exist. To address this problem, most existing methods attempt to find proper valid instrumental variables (IVs) for the target causal effect by expert knowledge or by assuming that the causal model is a one-directional MR model. As such, in this paper, we first theoretically investigate the identification of the bi-directional MR from observational data. In particular, we provide necessary and sufficient conditions under which valid IV sets are correctly identified such that the bi-directional MR model is identifiable, including the causal directions of a pair of phenotypes (i.e., the treatment and outcome). Moreover, based on the identification theory, we develop a cluster fusion-like method to discover valid IV sets and estimate the causal effects of interest. We theoretically demonstrate the correctness of the proposed algorithm. Experimental results show the effectiveness of our method for estimating causal effects in bi-directional MR. | http://arxiv.org/pdf/2407.07933v2 | [
"Feng Xie",
"Zhen Yao",
"Lin Xie",
"Yan Zeng",
"Zhi Geng"
] | 2024-07-12T15:15:58Z | 2024-07-10T12:58:30Z |
2407.06124 | Structured Generations: Using Hierarchical Clusters to guide Diffusion
Models | This paper introduces Diffuse-TreeVAE, a deep generative model that integrates hierarchical clustering into the framework of Denoising Diffusion Probabilistic Models (DDPMs). The proposed approach generates new images by sampling from a root embedding of a learned latent tree VAE-based structure, then propagates through hierarchical paths, and utilizes a second-stage DDPM to refine and generate distinct, high-quality images for each data cluster. The result is a model that not only improves image clarity but also ensures that the generated samples are representative of their respective clusters, addressing the limitations of previous VAE-based methods and advancing the state of clustering-based generative modeling. | http://arxiv.org/pdf/2407.06124v2 | [
"Jorge da Silva Goncalves",
"Laura Manduchi",
"Moritz Vandenhirtz",
"Julia E. Vogt"
] | 2024-07-12T15:15:03Z | 2024-07-08T17:00:28Z |
2407.09336 | Guidelines for Augmentation Selection in Contrastive Learning for Time
Series Classification | Self-supervised contrastive learning has become a key technique in deep learning, particularly in time series analysis, due to its ability to learn meaningful representations without explicit supervision. Augmentation is a critical component in contrastive learning, where different augmentations can dramatically impact performance, sometimes influencing accuracy by over 30%. However, the selection of augmentations is predominantly empirical, which can be suboptimal, or relies on grid search, which is time-consuming. In this paper, we establish a principled framework for selecting augmentations based on dataset characteristics such as trend and seasonality. Specifically, we construct 12 synthetic datasets incorporating trend, seasonality, and integration weights. We then evaluate the effectiveness of 8 different augmentations across these synthetic datasets, thereby inducing generalizable associations between time series characteristics and augmentation efficiency. Additionally, we evaluated the induced associations across 6 real-world datasets encompassing domains such as activity recognition, disease diagnosis, traffic monitoring, electricity usage, mechanical fault prognosis, and finance. These real-world datasets are diverse, covering a range from 1 to 12 channels, 2 to 10 classes, sequence lengths of 14 to 1280, and data frequencies from 250 Hz to daily intervals. The experimental results show that our proposed trend-seasonality-based augmentation recommendation algorithm can accurately identify the effective augmentations for a given time series dataset, achieving an average Recall@3 of 0.667, outperforming baselines. Our work provides guidance for studies employing contrastive learning in time series analysis, with wide-ranging applications. All the code, datasets, and analysis results will be released at https://github.com/DL4mHealth/TS-Contrastive-Augmentation-Recommendation. | http://arxiv.org/pdf/2407.09336v1 | [
"Ziyu Liu",
"Azadeh Alavi",
"Minyi Li",
"Xiang Zhang"
] | 2024-07-12T15:13:16Z | 2024-07-12T15:13:16Z |
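Two of the standard time-series augmentations typically compared in this setting, as a sketch; the paper's recommendation algorithm selects among eight such transforms:

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """x: (channels, length) series; add small Gaussian noise."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def scaling(x, sigma=0.1, rng=None):
    """Multiply each channel by a random factor near 1."""
    rng = rng or np.random.default_rng()
    factor = rng.normal(1.0, sigma, size=(x.shape[0], 1))
    return x * factor
```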
2407.07674 | Feasibility Study on Active Learning of Smart Surrogates for Scientific
Simulations | High-performance scientific simulations, important for comprehension of complex systems, encounter computational challenges especially when exploring extensive parameter spaces. There has been an increasing interest in developing deep neural networks (DNNs) as surrogate models capable of accelerating the simulations. However, existing approaches for training these DNN surrogates rely on extensive simulation data which are heuristically selected and generated with expensive computation -- a challenge under-explored in the literature. In this paper, we investigate the potential of incorporating active learning into DNN surrogate training. This allows intelligent and objective selection of training simulations, reducing the need to generate extensive simulation data as well as the dependency of the performance of DNN surrogates on pre-defined training simulations. In the problem context of constructing DNN surrogates for diffusion equations with sources, we examine the efficacy of diversity- and uncertainty-based strategies for selecting training simulations, considering two different DNN architectures. The results set the groundwork for developing the high-performance computing infrastructure for Smart Surrogates that supports on-the-fly generation of simulation data steered by active learning strategies to potentially improve the efficiency of scientific simulations. | http://arxiv.org/pdf/2407.07674v2 | [
"Pradeep Bajracharya",
"Javier Quetzalcóatl Toledo-Marín",
"Geoffrey Fox",
"Shantenu Jha",
"Linwei Wang"
] | 2024-07-12T15:10:53Z | 2024-07-10T14:00:20Z |
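A minimal uncertainty-based acquisition step of the kind studied here, using ensemble disagreement as the uncertainty signal; the paper's diversity-based strategies are not shown:

```python
import numpy as np

def select_next_simulations(ensemble_preds, n_select=8):
    """ensemble_preds: (n_models, n_candidates, output_dim) surrogate predictions
    on candidate simulation parameters; returns indices to simulate next."""
    var = ensemble_preds.var(axis=0).mean(axis=-1)  # predictive disagreement
    return np.argsort(var)[-n_select:]              # most uncertain candidates
```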
2403.11938 | State space representations of the Roesser type for convolutional layers | From the perspective of control theory, convolutional layers (of neural networks) are 2-D (or N-D) linear time-invariant dynamical systems. The usual representation of convolutional layers by the convolution kernel corresponds to the representation of a dynamical system by its impulse response. However, many analysis tools from control theory, e.g., involving linear matrix inequalities, require a state space representation. For this reason, we explicitly provide a state space representation of the Roesser type for 2-D convolutional layers with $c_\mathrm{in}r_1 + c_\mathrm{out}r_2$ states, where $c_\mathrm{in}$/$c_\mathrm{out}$ is the number of input/output channels of the layer and $r_1$/$r_2$ characterizes the width/length of the convolution kernel. This representation is shown to be minimal for $c_\mathrm{in} = c_\mathrm{out}$. We further construct state space representations for dilated, strided, and N-D convolutions. | http://arxiv.org/pdf/2403.11938v2 | [
"Patricia Pauli",
"Dennis Gramlich",
"Frank Allgöwer"
] | 2024-07-12T15:08:15Z | 2024-03-18T16:35:13Z |
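The standard 2-D Roesser model that the paper builds on, with horizontal and vertical state components:

```latex
\[
\begin{bmatrix} x^{h}_{i+1,j} \\ x^{v}_{i,j+1} \end{bmatrix}
=
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
\begin{bmatrix} x^{h}_{i,j} \\ x^{v}_{i,j} \end{bmatrix}
+
\begin{bmatrix} B_{1} \\ B_{2} \end{bmatrix} u_{i,j},
\qquad
y_{i,j} = \begin{bmatrix} C_{1} & C_{2} \end{bmatrix}
\begin{bmatrix} x^{h}_{i,j} \\ x^{v}_{i,j} \end{bmatrix} + D\,u_{i,j}.
\]
```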
2407.09324 | Provable Privacy Advantages of Decentralized Federated Learning via
Distributed Optimization | Federated learning (FL) emerged as a paradigm designed to improve data privacy by enabling data to reside at its source, thus embedding privacy as a core consideration in FL architectures, whether centralized or decentralized. Contrasting with recent findings by Pasquini et al., which suggest that decentralized FL does not empirically offer any additional privacy or security benefits over centralized models, our study provides compelling evidence to the contrary. We demonstrate that decentralized FL, when deploying distributed optimization, provides enhanced privacy protection - both theoretically and empirically - compared to centralized approaches. The challenge of quantifying privacy loss through iterative processes has traditionally constrained the theoretical exploration of FL protocols. We overcome this by conducting a pioneering in-depth information-theoretical privacy analysis for both frameworks. Our analysis, considering both eavesdropping and passive adversary models, successfully establishes bounds on privacy leakage. We show information theoretically that the privacy loss in decentralized FL is upper bounded by the loss in centralized FL. Compared to the centralized case where local gradients of individual participants are directly revealed, a key distinction of optimization-based decentralized FL is that the relevant information includes differences of local gradients over successive iterations and the aggregated sum of different nodes' gradients over the network. This information complicates the adversary's attempt to infer private data. To bridge our theoretical insights with practical applications, we present detailed case studies involving logistic regression and deep neural networks. These examples demonstrate that while privacy leakage remains comparable in simpler models, complex models like deep neural networks exhibit lower privacy risks under decentralized FL. | http://arxiv.org/pdf/2407.09324v1 | [
"Wenrui Yu",
"Qiongxiu Li",
"Milan Lopuhaä-Zwakenberg",
"Mads Græsbøll Christensen",
"Richard Heusdens"
] | 2024-07-12T15:01:09Z | 2024-07-12T15:01:09Z |
2403.15371 | Can large language models explore in-context? | We investigate the extent to which contemporary Large Language Models (LLMs) can engage in exploration, a core capability in reinforcement learning and decision making. We focus on native performance of existing LLMs, without training interventions. We deploy LLMs as agents in simple multi-armed bandit environments, specifying the environment description and interaction history entirely in-context, i.e., within the LLM prompt. We experiment with GPT-3.5, GPT-4, and Llama2, using a variety of prompt designs, and find that the models do not robustly engage in exploration without substantial interventions: i) Across all of our experiments, only one configuration resulted in satisfactory exploratory behavior: GPT-4 with chain-of-thought reasoning and an externally summarized interaction history, presented as sufficient statistics; ii) All other configurations did not result in robust exploratory behavior, including those with chain-of-thought reasoning but unsummarized history. Although these findings can be interpreted positively, they suggest that external summarization -- which may not be possible in more complex settings -- is important for obtaining desirable behavior from LLM agents. We conclude that non-trivial algorithmic interventions, such as fine-tuning or dataset curation, may be required to empower LLM-based decision making agents in complex settings. | http://arxiv.org/pdf/2403.15371v2 | [
"Akshay Krishnamurthy",
"Keegan Harris",
"Dylan J. Foster",
"Cyril Zhang",
"Aleksandrs Slivkins"
] | 2024-07-12T14:52:49Z | 2024-03-22T17:50:43Z |
2407.01394 | Gloss2Text: Sign Language Gloss translation using LLMs and Semantically
Aware Label Smoothing | Sign language translation from video to spoken text presents unique challenges owing to the distinct grammar, expression nuances, and high variation of visual appearance across different speakers and contexts. The intermediate gloss annotations of videos aim to guide the translation process. In our work, we focus on the {\em Gloss2Text} translation stage and propose several advances by leveraging pre-trained large language models (LLMs), data augmentation, and a novel label-smoothing loss function that exploits gloss translation ambiguities, significantly improving the performance of state-of-the-art approaches. Through extensive experiments and ablation studies on the PHOENIX Weather 2014T dataset, our approach surpasses state-of-the-art performance in {\em Gloss2Text} translation, indicating its efficacy in addressing sign language translation and suggesting promising avenues for future research and development. | http://arxiv.org/pdf/2407.01394v2 | [
"Pooya Fayyazsanavi",
"Antonios Anastasopoulos",
"Jana Košecká"
] | 2024-07-12T14:44:33Z | 2024-07-01T15:46:45Z |
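A generic stand-in for semantically aware label smoothing: smoothing mass is distributed in proportion to label-embedding similarity rather than uniformly. The paper's loss is built from gloss-translation ambiguities, which this sketch does not model:

```python
import torch
import torch.nn.functional as F

def semantic_label_smoothing_loss(logits, targets, label_emb, eps=0.1, tau=0.1):
    """logits: (B, V); targets: (B,); label_emb: (V, d) label embeddings."""
    emb = F.normalize(label_emb, dim=-1)
    sim = emb @ emb.T / tau                       # (V, V) similarity logits
    sim.fill_diagonal_(float("-inf"))             # smoothing excludes the target
    soft = F.one_hot(targets, logits.size(-1)).float() * (1 - eps)
    soft = soft + eps * F.softmax(sim[targets], dim=-1)
    return -(soft * F.log_softmax(logits, dim=-1)).sum(-1).mean()
```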
2407.09297 | Learning Distances from Data with Normalizing Flows and Score Matching | Density-based distances (DBDs) offer an elegant solution to the problem of metric learning. By defining a Riemannian metric which increases with decreasing probability density, shortest paths naturally follow the data manifold and points are clustered according to the modes of the data. We show that existing methods to estimate Fermat distances, a particular choice of DBD, suffer from poor convergence in both low and high dimensions due to i) inaccurate density estimates and ii) reliance on graph-based paths which are increasingly rough in high dimensions. To address these issues, we propose learning the densities using a normalizing flow, a generative model with tractable density estimation, and employing a smooth relaxation method using a score model initialized from a graph-based proposal. Additionally, we introduce a dimension-adapted Fermat distance that exhibits more intuitive behavior when scaled to high dimensions and offers better numerical properties. Our work paves the way for practical use of density-based distances, especially in high-dimensional spaces. | http://arxiv.org/pdf/2407.09297v1 | [
"Peter Sorrenson",
"Daniel Behrend-Uriarte",
"Christoph Schnörr",
"Ullrich Köthe"
] | 2024-07-12T14:30:41Z | 2024-07-12T14:30:41Z |
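The graph-based Fermat-distance estimator that the paper identifies as problematic in high dimensions, as a sketch: raise pairwise distances to a power beta and take shortest paths over a k-NN graph:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def fermat_distances(X, beta=3.0, k=10):
    D = cdist(X, X)
    W = D ** beta                                 # density-sensitive edge costs
    # keep only each point's k nearest neighbours (self excluded at position 0)
    far = np.argsort(D, axis=1)[:, k + 1:]
    for i, idx in enumerate(far):
        W[i, idx] = 0.0                           # 0 = no edge for csgraph
    W = np.maximum(W, W.T)                        # symmetrize (union of k-NN)
    return shortest_path(W, method="D", directed=False)
```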
2312.08307 | 3DReact: Geometric deep learning for chemical reactions | Geometric deep learning models, which incorporate the relevant molecular symmetries within the neural network architecture, have considerably improved the accuracy and data efficiency of predictions of molecular properties. Building on this success, we introduce 3DReact, a geometric deep learning model to predict reaction properties from three-dimensional structures of reactants and products. We demonstrate that the invariant version of the model is sufficient for existing reaction datasets. We illustrate its competitive performance on the prediction of activation barriers on the GDB7-22-TS, Cyclo-23-TS and Proparg-21-TS datasets in different atom-mapping regimes. We show that, compared to existing models for reaction property prediction, 3DReact offers a flexible framework that exploits atom-mapping information, if available, as well as geometries of reactants and products (in an invariant or equivariant fashion). Accordingly, it performs systematically well across different datasets, atom-mapping regimes, as well as both interpolation and extrapolation tasks. | http://arxiv.org/abs/2312.08307v2 | [
"Puck van Gerwen",
"Ksenia R. Briling",
"Charlotte Bunne",
"Vignesh Ram Somnath",
"Ruben Laplaza",
"Andreas Krause",
"Clemence Corminboeuf"
] | 2024-07-12T14:15:23Z | 2023-12-13T17:26:54Z |
2403.18717 | Semi-Supervised Learning for Deep Causal Generative Models | Developing models that are capable of answering questions of the form "How would x change if y had been z?'" is fundamental to advancing medical image analysis. Training causal generative models that address such counterfactual questions, though, currently requires that all relevant variables have been observed and that the corresponding labels are available in the training data. However, clinical data may not have complete records for all patients and state of the art causal generative models are unable to take full advantage of this. We thus develop, for the first time, a semi-supervised deep causal generative model that exploits the causal relationships between variables to maximise the use of all available data. We explore this in the setting where each sample is either fully labelled or fully unlabelled, as well as the more clinically realistic case of having different labels missing for each sample. We leverage techniques from causal inference to infer missing values and subsequently generate realistic counterfactuals, even for samples with incomplete labels. | http://arxiv.org/pdf/2403.18717v2 | [
"Yasin Ibrahim",
"Hermione Warr",
"Konstantinos Kamnitsas"
] | 2024-07-12T14:13:41Z | 2024-03-27T16:06:37Z |
2407.09276 | H2O-Danube3 Technical Report | We present H2O-Danube3, a series of small language models consisting of H2O-Danube3-4B, trained on 6T tokens and H2O-Danube3-500M, trained on 4T tokens. Our models are pre-trained on high quality Web data consisting of primarily English tokens in three stages with different data mixes before final supervised tuning for chat version. The models exhibit highly competitive metrics across a multitude of academic, chat, and fine-tuning benchmarks. Thanks to its compact architecture, H2O-Danube3 can be efficiently run on a modern smartphone, enabling local inference and rapid processing capabilities even on mobile devices. We make all models openly available under Apache 2.0 license further democratizing LLMs to a wider audience economically. | http://arxiv.org/pdf/2407.09276v1 | [
"Pascal Pfeiffer",
"Philipp Singer",
"Yauhen Babakhin",
"Gabor Fodor",
"Nischay Dhankhar",
"Sri Satish Ambati"
] | 2024-07-12T14:09:40Z | 2024-07-12T14:09:40Z |
2407.09274 | Unifying Sequences, Structures, and Descriptions for Any-to-Any Protein
Generation with the Large Multimodal Model HelixProtX | Proteins are fundamental components of biological systems and can be represented through various modalities, including sequences, structures, and textual descriptions. Despite the advances in deep learning and scientific large language models (LLMs) for protein research, current methodologies predominantly focus on limited specialized tasks -- often predicting one protein modality from another. These approaches restrict the understanding and generation of multimodal protein data. In contrast, large multimodal models have demonstrated potential capabilities in generating any-to-any content like text, images, and videos, thus enriching user interactions across various domains. Integrating these multimodal model technologies into protein research offers significant promise by potentially transforming how proteins are studied. To this end, we introduce HelixProtX, a system built upon the large multimodal model, aiming to offer a comprehensive solution to protein research by supporting any-to-any protein modality generation. Unlike existing methods, it allows for the transformation of any input protein modality into any desired protein modality. The experimental results affirm the advanced capabilities of HelixProtX, not only in generating functional descriptions from amino acid sequences but also in executing critical tasks such as designing protein sequences and structures from textual descriptions. Preliminary findings indicate that HelixProtX consistently achieves superior accuracy across a range of protein-related tasks, outperforming existing state-of-the-art models. By integrating multimodal large models into protein research, HelixProtX opens new avenues for understanding protein biology, thereby promising to accelerate scientific discovery. | http://arxiv.org/pdf/2407.09274v1 | [
"Zhiyuan Chen",
"Tianhao Chen",
"Chenggang Xie",
"Yang Xue",
"Xiaonan Zhang",
"Jingbo Zhou",
"Xiaomin Fang"
] | 2024-07-12T14:03:02Z | 2024-07-12T14:03:02Z |
2407.09271 | iNeMo: Incremental Neural Mesh Models for Robust Class-Incremental
Learning | Unlike humans, it is still common practice today for vision tasks to train deep learning models only initially and on fixed datasets. A variety of approaches have recently addressed handling continual data streams. However, extending these methods to manage out-of-distribution (OOD) scenarios has not effectively been investigated. On the other hand, it has recently been shown that non-continual neural mesh models exhibit strong performance in generalizing to such OOD scenarios. To leverage this decisive property in a continual learning setting, we propose incremental neural mesh models that can be extended with new meshes over time. In addition, we present a latent space initialization strategy that enables us to allocate feature space for future unseen classes in advance and a positional regularization term that forces the features of the different classes to consistently stay in respective latent space regions. We demonstrate the effectiveness of our method through extensive experiments on the Pascal3D and ObjectNet3D datasets and show that our approach outperforms the baselines for classification by $2-6\%$ in the in-domain and by $6-50\%$ in the OOD setting. Our work also presents the first incremental learning approach for pose estimation. Our code and model can be found at https://github.com/Fischer-Tom/iNeMo. | http://arxiv.org/pdf/2407.09271v1 | [
"Tom Fischer",
"Yaoyao Liu",
"Artur Jesslen",
"Noor Ahmed",
"Prakhar Kaushik",
"Angtian Wang",
"Alan Yuille",
"Adam Kortylewski",
"Eddy Ilg"
] | 2024-07-12T13:57:49Z | 2024-07-12T13:57:49Z |
2402.05435 | GPT-4 Generated Narratives of Life Events using a Structured Narrative
Prompt: A Validation Study | Large Language Models (LLMs) play a pivotal role in generating vast arrays of narratives, facilitating a systematic exploration of their effectiveness for communicating life events in narrative form. In this study, we employ a zero-shot structured narrative prompt to generate 24,000 narratives using OpenAI's GPT-4. From this dataset, we manually classify 2,880 narratives and evaluate their validity in conveying birth, death, hiring, and firing events. Remarkably, 87.43% of the narratives sufficiently convey the intention of the structured prompt. To automate the identification of valid and invalid narratives, we train and validate nine Machine Learning models on the classified datasets. Leveraging these models, we extend our analysis to predict the classifications of the remaining 21,120 narratives. All the ML models excelled at classifying valid narratives as valid, but experienced challenges at simultaneously classifying invalid narratives as invalid. Our findings not only advance the study of LLM capabilities, limitations, and validity but also offer practical insights for narrative generation and natural language processing applications. | http://arxiv.org/pdf/2402.05435v2 | [
"Christopher J. Lynch",
"Erik Jensen",
"Madison H. Munro",
"Virginia Zamponi",
"Joseph Martinez",
"Kevin O'Brien",
"Brandon Feldhaus",
"Katherine Smith",
"Ann Marie Reinhold",
"Ross Gore"
] | 2024-07-12T13:46:47Z | 2024-02-08T06:20:01Z |
2407.09251 | Deep Adversarial Defense Against Multilevel-Lp Attacks | Deep learning models have shown considerable vulnerability to adversarial attacks, particularly as attacker strategies become more sophisticated. While traditional adversarial training (AT) techniques offer some resilience, they often focus on defending against a single type of attack, e.g., the $\ell_\infty$-norm attack, which can fail for other types. This paper introduces a computationally efficient multilevel $\ell_p$ defense, called the Efficient Robust Mode Connectivity (EMRC) method, which aims to enhance a deep learning model's resilience against multiple $\ell_p$-norm attacks. Similar to analytical continuation approaches used in continuous optimization, the method blends two $p$-specific adversarially optimal models, the $\ell_1$- and $\ell_\infty$-norm AT solutions, to provide good adversarial robustness for a range of $p$. We present experiments demonstrating that our approach performs better on various attacks as compared to AT-$\ell_\infty$, E-AT, and MSD, for datasets/architectures including: CIFAR-10, CIFAR-100 / PreResNet110, WideResNet, ViT-Base. | http://arxiv.org/pdf/2407.09251v1 | [
"Ren Wang",
"Yuxuan Li",
"Alfred Hero"
] | 2024-07-12T13:30:00Z | 2024-07-12T13:30:00Z |
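A sketch of the model-blending step: interpolate the two p-specific AT solutions in weight space. EMRC connects them along a learned low-loss curve rather than this naive linear path; the function also assumes all state-dict entries are float tensors:

```python
import copy
import torch

def blend_models(model_l1, model_linf, alpha=0.5):
    blended = copy.deepcopy(model_l1)
    sd1, sd2 = model_l1.state_dict(), model_linf.state_dict()
    blended.load_state_dict({
        k: (1 - alpha) * sd1[k] + alpha * sd2[k] for k in sd1
    })
    return blended  # sweep alpha to trade off l1 vs. l_inf robustness
```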
2407.09250 | FedsLLM: Federated Split Learning for Large Language Models over
Communication Networks | Addressing the challenges of deploying large language models in wireless communication networks, this paper combines low-rank adaptation technology (LoRA) with the splitfed learning framework to propose the federated split learning for large language models (FedsLLM) framework. The method introduced in this paper utilizes LoRA technology to reduce processing loads by dividing the network into client subnetworks and server subnetworks. It leverages a federated server to aggregate and update client models. As the training data are transmitted through a wireless network between clients and both main and federated servers, the training delay is determined by the learning accuracy and the allocation of communication bandwidth. This paper models the minimization of the training delay by integrating computation and communication optimization, simplifying the optimization problem into a convex problem to find the optimal solution. Additionally, it presents a lemma that describes the precise solutions to this problem. Simulation results demonstrate that the proposed optimization algorithm reduces delays by an average of 47.63% compared to unoptimized scenarios. | http://arxiv.org/pdf/2407.09250v1 | [
"Kai Zhao",
"Zhaohui Yang",
"Chongwen Huang",
"Xiaoming Chen",
"Zhaoyang Zhang"
] | 2024-07-12T13:23:54Z | 2024-07-12T13:23:54Z |
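FedsLLM leans on LoRA to keep the client-side update small. Below is a minimal sketch of a LoRA-adapted linear layer, independent of the split/federated plumbing; the rank, scaling, and initialization are common illustrative choices, not values from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = W0 x + (alpha/r) * B A x, with the pretrained W0 frozen and only the
    low-rank factors A, B trainable (the part a client would update and upload)."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)                          # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no-op at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T
```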
2406.18332 | Early Classification of Time Series: Taxonomy and Benchmark | In many situations, the measurements of a studied phenomenon are provided sequentially, and the prediction of its class needs to be made as early as possible so as not to incur too high a time penalty, but not too early and risk paying the cost of misclassification. This problem has been particularly studied in the case of time series, and is known as Early Classification of Time Series (ECTS). Although it has been the subject of a growing body of literature, there is still a lack of a systematic, shared evaluation protocol to compare the relative merits of the various existing methods. This document begins by situating these methods within a principle-based taxonomy. It defines dimensions for organizing their evaluation, and then reports the results of a very extensive set of experiments along these dimensions involving nine state-of-the-art ECTS algorithms. In addition, these and other experiments can be carried out using an open-source library in which most of the existing ECTS algorithms have been implemented (see https://github.com/ML-EDM/ml_edm). | http://arxiv.org/pdf/2406.18332v2 | [
"Aurélien Renault",
"Alexis Bondu",
"Antoine Cornuéjols",
"Vincent Lemaire"
] | 2024-07-12T13:16:16Z | 2024-06-26T13:21:00Z |
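As a concrete instance of the trigger-based methods such a taxonomy organizes, here is a minimal confidence-threshold early classifier: one probabilistic classifier per prefix length, stopping as soon as the predicted confidence clears a threshold. This is the simplest baseline family; the benchmarked algorithms use more principled cost-aware triggers.

```python
import numpy as np

def early_classify(series, classifiers, threshold=0.9):
    """Confidence-threshold ECTS trigger (illustrative baseline).
    classifiers[t] is assumed to be a fitted probabilistic classifier
    trained on series prefixes of length t + 1. Returns (label, trigger_time)."""
    for t in range(len(series)):
        proba = classifiers[t].predict_proba(series[: t + 1].reshape(1, -1))[0]
        if proba.max() >= threshold:               # confident enough: stop early
            return int(np.argmax(proba)), t + 1
    return int(np.argmax(proba)), len(series)      # forced decision at the end
```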
2402.03625 | Convex Relaxations of ReLU Neural Networks Approximate Global Optima in
Polynomial Time | In this paper, we study the optimality gap between two-layer ReLU networks regularized with weight decay and their convex relaxations. We show that when the training data is random, the relative optimality gap between the original problem and its relaxation can be bounded by a factor of $O(\sqrt{\log n})$, where $n$ is the number of training samples. A simple application leads to a tractable polynomial-time algorithm that is guaranteed to solve the original non-convex problem up to a logarithmic factor. Moreover, under mild assumptions, we show that local gradient methods converge to a point with low training loss with high probability. Our result is an exponential improvement compared to existing results and sheds new light on understanding why local gradient methods work well. | http://arxiv.org/pdf/2402.03625v3 | [
"Sungyoon Kim",
"Mert Pilanci"
] | 2024-07-12T12:55:53Z | 2024-02-06T01:29:35Z |
2307.05284 | On the Need of a Modeling Language for Distribution Shifts:
Illustrations on Tabular Datasets | Different distribution shifts require different interventions, and algorithms must be grounded in the specific shifts they address. However, methodological development for robust algorithms typically relies on structural assumptions that lack empirical validation. Advocating for an empirically grounded data-driven approach to research, we build an empirical testbed comprising natural shifts across 5 tabular datasets and 60,000 method configurations encompassing imbalanced learning and distributionally robust optimization (DRO) methods. We find $Y|X$-shifts are most prevalent on our testbed, in stark contrast to the heavy focus on $X$ (covariate)-shifts in the ML literature. The performance of robust algorithms varies significantly over shift types, and is no better than that of vanilla methods. To understand why, we conduct an in-depth empirical analysis of DRO methods and find that although often neglected by researchers, implementation details -- such as the choice of underlying model class (e.g., XGBoost) and hyperparameter selection -- have a bigger impact on performance than the ambiguity set or its radius. To further bridge the gap between methodological research and practice, we design case studies that illustrate how such a data-driven, inductive understanding of distribution shifts can enhance both data-centric and algorithmic interventions. | http://arxiv.org/pdf/2307.05284v3 | [
"Jiashuo Liu",
"Tianyu Wang",
"Peng Cui",
"Hongseok Namkoong"
] | 2024-07-12T12:54:37Z | 2023-07-11T14:25:10Z |
2402.09303 | Comparing supervised learning dynamics: Deep neural networks match human
data efficiency but show a generalisation lag | Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end-result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process of how these representations emerge -- that is, the behavioral changes and intermediate stages observed during the acquisition -- is less often directly and empirically compared. Here we report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to immediately acquire generalizable representations without a preliminary phase of learning training set-specific information that is only later transferred to novel data. | http://arxiv.org/pdf/2402.09303v3 | [
"Lukas S. Huber",
"Fred W. Mast",
"Felix A. Wichmann"
] | 2024-07-12T12:47:19Z | 2024-02-14T16:47:20Z |
1912.06931 | Asymmetric GANs for Image-to-Image Translation | Existing models for unsupervised image translation with Generative Adversarial Networks (GANs) can learn the mapping from the source domain to the target domain using a cycle-consistency loss. However, these methods always adopt a symmetric network architecture to learn both forward and backward cycles. Because of the task complexity and cycle input difference between the source and target domains, the inequality in bidirectional forward-backward cycle translations is significant and the amount of information between two domains is different. In this paper, we analyze the limitation of existing symmetric GANs in asymmetric translation tasks, and propose an AsymmetricGAN model with both translation and reconstruction generators of unequal sizes and a different parameter-sharing strategy to adapt to the asymmetric need in both unsupervised and supervised image translation tasks. Moreover, the training stage of existing methods has the common problem of model collapse that degrades the quality of the generated images, thus we explore different optimization losses for better training of AsymmetricGAN, achieving image translation with higher consistency and better stability. Extensive experiments on both supervised and unsupervised generative tasks with 8 datasets show that AsymmetricGAN achieves superior model capacity and better generation performance compared with existing GANs. To the best of our knowledge, we are the first to investigate the asymmetric GAN structure on both unsupervised and supervised image translation tasks. | http://arxiv.org/pdf/1912.06931v2 | [
"Hao Tang",
"Nicu Sebe"
] | 2024-07-12T12:34:52Z | 2019-12-14T21:24:41Z |
2407.09216 | A Fair Ranking and New Model for Panoptic Scene Graph Generation | In panoptic scene graph generation (PSGG), models retrieve interactions between objects in an image which are grounded by panoptic segmentation masks. Previous evaluations on panoptic scene graphs have been subject to an erroneous evaluation protocol where multiple masks for the same object can lead to multiple relation distributions per mask-mask pair. This can be exploited to increase the final score. We correct this flaw and provide a fair ranking over a wide range of existing PSGG models. The observed scores for existing methods increase by up to 7.4 mR@50 for all two-stage methods, while dropping by up to 19.3 mR@50 for all one-stage methods, highlighting the importance of a correct evaluation. Contrary to recent publications, we show that existing two-stage methods are competitive to one-stage methods. Building on this, we introduce the Decoupled SceneFormer (DSFormer), a novel two-stage model that outperforms all existing scene graph models by a large margin of +11 mR@50 and +10 mNgR@50 on the corrected evaluation, thus setting a new SOTA. As a core design principle, DSFormer encodes subject and object masks directly into feature space. | http://arxiv.org/pdf/2407.09216v1 | [
"Julian Lorenz",
"Alexander Pest",
"Daniel Kienzle",
"Katja Ludwig",
"Rainer Lienhart"
] | 2024-07-12T12:28:08Z | 2024-07-12T12:28:08Z |
2407.09212 | Generating SROI^{-} Ontologies via Knowledge Graph Query Embedding
Learning | Query embedding approaches answer complex logical queries over incomplete knowledge graphs (KGs) by computing and operating on low-dimensional vector representations of entities, relations, and queries. However, current query embedding models heavily rely on excessively parameterized neural networks and cannot explain the knowledge learned from the graph. We propose a novel query embedding method, AConE, which explains the knowledge learned from the graph in the form of SROI^{-} description logic axioms while being more parameter-efficient than most existing approaches. AConE associates queries to an SROI^{-} description logic concept. Every SROI^{-} concept is embedded as a cone in complex vector space, and each SROI^{-} relation is embedded as a transformation that rotates and scales cones. We show theoretically that AConE can learn SROI^{-} axioms, and defines an algebra whose operations correspond one-to-one to SROI^{-} description logic concept constructs. Our empirical study on multiple query datasets shows that AConE achieves superior results over previous baselines with fewer parameters. Notably on the WN18RR dataset, AConE achieves significant improvement over baseline models. We provide comprehensive analyses showing that the capability to represent axioms positively impacts the results of query answering. | http://arxiv.org/pdf/2407.09212v1 | [
"Yunjie He",
"Daniel Hernandez",
"Mojtaba Nayyeri",
"Bo Xiong",
"Yuqicheng Zhu",
"Evgeny Kharlamov",
"Steffen Staab"
] | 2024-07-12T12:20:39Z | 2024-07-12T12:20:39Z |
2310.05227 | Physics-aware Machine Learning Revolutionizes Scientific Paradigm for
Machine Learning and Process-based Hydrology | Accurate hydrological understanding and water cycle prediction are crucial for addressing scientific and societal challenges associated with the management of water resources, particularly under the dynamic influence of anthropogenic climate change. Existing reviews predominantly concentrate on the development of machine learning (ML) in this field, yet there is a clear distinction between hydrology and ML as separate paradigms. Here, we introduce physics-aware ML as a transformative approach to overcome the perceived barrier and revolutionize both fields. Specifically, we present a comprehensive review of the physics-aware ML methods, building a structured community (PaML) of existing methodologies that integrate prior physical knowledge or physics-based modeling into ML. We systematically analyze these PaML methodologies with respect to four aspects: physical data-guided ML, physics-informed ML, physics-embedded ML, and physics-aware hybrid learning. PaML facilitates ML-aided hypotheses, accelerating insights from big data and fostering scientific discoveries. We first conduct a systematic review of hydrology in PaML, including rainfall-runoff hydrological processes and hydrodynamic processes, and highlight the most promising and challenging directions for different objectives and PaML methods. Finally, a new PaML-based hydrology platform, termed HydroPML, is released as a foundation for hydrological applications. HydroPML enhances the explainability and causality of ML and lays the groundwork for the digital water cycle's realization. The HydroPML platform is publicly available at https://hydropml.github.io/. | http://arxiv.org/pdf/2310.05227v5 | [
"Qingsong Xu",
"Yilei Shi",
"Jonathan Bamber",
"Ye Tuo",
"Ralf Ludwig",
"Xiao Xiang Zhu"
] | 2024-07-12T12:05:28Z | 2023-10-08T16:48:29Z |
2309.03774 | Deep Learning Safety Concerns in Automated Driving Perception | Recent advances in the field of deep learning and impressive performance of deep neural networks (DNNs) for perception have resulted in an increased demand for their use in automated driving (AD) systems. The safety of such systems is of utmost importance and thus requires considering the unique properties of DNNs. In order to achieve safety of AD systems with DNN-based perception components in a systematic and comprehensive manner, so-called safety concerns have been introduced as a suitable structuring element. On the one hand, the concept of safety concerns is -- by design -- well aligned to existing standards relevant for safety of AD systems such as ISO 21448 (SOTIF). On the other hand, it has already inspired several academic publications and upcoming standards on AI safety such as ISO PAS 8800. While the concept of safety concerns has been previously introduced, this paper extends and refines it, leveraging feedback from various domain and safety experts in the field. In particular, this paper introduces an additional categorization for a better understanding as well as enabling cross-functional teams to jointly address the concerns. | http://arxiv.org/pdf/2309.03774v3 | [
"Stephanie Abrecht",
"Alexander Hirsch",
"Shervin Raafatnia",
"Matthias Woehrle"
] | 2024-07-12T11:46:08Z | 2023-09-07T15:25:47Z |
2210.03919 | CLIP-PAE: Projection-Augmentation Embedding to Extract Relevant Features
for a Disentangled, Interpretable, and Controllable Text-Guided Face
Manipulation | Recently introduced Contrastive Language-Image Pre-Training (CLIP) bridges images and text by embedding them into a joint latent space. This opens the door to ample literature that aims to manipulate an input image by providing a textual explanation. However, due to the discrepancy between image and text embeddings in the joint space, using text embeddings as the optimization target often introduces undesired artifacts in the resulting images. Disentanglement, interpretability, and controllability are also hard to guarantee for manipulation. To alleviate these problems, we propose to define corpus subspaces spanned by relevant prompts to capture specific image characteristics. We introduce CLIP Projection-Augmentation Embedding (PAE) as an optimization target to improve the performance of text-guided image manipulation. Our method is a simple and general paradigm that can be easily computed and adapted, and smoothly incorporated into any CLIP-based image manipulation algorithm. To demonstrate the effectiveness of our method, we conduct several theoretical and empirical studies. As a case study, we utilize the method for text-guided semantic face editing. We quantitatively and qualitatively demonstrate that PAE facilitates a more disentangled, interpretable, and controllable image manipulation with state-of-the-art quality and accuracy. Project page: https://chenliang-zhou.github.io/CLIP-PAE/. | http://arxiv.org/pdf/2210.03919v5 | [
"Chenliang Zhou",
"Fangcheng Zhong",
"Cengiz Oztireli"
] | 2024-07-12T11:44:29Z | 2022-10-08T05:12:25Z |
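The projection step that gives PAE its name can be sketched as an orthogonal projection onto the span of the corpus prompt embeddings. The sketch below uses a pseudo-inverse projector and random stand-in embeddings; the augmentation component and CLIP-specific details are omitted, and the basis construction is an assumption.

```python
import torch

def project_onto_corpus(target, corpus):
    """Orthogonal projection of `target` (d,) onto the span of the rows of
    `corpus` (k, d): proj = A @ pinv(A) @ target, with A = corpus.T."""
    basis = corpus.T                                  # (d, k): columns span the subspace
    return basis @ (torch.linalg.pinv(basis) @ target)

# Hypothetical usage: rows of `prompts_emb` would be CLIP text embeddings of
# relevant prompts; random tensors stand in here so the sketch runs standalone.
prompts_emb = torch.randn(5, 512)
target_emb = torch.randn(512)
proj = project_onto_corpus(target_emb, prompts_emb)
```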
2309.04820 | ABC Easy as 123: A Blind Counter for Exemplar-Free Multi-Class
Class-agnostic Counting | Class-agnostic counting methods enumerate objects of an arbitrary class, providing tremendous utility in many fields. Prior works have limited usefulness as they require either a set of examples of the type to be counted or that the query image contains only a single type of object. A significant factor in these shortcomings is the lack of a dataset to properly address counting in settings with more than one kind of object present. To address these issues, we propose the first Multi-class, Class-Agnostic Counting dataset (MCAC) and A Blind Counter (ABC123), a method that can count multiple types of objects simultaneously without using examples of the type during training or inference. ABC123 introduces a new paradigm where instead of requiring exemplars to guide the enumeration, examples are found after the counting stage to help a user understand the generated outputs. We show that ABC123 outperforms contemporary methods on MCAC without needing human-in-the-loop annotations. We also show that this performance transfers to FSC-147, the standard class-agnostic counting dataset. MCAC is available at MCAC.active.vision and ABC123 is available at ABC123.active.vision. | http://arxiv.org/pdf/2309.04820v2 | [
"Michael A. Hobley",
"Victor A. Prisacariu"
] | 2024-07-12T11:41:33Z | 2023-09-09T15:18:46Z |
2405.06649 | ProLLM: Protein Chain-of-Thoughts Enhanced LLM for Protein-Protein
Interaction Prediction | The prediction of protein-protein interactions (PPIs) is crucial for understanding biological functions and diseases. Previous machine learning approaches to PPI prediction mainly focus on direct physical interactions, ignoring the broader context of nonphysical connections through intermediate proteins, thus limiting their effectiveness. The emergence of Large Language Models (LLMs) provides a new opportunity for addressing this complex biological challenge. By transforming structured data into natural language prompts, we can map the relationships between proteins into texts. This approach allows LLMs to identify indirect connections between proteins, tracing the path from upstream to downstream. Therefore, we propose a novel framework ProLLM that employs an LLM tailored for PPI for the first time. Specifically, we propose Protein Chain of Thought (ProCoT), which replicates the biological mechanism of signaling pathways as natural language prompts. ProCoT considers a signaling pathway as a protein reasoning process, which starts from upstream proteins and passes through several intermediate proteins to transmit biological signals to downstream proteins. Thus, we can use ProCoT to predict the interaction between upstream proteins and downstream proteins. The training of ProLLM employs the ProCoT format, which enhances the model's understanding of complex biological problems. In addition to ProCoT, this paper also contributes to the exploration of embedding replacement of protein sites in natural language prompts, and instruction fine-tuning in protein knowledge datasets. We demonstrate the efficacy of ProLLM through rigorous validation against benchmark datasets, showing significant improvement over existing methods in terms of prediction accuracy and generalizability. The code is available at: https://github.com/MingyuJ666/ProLLM. | http://arxiv.org/pdf/2405.06649v2 | [
"Mingyu Jin",
"Haochen Xue",
"Zhenting Wang",
"Boming Kang",
"Ruosong Ye",
"Kaixiong Zhou",
"Mengnan Du",
"Yongfeng Zhang"
] | 2024-07-12T11:38:56Z | 2024-03-30T05:32:42Z |
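The ProCoT idea of verbalizing a signaling pathway can be illustrated with a simple prompt builder. The wording, fields, and protein names below are hypothetical, not the paper's actual template.

```python
def procot_prompt(upstream, intermediates, downstream):
    """Format a signaling path as a chain-of-thought style prompt (illustrative only)."""
    hops = zip([upstream] + intermediates, intermediates + [downstream])
    chain = " ".join(f"{a} transmits a signal to {b}." for a, b in hops)
    return (f"Consider the signaling pathway starting at {upstream}. {chain} "
            f"Question: does {upstream} indirectly interact with {downstream}?")

# Example with hypothetical proteins:
print(procot_prompt("EGFR", ["RAS", "RAF", "MEK"], "ERK"))
```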
2407.09186 | Variational Inference via Smoothed Particle Hydrodynamics | A new variational inference method, SPH-ParVI, based on smoothed particle hydrodynamics (SPH), is proposed for sampling partially known densities (e.g. up to a constant) or sampling using gradients. SPH-ParVI simulates the flow of a fluid under external effects driven by the target density; transient or steady state of the fluid approximates the target density. The continuum fluid is modelled as an interacting particle system (IPS) via SPH, where each particle carries smoothed properties, interacts and evolves as per the Navier-Stokes equations. This mesh-free, Lagrangian simulation method offers fast, flexible, scalable and deterministic sampling and inference for a class of probabilistic models such as those encountered in Bayesian inference and generative modelling. | http://arxiv.org/pdf/2407.09186v1 | [
"Yongchao Huang"
] | 2024-07-12T11:38:41Z | 2024-07-12T11:38:41Z |
2407.09173 | Conformal Inductive Graph Neural Networks | Conformal prediction (CP) transforms any model's output into prediction sets guaranteed to include (cover) the true label. CP requires exchangeability, a relaxation of the i.i.d. assumption, to obtain a valid distribution-free coverage guarantee. This makes it directly applicable to transductive node-classification. However, conventional CP cannot be applied in inductive settings due to the implicit shift in the (calibration) scores caused by message passing with the new nodes. We fix this issue for both cases of node and edge-exchangeable graphs, recovering the standard coverage guarantee without sacrificing statistical efficiency. We further prove that the guarantee holds independently of the prediction time, e.g. upon arrival of a new node/edge or at any subsequent moment. | http://arxiv.org/pdf/2407.09173v1 | [
"Soroush H. Zargarbashi",
"Aleksandar Bojchevski"
] | 2024-07-12T11:12:49Z | 2024-07-12T11:12:49Z |
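For reference, here is the standard split conformal prediction recipe that inductive extensions like the above build on: calibration scores from held-out nodes, a finite-sample-corrected quantile, and thresholded prediction sets. The score function (one minus the softmax probability of the true class) is one common choice, not necessarily the paper's.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split CP: score s_i = 1 - p_i(true class); coverage >= 1 - alpha
    under exchangeability of calibration and test points."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)     # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]  # label indices kept per point
```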
2407.09167 | SE(3)-bi-equivariant Transformers for Point Cloud Assembly | Given a pair of point clouds, the goal of assembly is to recover a rigid transformation that aligns one point cloud to the other. This task is challenging because the point clouds may be non-overlapped, and they may have arbitrary initial positions. To address these difficulties, we propose a method, called SE(3)-bi-equivariant transformer (BITR), based on the SE(3)-bi-equivariance prior of the task: it guarantees that when the inputs are rigidly perturbed, the output will transform accordingly. Due to its equivariance property, BITR can not only handle non-overlapped PCs, but also guarantee robustness against initial positions. Specifically, BITR first extracts features of the inputs using a novel $SE(3) times SE(3)$-transformer, and then projects the learned feature to group SE(3) as the output. Moreover, we theoretically show that swap and scale equivariances can be incorporated into BITR, thus it further guarantees stable performance under scaling and swapping the inputs. We experimentally show the effectiveness of BITR in practical tasks. | http://arxiv.org/pdf/2407.09167v1 | [
"Ziming Wang",
"Rebecka Jörnsten"
] | 2024-07-12T11:01:28Z | 2024-07-12T11:01:28Z |
2407.09165 | Robust Yet Efficient Conformal Prediction Sets | Conformal prediction (CP) can convert any model's output into prediction sets guaranteed to include the true label with any user-specified probability. However, same as the model itself, CP is vulnerable to adversarial test examples (evasion) and perturbed calibration data (poisoning). We derive provably robust sets by bounding the worst-case change in conformity scores. Our tighter bounds lead to more efficient sets. We cover both continuous and discrete (sparse) data and our guarantees work both for evasion and poisoning attacks (on both features and labels). | http://arxiv.org/pdf/2407.09165v1 | [
"Soroush H. Zargarbashi",
"Mohammad Sadegh Akhondzadeh",
"Aleksandar Bojchevski"
] | 2024-07-12T10:59:44Z | 2024-07-12T10:59:44Z |
2407.09162 | Exploring State Space and Reasoning by Elimination in Tsetlin Machine | The Tsetlin Machine (TM) has gained significant attention in Machine Learning (ML). By employing logical fundamentals, it facilitates pattern learning and representation, offering an alternative approach for developing comprehensible Artificial Intelligence (AI) with a specific focus on pattern classification in the form of conjunctive clauses. In the domain of Natural Language Processing (NLP), TM is utilised to construct word embedding and describe target words using clauses. To enhance the descriptive capacity of these clauses, we study the concept of Reasoning by Elimination (RbE) in clauses' formulation, which involves incorporating feature negations to provide a more comprehensive representation. In more detail, this paper employs the Tsetlin Machine Auto-Encoder (TM-AE) architecture to generate dense word vectors, aiming at capturing contextual information by extracting feature-dense vectors for a given vocabulary. Thereafter, the principle of RbE is explored to improve descriptivity and optimise the performance of the TM. Specifically, the specificity parameter s and the voting margin parameter T are leveraged to regulate feature distribution in the state space, resulting in a dense representation of information for each clause. In addition, we investigate the state spaces of TM-AE, especially for the forgotten/excluded features. Empirical investigations on artificially generated data, the IMDB dataset, and the 20 Newsgroups dataset showcase the robustness of the TM, with accuracy reaching 90.62% for the IMDB. | http://arxiv.org/pdf/2407.09162v1 | [
"Ahmed K. Kadhim",
"Ole-Christoffer Granmo",
"Lei Jiao",
"Rishad Shafik"
] | 2024-07-12T10:58:01Z | 2024-07-12T10:58:01Z |
2407.09157 | Movie Recommendation with Poster Attention via Multi-modal Transformer
Feature Fusion | Pre-trained models learn general representations from large datasets, which can be fine-tuned for specific tasks to significantly reduce training time. Pre-trained models like generative pretrained transformers (GPT), bidirectional encoder representations from transformers (BERT), and vision transformers (ViT) have become a cornerstone of current research in machine learning. This study proposes a multi-modal movie recommendation system by extracting features from the well-designed poster of each movie and its narrative text description. This system uses the BERT model to extract the information of the text modality, the ViT model to extract the information of the poster/image modality, and the Transformer architecture for feature fusion of all modalities to predict users' preferences. The integration of pre-trained foundational models with some smaller data sets in downstream applications captures multi-modal content features in a more comprehensive manner, thereby providing more accurate recommendations. The efficiency of the proof-of-concept model is verified on the standard MovieLens 100K and 1M benchmark datasets. The prediction accuracy of user ratings is enhanced in comparison to the baseline algorithm, thereby demonstrating the potential of this cross-modal algorithm to be applied for movie or video recommendation. | http://arxiv.org/pdf/2407.09157v1 | [
"Linhan Xia",
"Yicheng Yang",
"Ziou Chen",
"Zheng Yang",
"Shengxin Zhu"
] | 2024-07-12T10:44:51Z | 2024-07-12T10:44:51Z |
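The fusion stage described above can be sketched compactly: treat the pre-extracted text and poster feature vectors as a two-token sequence and let a Transformer encoder attend across them before a rating head. Dimensions, pooling, and the regression head below are illustrative assumptions; in the paper the features would come from BERT and ViT encoders.

```python
import torch
import torch.nn as nn

class PosterAttentionFusion(nn.Module):
    """Fuse one text feature and one image feature with self-attention,
    then regress a rating (a minimal sketch, not the paper's exact model)."""
    def __init__(self, d_model=768, nhead=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, text_feat, image_feat):                 # each: (batch, d_model)
        tokens = torch.stack([text_feat, image_feat], dim=1)  # (batch, 2, d_model)
        fused = self.fusion(tokens).mean(dim=1)               # average the two modalities
        return self.head(fused).squeeze(-1)

# Usage with stand-in features (hypothetically from frozen BERT/ViT encoders):
model = PosterAttentionFusion()
rating = model(torch.randn(4, 768), torch.randn(4, 768))
```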
2402.06963 | Tree Ensembles for Contextual Bandits | We propose a novel framework for contextual multi-armed bandits based on tree ensembles. Our framework integrates two widely used bandit methods, Upper Confidence Bound and Thompson Sampling, for both standard and combinatorial settings. We demonstrate the effectiveness of our framework via several experimental studies, employing both XGBoost and random forest, two popular tree ensemble methods. Compared to state-of-the-art methods based on decision trees and neural networks, our methods exhibit superior performance in terms of both regret minimization and computational runtime, when applied to benchmark datasets and the real-world application of navigation over road networks. | http://arxiv.org/pdf/2402.06963v2 | [
"Hannes Nilsson",
"Rikard Johansson",
"Niklas Åkerblom",
"Morteza Haghir Chehreghani"
] | 2024-07-12T10:40:08Z | 2024-02-10T14:36:31Z |
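One way to realize the UCB variant of such a framework is to fit one forest per arm and use the spread across trees as an exploration bonus. The sketch below is illustrative; the bonus form and hyperparameters are assumptions, not the paper's estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor  # one fitted forest per arm

def ucb_select(context, forests, beta=1.0):
    """Pick the arm maximizing mean + beta * std, where mean and std are taken
    across the trees of each arm's forest (an illustrative uncertainty proxy)."""
    scores = []
    for forest in forests:
        per_tree = np.array([tree.predict(context.reshape(1, -1))[0]
                             for tree in forest.estimators_])
        scores.append(per_tree.mean() + beta * per_tree.std())
    return int(np.argmax(scores))

# Usage sketch: each forest would be refit periodically on that arm's
# observed (context, reward) pairs before calling ucb_select.
```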
2402.14522 | Towards Unified Task Embeddings Across Multiple Models: Bridging the Gap
for Prompt-Based Large Language Models and Beyond | Task embedding, a meta-learning technique that captures task-specific information, has gained popularity, especially in areas such as multi-task learning, model editing, and interpretability. However, it faces challenges with the emergence of prompt-guided Large Language Models (LLMs) operating in a gradient-free manner. Existing task embedding methods rely on fine-tuned, task-specific language models, which hinders the adaptability of task embeddings across diverse models, especially prompt-based LLMs. To harness the potential of task embeddings in the era of LLMs, we propose a framework for unified task embeddings (FUTE), harmonizing task embeddings from various models, including smaller language models and LLMs with varied prompts, within a single vector space. Such uniformity enables comparison and analysis of similarities amongst different models, broadening the scope and utility of existing task embedding methods in multi-model scenarios, while maintaining their performance comparable to architecture-specific methods. | http://arxiv.org/pdf/2402.14522v2 | [
"Xinyu Wang",
"Hainiu Xu",
"Lin Gui",
"Yulan He"
] | 2024-07-12T10:39:28Z | 2024-02-22T13:13:31Z |
2405.04437 | vAttention: Dynamic Memory Management for Serving LLMs without
PagedAttention | Efficient management of GPU memory is essential for high throughput LLM inference. Prior systems reserved KV-cache memory ahead of time, which resulted in wasted capacity due to internal fragmentation. Inspired by demand paging, vLLM proposed PagedAttention to enable dynamic memory allocation for KV-cache. This approach eliminates fragmentation and improves serving throughput. However, to be able to allocate physical memory dynamically, PagedAttention changes the layout of KV-cache from contiguous virtual memory to non-contiguous virtual memory. As a consequence, one needs to rewrite the attention kernels to support paging, and implement a memory manager in the serving framework. This results in both performance and programming overheads, as well as portability challenges in adopting state-of-the-art attention kernels. In this paper, we propose vAttention, a new approach for dynamic KV-cache memory management. In contrast to PagedAttention, vAttention stores KV-cache in contiguous virtual memory and leverages OS support for on-demand allocation of physical memory. vAttention thus enables one to use state-of-the-art attention kernels out-of-the-box by adding support for dynamic allocation of physical memory without having to re-write their code. We implement vAttention in the vLLM serving stack to show that it also helps improve decode throughput by up to 1.99x over vLLM, and the end-to-end serving throughput by up to 1.22x and 1.29x, compared to using the state-of-the-art PagedAttention based kernels of FlashAttention and FlashInfer. | http://arxiv.org/pdf/2405.04437v2 | [
"Ramya Prabhu",
"Ajay Nayak",
"Jayashree Mohan",
"Ramachandran Ramjee",
"Ashish Panwar"
] | 2024-07-12T10:33:31Z | 2024-05-07T16:00:32Z |
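The contract vAttention relies on, a contiguous virtual view with physical memory committed on demand, can be modeled with a toy structure. The Python sketch below only illustrates the idea; the real system uses CUDA virtual-memory APIs, which this deliberately does not imitate.

```python
class DemandPagedKVCache:
    """Toy model of the vAttention contract: token indices stay contiguous
    (the virtual view), but physical blocks are only allocated the first
    time a block is touched (demand paging)."""
    def __init__(self, max_tokens, block_tokens=128):
        self.block_tokens = block_tokens
        self.blocks = [None] * (max_tokens // block_tokens)  # reserved, not committed

    def write(self, token_idx, kv):
        b, off = divmod(token_idx, self.block_tokens)
        if self.blocks[b] is None:                # "commit physical memory" on first touch
            self.blocks[b] = [None] * self.block_tokens
        self.blocks[b][off] = kv

    def read(self, token_idx):
        b, off = divmod(token_idx, self.block_tokens)
        return self.blocks[b][off]
```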
2407.09150 | Evaluating the Adversarial Robustness of Semantic Segmentation: Trying
Harder Pays Off | Machine learning models are vulnerable to tiny adversarial input perturbations optimized to cause a very large output error. To measure this vulnerability, we need reliable methods that can find such adversarial perturbations. For image classification models, evaluation methodologies have emerged that have stood the test of time. However, we argue that in the area of semantic segmentation, a good approximation of the sensitivity to adversarial perturbations requires significantly more effort than what is currently considered satisfactory. To support this claim, we re-evaluate a number of well-known robust segmentation models in an extensive empirical study. We propose new attacks and combine them with the strongest attacks available in the literature. We also analyze the sensitivity of the models in fine detail. The results indicate that most of the state-of-the-art models have a dramatically larger sensitivity to adversarial perturbations than previously reported. We also demonstrate a size-bias: small objects are often more easily attacked, even if the large objects are robust, a phenomenon not revealed by current evaluation metrics. Our results also demonstrate that a diverse set of strong attacks is necessary, because different models are often vulnerable to different attacks. | http://arxiv.org/pdf/2407.09150v1 | [
"Levente Halmosi",
"Bálint Mohos",
"Márk Jelasity"
] | 2024-07-12T10:32:53Z | 2024-07-12T10:32:53Z |
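The basic building block that evaluations like this strengthen and diversify is a PGD-style attack adapted to dense prediction: maximize per-pixel cross-entropy within an L-infinity ball. A minimal sketch follows, with standard step sizes; it is not one of the paper's proposed attacks.

```python
import torch
import torch.nn.functional as F

def pgd_segmentation(model, x, y, eps=8/255, step=2/255, iters=10):
    """PGD on a segmentation model: logits (B,C,H,W), labels (B,H,W)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)  # mean per-pixel cross-entropy
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                     # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                              # stay a valid image
    return x_adv.detach()
```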
2407.09141 | Accuracy is Not All You Need | When Large Language Models (LLMs) are compressed using techniques such as quantization, the predominant way to demonstrate the validity of such techniques is by measuring the model's accuracy on various benchmarks. If the accuracies of the baseline model and the compressed model are close, it is assumed that there was negligible degradation in quality. However, even when the accuracy of baseline and compressed model are similar, we observe the phenomenon of flips, wherein answers change from correct to incorrect and vice versa in proportion. We conduct a detailed study of metrics across multiple compression techniques, models and datasets, demonstrating that the behavior of compressed models as visible to end-users is often significantly different from the baseline model, even when accuracy is similar. We further evaluate compressed models qualitatively and quantitatively using MT-Bench and show that compressed models are significantly worse than baseline models in this free-form generative task. Thus, we argue that compression techniques should also be evaluated using distance metrics. We propose two such metrics, KL-Divergence and flips, and show that they are well correlated. | http://arxiv.org/pdf/2407.09141v1 | [
"Abhinav Dutta",
"Sanjeev Krishnan",
"Nipun Kwatra",
"Ramachandran Ramjee"
] | 2024-07-12T10:19:02Z | 2024-07-12T10:19:02Z |
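The two distance metrics proposed above are straightforward to compute from model outputs. A minimal sketch of a flip rate (fraction of examples whose correctness changes) and a mean KL divergence follows; the exact normalization conventions are assumptions based on the abstract's description.

```python
import numpy as np

def flip_rate(base_preds, comp_preds, labels):
    """Fraction of examples where correctness changes between the two models."""
    base_ok = base_preds == labels
    comp_ok = comp_preds == labels
    return float(np.mean(base_ok != comp_ok))

def mean_kl(base_probs, comp_probs, eps=1e-9):
    """Mean KL(base || compressed) over examples; probs have shape (N, num_classes)."""
    p = np.clip(base_probs, eps, 1.0)
    q = np.clip(comp_probs, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))
```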
2401.15022 | Applications of artificial intelligence in the analysis of
histopathology images of gliomas: a review | In recent years, the diagnosis of gliomas has become increasingly complex. Analysis of glioma histopathology images using artificial intelligence (AI) offers new opportunities to support diagnosis and outcome prediction. To give an overview of the current state of research, this review examines 83 publicly available research studies that have proposed AI-based methods for whole-slide histopathology images of human gliomas, covering the diagnostic tasks of subtyping (23/83), grading (27/83), molecular marker prediction (20/83), and survival prediction (29/83). All studies were reviewed with regard to methodological aspects as well as clinical applicability. It was found that the focus of current research is the assessment of hematoxylin and eosin-stained tissue sections of adult-type diffuse gliomas. The majority of studies (52/83) are based on the publicly available glioblastoma and low-grade glioma datasets from The Cancer Genome Atlas (TCGA) and only a few studies employed other datasets in isolation (16/83) or in addition to the TCGA datasets (15/83). Current approaches mostly rely on convolutional neural networks (63/83) for analyzing tissue at 20x magnification (35/83). A new field of research is the integration of clinical data, omics data, or magnetic resonance imaging (29/83). So far, AI-based methods have achieved promising results, but are not yet used in real clinical settings. Future work should focus on the independent validation of methods on larger, multi-site datasets with high-quality and up-to-date clinical and molecular pathology annotations to demonstrate routine applicability. | http://arxiv.org/abs/2401.15022v4 | [
"Jan-Philipp Redlich",
"Friedrich Feuerhake",
"Joachim Weis",
"Nadine S. Schaadt",
"Sarah Teuber-Hanselmann",
"Christoph Buck",
"Sabine Luttmann",
"Andrea Eberle",
"Stefan Nikolin",
"Arno Appenzeller",
"Andreas Portmann",
"André Homeyer"
] | 2024-07-12T10:16:55Z | 2024-01-26T17:29:01Z |
2407.09136 | Stepwise Verification and Remediation of Student Reasoning Errors with
Large Language Model Tutors | Large language models (LLMs) present an opportunity to scale high-quality personalized education to all. A promising approach towards this end is to build dialog tutoring models that scaffold students' problem-solving. However, even though existing LLMs perform well in solving reasoning questions, they struggle to precisely detect students' errors and tailor their feedback to these errors. Inspired by real-world teaching practice where teachers identify student errors and customize their response based on them, we focus on verifying student solutions and show how grounding to such verification improves the overall quality of tutor response generation. We collect a dataset of 1K stepwise math reasoning chains with the first error step annotated by teachers. We show empirically that finding the mistake in a student solution is challenging for current models. We propose and evaluate several verifiers for detecting these errors. Using both automatic and human evaluation we show that the student solution verifiers steer the generation model towards highly targeted responses to student errors which are more often correct with fewer hallucinations compared to existing baselines. | http://arxiv.org/pdf/2407.09136v1 | [
"Nico Daheim",
"Jakub Macina",
"Manu Kapur",
"Iryna Gurevych",
"Mrinmaya Sachan"
] | 2024-07-12T10:11:40Z | 2024-07-12T10:11:40Z |