arxiv_id (string, 10 chars) | published (string, 20 chars) | titles (string, 9–243 chars) | authors (sequence, 1–389 items) | abstract (string, 96–3.09k chars) | categories (sequence, 1–10 items) | selected (bool, 2 classes) |
---|---|---|---|---|---|---|
2402.12539 | 2024-02-19T21:01:11Z | Impact of data usage for forecasting on performance of model predictive
control in buildings with smart energy storage | [
"Max Langtry",
"Vijja Wichitwechkarn",
"Rebecca Ward",
"Chaoqun Zhuang",
"Monika J. Kreitmair",
"Nikolas Makasis",
"Zack Xuereb Conti",
"Ruchi Choudhary"
] | Data is required to develop forecasting models for use in Model Predictive
Control (MPC) schemes in building energy systems. However, data usage incurs
costs from both its collection and exploitation. Determining cost-optimal data
usage requires an understanding of the forecast accuracy and resulting MPC
operational performance it enables. This study investigates the performance of
both simple and state-of-the-art machine learning prediction models for MPC in
a multi-building energy system simulation using historic building energy data.
The impact of data usage on forecast accuracy is quantified for the following
data efficiency measures: reuse of prediction models, reduction of training
data volumes, reduction of model data features, and online model training. A
simple linear multi-layer perceptron model is shown to provide equivalent
forecast accuracy to state-of-the-art models, with greater data efficiency and
generalisability. The use of more than 2 years of training data for load
prediction models provided no significant improvement in forecast accuracy.
Forecast accuracy and data efficiency were improved simultaneously by using
change-point analysis to screen training data. Reused models and those trained
with 3 months of data had on average 10% higher error than baseline, indicating
that deploying MPC systems without prior data collection may be economic. | [
"eess.SY",
"cs.LG",
"cs.SY"
] | false |
2402.12572 | 2024-02-19T21:53:43Z | FairProof: Confidential and Certifiable Fairness for Neural Networks | [
"Chhavi Yadav",
"Amrita Roy Chowdhury",
"Dan Boneh",
"Kamalika Chaudhuri"
] | Machine learning models are increasingly used in societal applications, yet
legal and privacy concerns often demand that they be kept confidential.
Consequently, there is growing distrust of the fairness properties of
these models among consumers, who are often at the receiving end of
model predictions. To this end, we propose FairProof - a system that uses
Zero-Knowledge Proofs (a cryptographic primitive) to publicly verify the
fairness of a model, while maintaining confidentiality. We also propose a
fairness certification algorithm for fully-connected neural networks which is
befitting to ZKPs and is used in this system. We implement FairProof in Gnark
and demonstrate empirically that our system is practically feasible. | [
"cs.LG",
"cs.AI",
"cs.CR"
] | false |
2402.13287 | 2024-02-19T12:22:22Z | Manipulating hidden-Markov-model inferences by corrupting batch data | [
"William N. Caballero",
"Jose Manuel Camacho",
"Tahir Ekin",
"Roi Naveiro"
] | Time-series models typically assume untainted and legitimate streams of data.
However, a self-interested adversary may have incentive to corrupt this data,
thereby altering a decision maker's inference. Within the broader field of
adversarial machine learning, this research provides a novel, probabilistic
perspective toward the manipulation of hidden Markov model inferences via
corrupted data. In particular, we provision a suite of corruption problems for
filtering, smoothing, and decoding inferences leveraging an adversarial risk
analysis approach. Multiple stochastic programming models are set forth that
incorporate realistic uncertainties and varied attacker objectives. Three
general solution methods are developed by alternatively viewing the problem
from frequentist and Bayesian perspectives. The efficacy of each method is
illustrated via extensive, empirical testing. The developed methods are
characterized by their solution quality and computational effort, resulting in
a stratification of techniques across varying problem-instance architectures.
This research highlights the weaknesses of hidden Markov models under
adversarial activity, thereby motivating the need for robustification
techniques to ensure their security. | [
"cs.CR",
"cs.AI",
"cs.LG"
] | false |
2402.14847 | 2024-02-19T15:34:09Z | Deep learning-driven scheduling algorithm for a single machine problem
minimizing the total tardiness | [
"Michal Bouška",
"Přemysl Šůcha",
"Antonín Novák",
"Zdeněk Hanzálek"
] | In this paper, we investigate the use of the deep learning method for solving
a well-known NP-hard single machine scheduling problem with the objective of
minimizing the total tardiness. We propose a deep neural network that acts as a
polynomial-time estimator of the criterion value used in a single-pass
scheduling algorithm based on Lawler's decomposition and symmetric
decomposition proposed by Della Croce et al. Essentially, the neural network
guides the algorithm by estimating the best splitting of the problem into
subproblems. The paper also describes a new method for generating the training
data set, which speeds up the training dataset generation and reduces the
average optimality gap of solutions. The experimental results show that our
machine learning-driven approach can efficiently generalize information from
the training phase to significantly larger instances. Even though the instances
used in the training phase have from 75 to 100 jobs, the average optimality gap
on instances with up to 800 jobs is 0.26%, which is almost five times less than
the gap of the state-of-the-art heuristic. | [
"math.OC",
"cs.AI",
"cs.LG"
] | false |
2402.16882 | 2024-02-19T02:21:20Z | Substrate Scope Contrastive Learning: Repurposing Human Bias to Learn
Atomic Representations | [
"Wenhao Gao",
"Priyanka Raghavan",
"Ron Shprints",
"Connor W. Coley"
] | Learning molecular representation is a critical step in molecular machine
learning that significantly influences modeling success, particularly in
data-scarce situations. The concept of broadly pre-training neural networks has
advanced fields such as computer vision, natural language processing, and
protein engineering. However, similar approaches for small organic molecules
have not achieved comparable success. In this work, we introduce a novel
pre-training strategy, substrate scope contrastive learning, which learns
atomic representations tailored to chemical reactivity. This method considers
the grouping of substrates and their yields in published substrate scope tables
as a measure of their similarity or dissimilarity in terms of chemical
reactivity. We focus on 20,798 aryl halides in the CAS Content Collection
spanning thousands of publications to learn a representation of aryl halide
reactivity. We validate our pre-training approach through both intuitive
visualizations and comparisons to traditional reactivity descriptors and
physical organic chemistry principles. The versatility of these embeddings is
further evidenced in their application to yield prediction, regioselectivity
prediction, and the diverse selection of new substrates. This work not only
presents a chemistry-tailored neural network pre-training strategy to learn
reactivity-aligned atomic representations, but also marks a first-of-its-kind
approach to benefit from the human bias in substrate scope design. | [
"physics.chem-ph",
"cs.AI",
"cs.LG",
"q-bio.BM"
] | false |
2402.12237 | 2024-02-19T15:47:47Z | Learning to Defer in Content Moderation: The Human-AI Interplay | [
"Thodoris Lykouris",
"Wentao Weng"
] | Successful content moderation in online platforms relies on a human-AI
collaboration approach. A typical heuristic estimates the expected harmfulness
of a post and uses fixed thresholds to decide whether to remove it and whether
to send it for human review. This disregards the prediction uncertainty, the
time-varying element of human review capacity and post arrivals, and the
selective sampling in the dataset (humans only review posts filtered by the
admission algorithm).
In this paper, we introduce a model to capture the human-AI interplay in
content moderation. The algorithm observes contextual information for incoming
posts, makes classification and admission decisions, and schedules posts for
human review. Only admitted posts receive human reviews on their harmfulness.
These reviews help educate the machine-learning algorithms but are delayed due
to congestion in the human review system. The classical learning-theoretic way
to capture this human-AI interplay is via the framework of learning to defer,
where the algorithm has the option to defer a classification task to humans for
a fixed cost and immediately receive feedback. Our model contributes to this
literature by introducing congestion in the human review system. Moreover,
unlike work on online learning with delayed feedback where the delay in the
feedback is exogenous to the algorithm's decisions, the delay in our model is
endogenous to both the admission and the scheduling decisions.
We propose a near-optimal learning algorithm that carefully balances the
classification loss from a selectively sampled dataset, the idiosyncratic loss
of non-reviewed posts, and the delay loss of having congestion in the human
review system. To the best of our knowledge, this is the first result for
online learning in contextual queueing systems and hence our analytical
framework may be of independent interest. | [
"cs.LG",
"cs.AI",
"cs.GT",
"cs.HC",
"cs.PF"
] | false |
2402.12369 | 2024-02-19T18:56:35Z | Short-Period Variables in TESS Full-Frame Image Light Curves Identified
via Convolutional Neural Networks | [
"Greg Olmschenk",
"Richard K. Barry",
"Stela Ishitani Silva",
"Brian P. Powell",
"Ethan Kruse",
"Jeremy D. Schnittman",
"Agnieszka M. Cieplak",
"Thomas Barclay",
"Siddhant Solanki",
"Bianca Ortega",
"John Baker",
"Yesenia Helem Salinas Mamani"
] | The Transiting Exoplanet Survey Satellite (TESS) mission measured light from
stars in ~85% of the sky throughout its two-year primary mission, resulting in
millions of TESS 30-minute cadence light curves to analyze in the search for
transiting exoplanets. To search this vast dataset, we aim to provide an
approach that is computationally efficient, produces highly performant
predictions, and minimizes the required human search effort. We present a
convolutional neural network that we train to identify short period variables.
To make a prediction for a given light curve, our network requires no prior
target parameters identified using other methods. Our network performs
inference on a TESS 30-minute cadence light curve in ~5ms on a single GPU,
enabling large scale archival searches. We present a collection of 14156
short-period variables identified by our network. The majority of our
identified variables fall into two prominent populations, one of short-period
main sequence binaries and another of Delta Scuti stars. Our neural network
model and related code are additionally provided as open-source code for public
use and extension. | [
"astro-ph.SR",
"astro-ph.EP",
"astro-ph.IM",
"cs.LG",
"eess.IV"
] | false |
2402.12499 | 2024-02-19T20:06:15Z | Automated Security Response through Online Learning with Adaptive
Conjectures | [
"Kim Hammar",
"Tao Li",
"Rolf Stadler",
"Quanyan Zhu"
] | We study automated security response for an IT infrastructure and formulate
the interaction between an attacker and a defender as a partially observed,
non-stationary game. We relax the standard assumption that the game model is
correctly specified and consider that each player has a probabilistic
conjecture about the model, which may be misspecified in the sense that the
true model has probability 0. This formulation allows us to capture uncertainty
about the infrastructure and the intents of the players. To learn effective
game strategies online, we design a novel method where a player iteratively
adapts its conjecture using Bayesian learning and updates its strategy through
rollout. We prove that the conjectures converge to best fits, and we provide a
bound on the performance improvement that rollout enables with a conjectured
model. To characterize the steady state of the game, we propose a variant of
the Berk-Nash equilibrium. We present our method through an advanced persistent
threat use case. Simulation studies based on testbed measurements show that our
method produces effective security strategies that adapt to a changing
environment. We also find that our method enables faster convergence than
current reinforcement learning techniques. | [
"cs.GT",
"cs.AI",
"cs.CR",
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2402.12641 | 2024-02-20T01:35:23Z | YOLO-Ant: A Lightweight Detector via Depthwise Separable Convolutional
and Large Kernel Design for Antenna Interference Source Detection | [
"Xiaoyu Tang",
"Xingming Chen",
"Jintao Cheng",
"Jin Wu",
"Rui Fan",
"Chengxi Zhang",
"Zebo Zhou"
] | In the era of 5G communication, removing interference sources that affect
communication is a resource-intensive task. The rapid development of computer
vision has enabled unmanned aerial vehicles to perform various high-altitude
detection tasks. Because the field of object detection for antenna interference
sources has not been fully explored, this industry lacks dedicated learning
samples and detection models for this specific task. In this article, an
antenna dataset is created to address important antenna interference source
detection issues and serves as the basis for subsequent research. We introduce
YOLO-Ant, a lightweight CNN and transformer hybrid detector specifically
designed for antenna interference source detection. Specifically, we initially
formulate a lightweight design for the network depth and width, ensuring that
subsequent investigations are conducted within a lightweight framework. Then,
we propose a DSLK-Block module based on depthwise separable convolution and
large convolution kernels to enhance the network's feature extraction ability,
effectively improving small object detection. To address challenges such as
complex backgrounds and large interclass differences in antenna detection, we
construct DSLKVit-Block, a powerful feature extraction module that combines
DSLK-Block and transformer structures. Considering both its lightweight design
and accuracy, our method not only achieves optimal performance on the antenna
dataset but also yields competitive results on public datasets. | [
"cs.CV"
] | false |
2402.12644 | 2024-02-20T01:43:51Z | Neuromorphic Synergy for Video Binarization | [
"Shijie Lin",
"Xiang Zhang",
"Lei Yang",
"Lei Yu",
"Bin Zhou",
"Xiaowei Luo",
"Wenping Wang",
"Jia Pan"
] | Bimodal objects, such as the checkerboard pattern used in camera calibration,
markers for object tracking, and text on road signs, to name a few, are
prevalent in our daily lives and serve as a visual form to embed information
that can be easily recognized by vision systems. While binarization from
intensity images is crucial for extracting the embedded information in the
bimodal objects, few previous works consider the task of binarization of blurry
images due to the relative motion between the vision sensor and the
environment. The blurry images can result in a loss in the binarization quality
and thus degrade the downstream applications where the vision system is in
motion. Recently, neuromorphic cameras offer new capabilities for alleviating
motion blur, but it is non-trivial to first deblur and then binarize the images
in a real-time manner. In this work, we propose an event-based binary
reconstruction method that leverages the prior knowledge of the bimodal
target's properties to perform inference independently in both event space and
image space and merge the results from both domains to generate a sharp binary
image. We also develop an efficient integration method to propagate this binary
image to high frame rate binary video. Finally, we develop a novel method to
naturally fuse events and images for unsupervised threshold identification. The
proposed method is evaluated on publicly available datasets and our collected
data sequences, and the results show that it outperforms SOTA methods in
generating high-frame-rate binary video in real time on CPU-only devices. | [
"cs.CV"
] | false |
2402.12675 | 2024-02-20T02:48:14Z | Visual Reasoning in Object-Centric Deep Neural Networks: A Comparative
Cognition Approach | [
"Guillermo Puebla",
"Jeffrey S. Bowers"
] | Achieving visual reasoning is a long-term goal of artificial intelligence. In
the last decade, several studies have applied deep neural networks (DNNs) to
the task of learning visual relations from images, with modest results in terms
of generalization of the relations learned. However, in recent years,
object-centric representation learning has been put forward as a way to achieve
visual reasoning within the deep learning framework. Object-centric models
attempt to model input scenes as compositions of objects and relations between
them. To this end, these models use several kinds of attention mechanisms to
segregate the individual objects in a scene from the background and from other
objects. In this work we tested relation learning and generalization in several
object-centric models, as well as a ResNet-50 baseline. In contrast to previous
research, which has focused heavily on the same-different task in order to
assess relational reasoning in DNNs, we use a set of tasks -- with varying
degrees of difficulty -- derived from the comparative cognition literature. Our
results show that object-centric models are able to segregate the different
objects in a scene, even in many out-of-distribution cases. In our simpler
tasks, this improves their capacity to learn and generalize visual relations in
comparison to the ResNet-50 baseline. However, object-centric models still
struggle in our more difficult tasks and conditions. We conclude that abstract
visual reasoning remains an open challenge for DNNs, including object-centric
models. | [
"cs.CV"
] | false |
2402.12676 | 2024-02-20T02:48:58Z | Advancing Monocular Video-Based Gait Analysis Using Motion Imitation
with Physics-Based Simulation | [
"Nikolaos Smyrnakis",
"Tasos Karakostas",
"R. James Cotton"
] | Gait analysis from videos obtained from a smartphone would open up many
clinical opportunities for detecting and quantifying gait impairments. However,
existing approaches for estimating gait parameters from videos can produce
physically implausible results. To overcome this, we train a policy using
reinforcement learning to control a physics simulation of human movement to
replicate the movement seen in video. This forces the inferred movements to be
physically plausible, while improving the accuracy of the inferred step length
and walking velocity. | [
"cs.CV"
] | false |
2402.12706 | 2024-02-20T04:09:58Z | Learning Domain-Invariant Temporal Dynamics for Few-Shot Action
Recognition | [
"Yuke Li",
"Guangyi Chen",
"Ben Abramowitz",
"Stefano Anzellott",
"Donglai Wei"
] | Few-shot action recognition aims at quickly adapting a pre-trained model to
the novel data with a distribution shift using only a limited number of
samples. Key challenges include how to identify and leverage the transferable
knowledge learned by the pre-trained model. Our central hypothesis is that
temporal invariance in the dynamic system between latent variables lends itself
to transferability (domain-invariance). We therefore propose DITeD, or
Domain-Invariant Temporal Dynamics for knowledge transfer. To detect the
temporal invariance part, we propose a generative framework with a two-stage
training strategy during pre-training. Specifically, we explicitly model
invariant dynamics including temporal dynamic generation and transitions, and
the variant visual and domain encoders. Then we pre-train the model with the
self-supervised signals to learn the representation. After that, we fix the
whole representation model and tune the classifier. During adaptation, we fix
the transferable temporal dynamics and update the image encoder. The efficacy
of our approach is revealed by the superior accuracy of DITeD over leading
alternatives across standard few-shot action recognition datasets. Moreover, we
validate that the learned temporal dynamic transition and temporal dynamic
generation modules possess transferable qualities. | [
"cs.CV"
] | false |
2402.12741 | 2024-02-20T06:14:30Z | MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion | [
"Sen Li",
"Ruochen Wang",
"Cho-Jui Hsieh",
"Minhao Cheng",
"Tianyi Zhou"
] | Existing text-to-image models still struggle to generate images of multiple
objects, especially in handling their spatial positions, relative sizes,
overlapping, and attribute bindings. In this paper, we develop a training-free
Multimodal-LLM agent (MuLan) to address these challenges by progressive
multi-object generation with planning and feedback control, like a human
painter. MuLan harnesses a large language model (LLM) to decompose a prompt to
a sequence of sub-tasks, each generating only one object conditioned on
previously generated objects by stable diffusion. Unlike existing LLM-grounded
methods, MuLan only produces a high-level plan at the beginning while the exact
size and location of each object are determined by an LLM and attention
guidance upon each sub-task. Moreover, MuLan adopts a vision-language model
(VLM) to provide feedback to the image generated in each sub-task and control
the diffusion model to re-generate the image if it violates the original
prompt. Hence, each model in every step of MuLan only needs to address an easy
sub-task it is specialized for. We collect 200 prompts containing multi-objects
with spatial relationships and attribute bindings from different benchmarks to
evaluate MuLan. The results demonstrate the superiority of MuLan in generating
multiple objects over baselines. The code is available on
https://github.com/measure-infinity/mulan-code. | [
"cs.CV"
] | false |
2402.12754 | 2024-02-20T06:47:12Z | Fingerprint Presentation Attack Detector Using Global-Local Model | [
"Haozhe Liu",
"Wentian Zhang",
"Feng Liu",
"Haoqian Wu",
"Linlin Shen"
] | The vulnerability of automated fingerprint recognition systems (AFRSs) to
presentation attacks (PAs) promotes the vigorous development of PA detection
(PAD) technology. However, PAD methods have been limited by information loss
and poor generalization ability, resulting in new PA materials and fingerprint
sensors. This paper thus proposes a global-local model-based PAD (RTK-PAD)
method to overcome those limitations to some extent. The proposed method
consists of three modules, called: 1) the global module; 2) the local module;
and 3) the rethinking module. By adopting the cut-out-based global module, a
global spoofness score predicted from nonlocal features of the entire
fingerprint images can be achieved. While by using the texture
in-painting-based local module, a local spoofness score predicted from
fingerprint patches is obtained. The two modules are not independent but
connected through our proposed rethinking module by localizing two
discriminative patches for the local module based on the global spoofness
score. Finally, the fusion spoofness score by averaging the global and local
spoofness scores is used for PAD. Our experimental results evaluated on LivDet
2017 show that the proposed RTK-PAD can achieve an average classification error
(ACE) of 2.28% and a true detection rate (TDR) of 91.19% when the false
detection rate (FDR) equals 1.0%, which significantly outperformed the
state-of-the-art methods by $\sim$10% in terms of TDR (91.19% versus 80.74%). | [
"cs.CV"
] | false |
2402.12763 | 2024-02-20T07:11:27Z | BronchoTrack: Airway Lumen Tracking for Branch-Level Bronchoscopic
Localization | [
"Qingyao Tian",
"Huai Liao",
"Xinyan Huang",
"Bingyu Yang",
"Jinlin Wu",
"Jian Chen",
"Lujie Li",
"Hongbin Liu"
] | Localizing the bronchoscope in real time is essential for ensuring
intervention quality. However, most existing methods struggle to balance
between speed and generalization. To address these challenges, we present
BronchoTrack, an innovative real-time framework for accurate branch-level
localization, encompassing lumen detection, tracking, and airway association.
To achieve real-time performance, we employ a benchmark lightweight detector
for efficient lumen detection. We are the first to introduce multi-object
tracking to bronchoscopic localization, mitigating temporal confusion in lumen
identification caused by rapid bronchoscope movement and complex airway
structures. To ensure generalization across patient cases, we propose a
training-free detection-airway association method based on a semantic airway
graph that encodes the hierarchy of bronchial tree structures. Experiments on
nine patient datasets demonstrate BronchoTrack's localization accuracy of
85.64%, while accessing up to the 4th generation of airways. Furthermore, we
tested BronchoTrack in an in-vivo animal study using a porcine model, where it
successfully localized the bronchoscope into the 8th-generation airway.
Experimental evaluation underscores BronchoTrack's real-time performance with
satisfying accuracy and generalization, demonstrating its potential for
clinical applications. | [
"cs.CV"
] | false |
2402.12765 | 2024-02-20T07:12:22Z | GOOD: Towards Domain Generalized Orientated Object Detection | [
"Qi Bi",
"Beichen Zhou",
"Jingjun Yi",
"Wei Ji",
"Haolan Zhan",
"Gui-Song Xia"
] | Oriented object detection has been rapidly developed in the past few years,
but most of these methods assume the training and testing images are under the
same statistical distribution, which is far from reality. In this paper, we
propose the task of domain generalized oriented object detection, which intends
to explore the generalization of oriented object detectors on arbitrary unseen
target domains. Learning domain generalized oriented object detectors is
particularly challenging, as the cross-domain style variation not only
negatively impacts the content representation, but also leads to unreliable
orientation predictions. To address these challenges, we propose a generalized
oriented object detector (GOOD). After style hallucination by the emerging
contrastive language-image pre-training (CLIP), it consists of two key
components, namely, rotation-aware content consistency learning (RAC) and style
consistency learning (SEC). The proposed RAC allows the oriented object
detector to learn stable orientation representation from style-diversified
samples. The proposed SEC further stabilizes the generalization ability of
content representation from different image styles. Extensive experiments on
multiple cross-domain settings show the state-of-the-art performance of GOOD.
Source code will be publicly available. | [
"cs.CV"
] | false |
2402.12779 | 2024-02-20T07:37:32Z | Two-stage Rainfall-Forecasting Diffusion Model | [
"XuDong Ling",
"ChaoRong Li",
"FengQing Qin",
"LiHong Zhu",
"Yuanyuan Huang"
] | Deep neural networks have made great achievements in rainfall
prediction. However, current forecasting methods have certain limitations,
such as blurry generated images and incorrect spatial positions. To
overcome these challenges, we propose a Two-stage Rainfall-Forecasting
Diffusion Model (TRDM) aimed at improving the accuracy of long-term rainfall
forecasts and addressing the imbalance in performance between temporal and
spatial modeling. TRDM is a two-stage method for rainfall prediction tasks. The
task of the first stage is to capture robust temporal information while
preserving spatial information under low-resolution conditions. The task of the
second stage is to reconstruct the low-resolution images generated in the first
stage into high-resolution images. We demonstrate state-of-the-art results on
the MRMS and Swedish radar datasets. Our project is open source and available
on GitHub at:
\href{https://github.com/clearlyzerolxd/TRDM}{https://github.com/clearlyzerolxd/TRDM}. | [
"cs.CV"
] | false |
2402.12788 | 2024-02-20T07:56:02Z | RhythmFormer: Extracting rPPG Signals Based on Hierarchical Temporal
Periodic Transformer | [
"Bochao Zou",
"Zizheng Guo",
"Jiansheng Chen",
"Huimin Ma"
] | Remote photoplethysmography (rPPG) is a non-contact method for detecting
physiological signals based on facial videos, holding high potential in various
applications such as healthcare, affective computing, anti-spoofing, etc. Due
to the periodic nature of rPPG, the long-range dependency capturing capacity
of the Transformer was assumed to be advantageous for such signals. However,
existing approaches have not conclusively demonstrated the superior performance
of Transformers over traditional convolutional neural network methods; this gap
may stem from a lack of thorough exploration of rPPG periodicity. In this
paper, we propose RhythmFormer, a fully end-to-end transformer-based method for
extracting rPPG signals by explicitly leveraging the quasi-periodic nature of
rPPG. The core module, Hierarchical Temporal Periodic Transformer,
hierarchically extracts periodic features from multiple temporal scales. It
utilizes dynamic sparse attention based on periodicity in the temporal domain,
allowing for fine-grained modeling of rPPG features. Furthermore, a fusion stem
is proposed to guide self-attention to rPPG features effectively, and it can be
easily transferred to existing methods to enhance their performance
significantly. RhythmFormer achieves state-of-the-art performance with fewer
parameters and reduced computational complexity in comprehensive experiments
compared to previous approaches. The codes are available at
https://github.com/zizheng-guo/RhythmFormer. | [
"cs.CV"
] | false |
2402.12792 | 2024-02-20T08:04:12Z | OccFlowNet: Towards Self-supervised Occupancy Estimation via
Differentiable Rendering and Occupancy Flow | [
"Simon Boeder",
"Fabian Gigengack",
"Benjamin Risse"
] | Semantic occupancy has recently gained significant traction as a prominent 3D
scene representation. However, most existing methods rely on large and costly
datasets with fine-grained 3D voxel labels for training, which limits their
practicality and scalability, increasing the need for self-supervised learning
in this domain. In this work, we present a novel approach to occupancy
estimation inspired by neural radiance field (NeRF) using only 2D labels, which
are considerably easier to acquire. In particular, we employ differentiable
volumetric rendering to predict depth and semantic maps and train a 3D network
based on 2D supervision only. To enhance geometric accuracy and increase the
supervisory signal, we introduce temporal rendering of adjacent time steps.
Additionally, we introduce occupancy flow as a mechanism to handle dynamic
objects in the scene and ensure their temporal consistency. Through extensive
experimentation we demonstrate that 2D supervision only is sufficient to
achieve state-of-the-art performance compared to methods using 3D labels, while
outperforming concurrent 2D approaches. When combining 2D supervision with 3D
labels, temporal rendering and occupancy flow we outperform all previous
occupancy estimation models significantly. We conclude that the proposed
rendering supervision and occupancy flow advances occupancy estimation and
further bridges the gap towards self-supervised learning in this domain. | [
"cs.CV"
] | false |
2402.12908 | 2024-02-20T10:56:52Z | RealCompo: Dynamic Equilibrium between Realism and Compositionality
Improves Text-to-Image Diffusion Models | [
"Xinchen Zhang",
"Ling Yang",
"Yaqi Cai",
"Zhaochen Yu",
"Jiake Xie",
"Ye Tian",
"Minkai Xu",
"Yong Tang",
"Yujiu Yang",
"Bin Cui"
] | Diffusion models have achieved remarkable advancements in text-to-image
generation. However, existing models still have many difficulties when faced
with multiple-object compositional generation. In this paper, we propose a new
training-free and transfer-friendly text-to-image generation framework,
namely RealCompo, which aims to leverage the advantages of text-to-image and
layout-to-image models to enhance both realism and compositionality of the
generated images. An intuitive and novel balancer is proposed to dynamically
balance the strengths of the two models in denoising process, allowing
plug-and-play use of any model without extra training. Extensive experiments
show that our RealCompo consistently outperforms state-of-the-art text-to-image
models and layout-to-image models in multiple-object compositional generation
while keeping satisfactory realism and compositionality of the generated
images. Code is available at https://github.com/YangLing0818/RealCompo | [
"cs.CV"
] | true |
2402.12923 | 2024-02-20T11:18:40Z | Advancements in Point Cloud-Based 3D Defect Detection and Classification
for Industrial Systems: A Comprehensive Survey | [
"Anju Rani",
"Daniel Ortiz-Arroyo",
"Petar Durdevic"
] | In recent years, 3D point clouds (PCs) have gained significant attention due
to their diverse applications across various fields such as computer vision
(CV), condition monitoring, virtual reality, robotics, and autonomous driving.
Deep learning (DL) has proven effective in leveraging 3D PCs to address various
challenges previously encountered in 2D vision. However, the application of
deep neural networks (DNN) to process 3D PCs presents its own set of
challenges. To address these challenges, numerous methods have been proposed.
This paper provides an in-depth review of recent advancements in DL-based
condition monitoring (CM) using 3D PCs, with a specific focus on defect shape
classification and segmentation within industrial applications for operational
and maintenance purposes. Recognizing the crucial role of these aspects in
industrial maintenance, the paper provides insightful observations that offer
perspectives on the strengths and limitations of the reviewed DL-based PC
processing methods. This synthesis of knowledge aims to contribute to the
understanding and enhancement of CM processes, particularly within the
framework of remaining useful life (RUL), in industrial systems. | [
"cs.CV"
] | false |
2402.12927 | 2024-02-20T11:26:42Z | CLIPping the Deception: Adapting Vision-Language Models for Universal
Deepfake Detection | [
"Sohail Ahmed Khan",
"Duc-Tien Dang-Nguyen"
] | The recent advancements in Generative Adversarial Networks (GANs) and the
emergence of Diffusion models have significantly streamlined the production of
highly realistic and widely accessible synthetic content. As a result, there is
a pressing need for effective general purpose detection mechanisms to mitigate
the potential risks posed by deepfakes. In this paper, we explore the
effectiveness of pre-trained vision-language models (VLMs) when paired with
recent adaptation methods for universal deepfake detection. Following previous
studies in this domain, we employ only a single dataset (ProGAN) in order to
adapt CLIP for deepfake detection. However, in contrast to prior research,
which relies solely on the visual part of CLIP while ignoring its textual
component, our analysis reveals that retaining the text part is crucial.
Consequently, the simple and lightweight Prompt Tuning based adaptation
strategy that we employ outperforms the previous SOTA approach by 5.01% mAP and
6.61% accuracy while utilizing less than one third of the training data (200k
images as compared to 720k). To assess the real-world applicability of our
proposed models, we conduct a comprehensive evaluation across various
scenarios. This involves rigorous testing on images sourced from 21 distinct
datasets, including those generated by GANs-based, Diffusion-based and
Commercial tools. | [
"cs.CV"
] | false |
2402.12938 | 2024-02-20T11:50:27Z | UniCell: Universal Cell Nucleus Classification via Prompt Learning | [
"Junjia Huang",
"Haofeng Li",
"Xiang Wan",
"Guanbin Li"
] | The recognition of multi-class cell nuclei can significantly facilitate the
process of histopathological diagnosis. Numerous pathological datasets are
currently available, but their annotations are inconsistent. Most existing
methods require individual training on each dataset to deduce the relevant
labels and lack the use of common knowledge across datasets, consequently
restricting the quality of recognition. In this paper, we propose a universal
cell nucleus classification framework (UniCell), which employs a novel prompt
learning mechanism to uniformly predict the corresponding categories of
pathological images from different dataset domains. In particular, our
framework adopts an end-to-end architecture for nuclei detection and
classification, and utilizes flexible prediction heads for adapting various
datasets. Moreover, we develop a Dynamic Prompt Module (DPM) that exploits the
properties of multiple datasets to enhance features. The DPM first integrates
the embeddings of datasets and semantic categories, and then employs the
integrated prompts to refine image representations, efficiently harvesting the
shared knowledge among the related cell types and data sources. Experimental
results demonstrate that the proposed method effectively achieves the
state-of-the-art results on four nucleus detection and classification
benchmarks. Code and models are available at https://github.com/lhaof/UniCell | [
"cs.CV"
] | false |
2402.12946 | 2024-02-20T12:01:30Z | Cell Graph Transformer for Nuclei Classification | [
"Wei Lou",
"Guanbin Li",
"Xiang Wan",
"Haofeng Li"
] | Nuclei classification is a critical step in computer-aided diagnosis with
histopathology images. In the past, various methods have employed graph neural
networks (GNN) to analyze cell graphs that model inter-cell relationships by
considering nuclei as vertices. However, they are limited by the GNN mechanism
that only passes messages among local nodes via fixed edges. To address the
issue, we develop a cell graph transformer (CGT) that treats nodes and edges as
input tokens to enable learnable adjacency and information exchange among all
nodes. Nevertheless, training the transformer with a cell graph presents
another challenge. Poorly initialized features can lead to noisy self-attention
scores and inferior convergence, particularly when processing the cell graphs
with numerous connections. Thus, we further propose a novel topology-aware
pretraining method that leverages a graph convolutional network (GCN) to learn
a feature extractor. The pre-trained features may suppress unreasonable
correlations and hence ease the finetuning of CGT. Experimental results suggest
that the proposed cell graph transformer with topology-aware pretraining
significantly improves the nuclei classification results, and achieves the
state-of-the-art performance. Code and models are available at
https://github.com/lhaof/CGT | [
"cs.CV"
] | false |
2402.12968 | 2024-02-20T12:35:23Z | MapTrack: Tracking in the Map | [
"Fei Wang",
"Ruohui Zhang",
"Chenglin Chen",
"Min Yang",
"Yun Bai"
] | Multi-Object Tracking (MOT) aims to maintain stable and uninterrupted
trajectories for each target. Most state-of-the-art approaches first detect
objects in each frame and then implement data association between new
detections and existing tracks using motion models and appearance similarities.
Despite achieving satisfactory results, occlusion and crowds can easily lead to
missing and distorted detections, followed by missing and false associations.
In this paper, we first revisit the classic tracker DeepSORT, enhancing its
robustness over crowds and occlusion significantly by placing greater trust in
predictions when detections are unavailable or of low quality in crowded and
occluded scenes. Specifically, we propose a new framework comprising three
lightweight and plug-and-play algorithms: the probability map, the prediction
map, and the covariance adaptive Kalman filter. The probability map identifies
whether undetected objects have genuinely disappeared from view (e.g., out of
the image or entered a building) or are only temporarily undetected due to
occlusion or other reasons. Trajectories of undetected targets that are still
within the probability map are extended by state estimations directly. The
prediction map determines whether an object is in a crowd, and we prioritize
state estimations over observations when severe deformation of observations
occurs, accomplished through the covariance adaptive Kalman filter. The
proposed method, named MapTrack, achieves state-of-the-art results on popular
multi-object tracking benchmarks such as MOT17 and MOT20. Despite its superior
performance, our method remains simple, online, and real-time. The code will be
open-sourced later. | [
"cs.CV"
] | false |
2402.13004 | 2024-02-20T13:33:33Z | Comparison of Conventional Hybrid and CTC/Attention Decoders for
Continuous Visual Speech Recognition | [
"David Gimeno-Gómez",
"Carlos-D. Martínez-Hinarejos"
] | Thanks to the rise of deep learning and the availability of large-scale
audio-visual databases, recent advances have been achieved in Visual Speech
Recognition (VSR). Similar to other speech processing tasks, these end-to-end
VSR systems are usually based on encoder-decoder architectures. While encoders
are somewhat general, multiple decoding approaches have been explored, such as
the conventional hybrid model based on Deep Neural Networks combined with
Hidden Markov Models (DNN-HMM) or the Connectionist Temporal Classification
(CTC) paradigm. However, there are languages and tasks in which data is scarce,
and in this situation, there is not a clear comparison between different types
of decoders. Therefore, we focused our study on how the conventional DNN-HMM
decoder and its state-of-the-art CTC/Attention counterpart behave depending on
the amount of data used for their estimation. We also analyzed to what extent
our visual speech features were able to adapt to scenarios for which they were
not explicitly trained, either considering a similar dataset or another
collected for a different language. Results showed that the conventional
paradigm reached recognition rates that improve on those of the CTC/Attention
model in data-scarcity scenarios, along with reduced training time and fewer
parameters. | [
"cs.CV"
] | false |
2402.13061 | 2024-02-20T14:56:28Z | Toward Fairness via Maximum Mean Discrepancy Regularization on Logits
Space | [
"Hao-Wei Chung",
"Ching-Hao Chiu",
"Yu-Jen Chen",
"Yiyu Shi",
"Tsung-Yi Ho"
] | Fairness has become increasingly pivotal in machine learning for high-risk
applications such as machine learning in healthcare and facial recognition.
However, we identify a deficiency in previous logit-space constraint methods.
Therefore, we propose a novel framework, Logits-MMD, that achieves the fairness
condition by imposing constraints on output logits with Maximum Mean
Discrepancy. Moreover, quantitative analysis and experimental results show that
our framework outperforms previous methods and achieves state-of-the-art
results on two facial recognition datasets and one animal dataset. Finally, we
present experimental results demonstrating that our debiasing approach
achieves the fairness condition effectively. | [
"cs.CV"
] | false |
2402.13088 | 2024-02-20T15:30:09Z | Slot-VLM: SlowFast Slots for Video-Language Modeling | [
"Jiaqi Xu",
"Cuiling Lan",
"Wenxuan Xie",
"Xuejin Chen",
"Yan Lu"
] | Video-Language Models (VLMs), powered by the advancements in Large Language
Models (LLMs), are charting new frontiers in video understanding. A pivotal
challenge is the development of an efficient method to encapsulate video
content into a set of representative tokens to align with LLMs. In this work,
we introduce Slot-VLM, a novel framework designed to generate semantically
decomposed video tokens, in terms of object-wise and event-wise visual
representations, to facilitate LLM inference. Particularly, we design a
SlowFast Slots module, i.e., SF-Slots, that adaptively aggregates the dense
video tokens from the CLIP vision encoder to a set of representative slots. In
order to take into account both the spatial object details and the varied
temporal dynamics, SF-Slots is built with a dual-branch structure. The
Slow-Slots branch focuses on extracting object-centric slots from features at
high spatial resolution but low (slow) frame sample rate, emphasizing detailed
object information. Conversely, Fast-Slots branch is engineered to learn
event-centric slots from high temporal sample rate but low spatial resolution
features. These complementary slots are combined to form the vision context,
serving as the input to the LLM for efficient question answering. Our
experimental results demonstrate the effectiveness of our Slot-VLM, which
achieves the state-of-the-art performance on video question-answering. | [
"cs.CV"
] | false |
2402.13122 | 2024-02-20T16:35:14Z | Cross-Domain Transfer Learning with CoRTe: Consistent and Reliable
Transfer from Black-Box to Lightweight Segmentation Model | [
"Claudia Cuttano",
"Antonio Tavera",
"Fabio Cermelli",
"Giuseppe Averta",
"Barbara Caputo"
] | Many practical applications require training of semantic segmentation models
on unlabelled datasets and their execution on low-resource hardware.
Distillation from a trained source model may represent a solution for the first
but does not account for the different distribution of the training data.
Unsupervised domain adaptation (UDA) techniques claim to solve the domain
shift, but in most cases assume the availability of the source data or an
accessible white-box source model, which in practical applications are often
unavailable for commercial and/or safety reasons. In this paper, we investigate
a more challenging setting in which a lightweight model has to be trained on a
target unlabelled dataset for semantic segmentation, under the assumption that
we have access only to black-box source model predictions. Our method, named
CoRTe, consists of (i) a pseudo-labelling function that extracts reliable
knowledge from the black-box source model using its relative confidence, (ii) a
pseudo label refinement method to retain and enhance the novel information
learned by the student model on the target data, and (iii) a consistent
training of the model using the extracted pseudo labels. We benchmark CoRTe on
two synthetic-to-real settings, demonstrating remarkable results when using
black-box models to transfer knowledge on lightweight models for a target data
distribution. | [
"cs.CV"
] | false |
2402.13131 | 2024-02-20T16:44:55Z | exploreCOSMOS: Interactive Exploration of Conditional Statistical Shape
Models in the Web-Browser | [
"Maximilian Hahn",
"Bernhard Egger"
] | Statistical Shape Models of faces and various body parts are heavily used in
medical image analysis, computer vision and visualization. Whilst the field is
well explored with many existing tools, all of them aim at experts, which
limits their applicability. We demonstrate the first tool that enables the
convenient exploration of statistical shape models in the browser, with the
capability to manipulate the faces in a targeted manner. This manipulation is
performed via a posterior model given partial observations. We release our code
and application on GitHub https://github.com/maximilian-hahn/exploreCOSMOS | [
"cs.CV"
] | false |
2402.13146 | 2024-02-20T17:00:59Z | OLViT: Multi-Modal State Tracking via Attention-Based Embeddings for
Video-Grounded Dialog | [
"Adnen Abdessaied",
"Manuel von Hochmeister",
"Andreas Bulling"
] | We present the Object Language Video Transformer (OLViT) - a novel model for
video dialog operating over a multi-modal attention-based dialog state tracker.
Existing video dialog models struggle with questions requiring both spatial and
temporal localization within videos, long-term temporal reasoning, and accurate
object tracking across multiple dialog turns. OLViT addresses these challenges
by maintaining a global dialog state based on the output of an Object State
Tracker (OST) and a Language State Tracker (LST): while the OST attends to the
most important objects within the video, the LST keeps track of the most
important linguistic co-references to previous dialog turns. In stark contrast
to previous works, our approach is generic by nature and is therefore capable
of learning continuous multi-modal dialog state representations of the most
relevant objects and rounds. As a result, they can be seamlessly integrated
into Large Language Models (LLMs) and offer high flexibility in dealing with
different datasets and tasks. Evaluations on the challenging DVD (response
classification) and SIMMC 2.1 (response generation) datasets show that OLViT
achieves new state-of-the-art performance across both datasets. | [
"cs.CV"
] | false |
2402.13252 | 2024-02-20T18:59:02Z | Improving Robustness for Joint Optimization of Camera Poses and
Decomposed Low-Rank Tensorial Radiance Fields | [
"Bo-Yu Cheng",
"Wei-Chen Chiu",
"Yu-Lun Liu"
] | In this paper, we propose an algorithm that allows joint refinement of camera
pose and scene geometry represented by decomposed low-rank tensor, using only
2D images as supervision. First, we conduct a pilot study based on a 1D signal
and relate our findings to 3D scenarios, where the naive joint pose
optimization on voxel-based NeRFs can easily lead to sub-optimal solutions.
Moreover, based on the analysis of the frequency spectrum, we propose to apply
convolutional Gaussian filters on 2D and 3D radiance fields for a
coarse-to-fine training schedule that enables joint camera pose optimization.
Leveraging the decomposition property in decomposed low-rank tensor, our method
achieves an equivalent effect to brute-force 3D convolution with only incurring
little computational overhead. To further improve the robustness and stability
of joint optimization, we also propose techniques of smoothed 2D supervision,
randomly scaled kernel parameters, and edge-guided loss mask. Extensive
quantitative and qualitative evaluations demonstrate that our proposed
framework achieves superior performance in novel view synthesis as well as
rapid convergence for optimization. | [
"cs.CV"
] | true |
2402.13404 | 2024-02-20T22:15:13Z | Layout-to-Image Generation with Localized Descriptions using ControlNet
with Cross-Attention Control | [
"Denis Lukovnikov",
"Asja Fischer"
] | While text-to-image diffusion models can generate high-quality images from
textual descriptions, they generally lack fine-grained control over the visual
composition of the generated images. Some recent works tackle this problem by
training the model to condition the generation process on additional input
describing the desired image layout. Arguably the most popular among such
methods, ControlNet, enables a high degree of control over the generated image
using various types of conditioning inputs (e.g. segmentation maps). However,
it still lacks the ability to take into account localized textual descriptions
that indicate which image region is described by which phrase in the prompt. In
this work, we show the limitations of ControlNet for the layout-to-image task
and enable it to use localized descriptions using a training-free approach that
modifies the cross-attention scores during generation. We adapt and investigate
several existing cross-attention control methods in the context of ControlNet
and identify shortcomings that cause failure (concept bleeding) or image
degradation under specific conditions. To address these shortcomings, we
develop a novel cross-attention manipulation method in order to maintain image
quality while improving control. Qualitative and quantitative experimental
studies focusing on challenging cases are presented, demonstrating the
effectiveness of the investigated general approach, and showing the
improvements obtained by the proposed cross-attention control method. | [
"cs.CV"
] | false |
2402.12624 | 2024-02-20T01:07:32Z | Efficient Parameter Mining and Freezing for Continual Object Detection | [
"Angelo G. Menezes",
"Augusto J. Peterlevitz",
"Mateus A. Chinelatto",
"André C. P. L. F. de Carvalho"
] | Continual Object Detection is essential for enabling intelligent agents to
interact proactively with humans in real-world settings. While
parameter-isolation strategies have been extensively explored in the context of
continual learning for classification, they have yet to be fully harnessed for
incremental object detection scenarios. Drawing inspiration from prior research
that focused on mining individual neuron responses and integrating insights
from recent developments in neural pruning, we proposed efficient ways to
identify which layers are the most important for a network to maintain the
performance of a detector across sequential updates. The presented findings
highlight the substantial advantages of layer-level parameter isolation in
facilitating incremental learning within object detection models, offering
promising avenues for future research and application in real-world scenarios. | [
"cs.CV",
"cs.AI"
] | false |
2402.12701 | 2024-02-20T03:57:16Z | wmh_seg: Transformer based U-Net for Robust and Automatic White Matter
Hyperintensity Segmentation across 1.5T, 3T and 7T | [
"Jinghang Li",
"Tales Santini",
"Yuanzhe Huang",
"Joseph M. Mettenburg",
"Tamer S. Ibrahim",
"Howard J. Aizenstein",
"Minjie Wu"
] | White matter hyperintensity (WMH) remains the top imaging biomarker for
neurodegenerative diseases. Robust and accurate segmentation of WMH holds
paramount significance for neuroimaging studies. The growing shift from 3T to
7T MRI necessitates robust tools for harmonized segmentation across field
strengths and artifacts. Recent deep learning models exhibit promise in WMH
segmentation but still face challenges, including diverse training data
representation and limited analysis of MRI artifacts' impact. To address these,
we introduce wmh_seg, a novel deep learning model leveraging a
transformer-based encoder from SegFormer. wmh_seg is trained on an unmatched
dataset, including 1.5T, 3T, and 7T FLAIR images from various sources,
alongside artificially added MR artifacts. Our approach bridges gaps in
training diversity and artifact analysis. Our model demonstrated stable
performance across magnetic field strengths, scanner manufacturers, and common
MR imaging artifacts. Despite the unique inhomogeneity artifacts on ultra-high
field MR images, our model still offers robust and stable segmentation on 7T
FLAIR images. Our model, to date, is the first that offers quality white matter
lesion segmentation on 7T FLAIR images. | [
"eess.IV",
"cs.CV"
] | false |
2402.12736 | 2024-02-20T06:01:31Z | CST: Calibration Side-Tuning for Parameter and Memory Efficient Transfer
Learning | [
"Feng Chen"
] | Achieving a universally high accuracy in object detection is quite
challenging, and the mainstream focus in the industry currently lies on
detecting specific classes of objects. However, deploying one or multiple
object detection networks requires a certain amount of GPU memory for training
and storage capacity for inference. This presents challenges in terms of how to
effectively coordinate multiple object detection tasks under
resource-constrained conditions. This paper introduces a lightweight
fine-tuning strategy called Calibration side tuning, which integrates aspects
of adapter tuning and side tuning to adapt the successful techniques employed
in transformers for use with ResNet. The Calibration side tuning architecture
incorporates maximal transition calibration, utilizing a small number of
additional parameters to enhance network performance while maintaining a smooth
training process. Furthermore, this paper conducts an analysis of multiple
fine-tuning strategies and implements them within ResNet, thereby expanding
the research on fine-tuning strategies for object detection networks. In
addition, extensive experiments were carried out on five benchmark datasets.
The experimental results demonstrate that this method outperforms other
state-of-the-art techniques and achieves a better balance between the
complexity and performance of the fine-tuning schemes. | [
"cs.CV",
"cs.AI"
] | false |
2402.12797 | 2024-02-20T08:14:53Z | A Geometric Algorithm for Tubular Shape Reconstruction from Skeletal
Representation | [
"Guoqing Zhang",
"Songzi Cat",
"Juzi Cat"
] | We introduce a novel approach for the reconstruction of tubular shapes from
skeletal representations. Our method processes all skeletal points as a whole,
eliminating the need for splitting input structure into multiple segments. We
represent the tubular shape as a truncated signed distance function (TSDF) in a
voxel hashing manner, in which the signed distance between a voxel center and
the object is computed through a simple geometric algorithm. Our method does
not involve any surface sampling scheme or solving large matrix equations, and
therefore is a faster and more elegant solution for tubular shape
reconstruction compared to other approaches. Experiments demonstrate the
efficiency and effectiveness of the proposed method. Code is available at
https://github.com/wlsdzyzl/Dragon. | [
"cs.CV",
"cs.CG"
] | false |
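For a single skeletal segment, the signed distance from a voxel center to the tube reduces to the classic point-to-capsule distance, the kind of simple geometric primitive the abstract alludes to. A minimal NumPy sketch, assuming the skeleton is given as (endpoint, endpoint, radius) triples; the function names and the truncation value are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def signed_distance_to_capsule(p, a, b, r):
    """Signed distance from point p to a tubular segment (capsule) with
    endpoints a, b and radius r. Negative values are inside the tube."""
    ab = b - a
    # Project p onto the segment, clamping to the endpoints.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return np.linalg.norm(p - closest) - r

def truncated_sdf(p, segments, trunc=0.05):
    """TSDF value at voxel center p: the minimum signed distance over all
    skeletal segments, truncated to [-trunc, trunc]."""
    d = min(signed_distance_to_capsule(p, a, b, r) for a, b, r in segments)
    return float(np.clip(d, -trunc, trunc))

# Toy skeleton: one segment along the x-axis with radius 0.1.
segments = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 0.1)]
print(truncated_sdf(np.array([0.5, 0.05, 0.0]), segments))  # inside: negative
print(truncated_sdf(np.array([0.5, 0.30, 0.0]), segments))  # outside: truncated
```

In a voxel-hashing setup, only voxels within the truncation band of some segment would be allocated and assigned such values.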
2402.12800 | 2024-02-20T08:19:30Z | Radar-Based Recognition of Static Hand Gestures in American Sign
Language | [
"Christian Schuessler",
"Wenxuan Zhang",
"Johanna Bräunig",
"Marcel Hoffmann",
"Michael Stelzig",
"Martin Vossiek"
] | In the fast-paced field of human-computer interaction (HCI) and virtual
reality (VR), automatic gesture recognition has become increasingly essential.
This is particularly true for the recognition of hand signs, providing an
intuitive way to effortlessly navigate and control VR and HCI applications.
Considering increased privacy requirements, radar sensors emerge as a
compelling alternative to cameras. They operate effectively in low-light
conditions without capturing identifiable human details, thanks to their lower
resolution and distinct wavelength compared to visible light.
While previous works predominantly deploy radar sensors for dynamic hand
gesture recognition based on Doppler information, our approach prioritizes
classification using an imaging radar that operates on spatial information,
e.g. image-like data. However, generating large training datasets required for
neural networks (NN) is a time-consuming and challenging process, often falling
short of covering all potential scenarios. Acknowledging these challenges, this
study explores the efficacy of synthetic data generated by an advanced radar
ray-tracing simulator. This simulator employs an intuitive material model that
can be adjusted to introduce data diversity.
Despite exclusively training the NN on synthetic data, it demonstrates
promising performance when put to the test with real measurement data. This
emphasizes the practicality of our methodology in overcoming data scarcity
challenges and advancing the field of automatic gesture recognition in VR and
HCI applications. | [
"cs.CV",
"eess.SP"
] | false |
2402.12843 | 2024-02-20T09:13:11Z | SolarPanel Segmentation :Self-Supervised Learning for Imperfect Datasets | [
"Sankarshanaa Sagaram",
"Aditya Kasliwal",
"Krish Didwania",
"Laven Srivastava",
"Pallavi Kailas",
"Ujjwal Verma"
] | The increasing adoption of solar energy necessitates advanced methodologies
for monitoring and maintenance to ensure optimal performance of solar panel
installations. A critical component in this context is the accurate
segmentation of solar panels from aerial or satellite imagery, which is
essential for identifying operational issues and assessing efficiency. This
paper addresses the significant challenges in panel segmentation, particularly
the scarcity of annotated data and the labour-intensive nature of manual
annotation for supervised learning. We explore and apply Self-Supervised
Learning (SSL) to solve these challenges. We demonstrate that SSL significantly
enhances model generalization under various conditions and reduces dependency
on manually annotated data, paving the way for robust and adaptable solar panel
segmentation solutions. | [
"cs.CV",
"cs.AI"
] | false |
2402.12844 | 2024-02-20T09:13:15Z | ICON: Improving Inter-Report Consistency of Radiology Report Generation
via Lesion-aware Mix-up Augmentation | [
"Wenjun Hou",
"Yi Cheng",
"Kaishuai Xu",
"Yan Hu",
"Wenjie Li",
"Jiang Liu"
] | Previous research on radiology report generation has made significant
progress in terms of increasing the clinical accuracy of generated reports. In
this paper, we emphasize another crucial quality that generated reports should possess, i.e.,
inter-report consistency, which refers to the capability of generating
consistent reports for semantically equivalent radiographs. This quality is
even of greater significance than the overall report accuracy in terms of
ensuring the system's credibility, as a system prone to providing conflicting
results would severely erode users' trust. Regrettably, existing approaches
struggle to maintain inter-report consistency, exhibiting biases towards common
patterns and susceptibility to lesion variants. To address this issue, we
propose ICON, which improves the inter-report consistency of radiology report
generation. Aiming at enhancing the system's ability to capture the
similarities in semantically equivalent lesions, our approach involves first
extracting lesions from input images and examining their characteristics. Then,
we introduce a lesion-aware mix-up augmentation technique to ensure that the
representations of the semantically equivalent lesions align with the same
attributes, by linearly interpolating them during the training phase. Extensive
experiments on three publicly available chest X-ray datasets verify the
effectiveness of our approach, both in terms of improving the consistency and
accuracy of the generated reports. | [
"cs.CV",
"cs.CL"
] | false |
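The abstract does not spell out the interpolation, but the core operation behind any mix-up scheme is a convex combination of two representations. A minimal PyTorch sketch, assuming paired features of semantically equivalent lesions have already been extracted (ICON's pairing logic and training losses are omitted; names are illustrative):

```python
import torch

def lesion_mixup(feat_a, feat_b, alpha=0.4):
    """Linearly interpolate two lesion representations that are assumed to be
    semantically equivalent, encouraging them to share the same attributes.
    A minimal sketch; the paper's actual augmentation is richer."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * feat_a + (1.0 - lam) * feat_b

feat_a = torch.randn(8, 256)  # lesion features from image A
feat_b = torch.randn(8, 256)  # matched lesion features from image B
mixed = lesion_mixup(feat_a, feat_b)
print(mixed.shape)  # torch.Size([8, 256])
```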
2402.12846 | 2024-02-20T09:20:30Z | ConVQG: Contrastive Visual Question Generation with Multimodal Guidance | [
"Li Mi",
"Syrielle Montariol",
"Javiera Castillo-Navarro",
"Xianjie Dai",
"Antoine Bosselut",
"Devis Tuia"
] | Asking questions about visual environments is a crucial way for intelligent
agents to understand rich multi-faceted scenes, raising the importance of
Visual Question Generation (VQG) systems. Apart from being grounded to the
image, existing VQG systems can use textual constraints, such as expected
answers or knowledge triplets, to generate focused questions. These constraints
allow VQG systems to specify the question content or leverage external
commonsense knowledge that cannot be obtained from the image content alone.
However, generating focused questions using textual constraints while enforcing
a high relevance to the image content remains a challenge, as VQG systems often
ignore one or both forms of grounding. In this work, we propose Contrastive
Visual Question Generation (ConVQG), a method using a dual contrastive
objective to discriminate questions generated using both modalities from those
based on a single one. Experiments on both knowledge-aware and standard VQG
benchmarks demonstrate that ConVQG outperforms the state-of-the-art methods and
generates image-grounded, text-guided, and knowledge-rich questions. Our human
evaluation results also show preference for ConVQG questions compared to
non-contrastive baselines. | [
"cs.CV",
"cs.AI"
] | false |
2402.13007 | 2024-02-20T13:42:36Z | Improve Cross-Architecture Generalization on Dataset Distillation | [
"Binglin Zhou",
"Linhao Zhong",
"Wentao Chen"
] | Dataset distillation, a pragmatic approach in machine learning, aims to
create a smaller synthetic dataset from a larger existing dataset. However,
existing distillation methods primarily adopt a model-based paradigm, where the
synthetic dataset inherits model-specific biases, limiting its generalizability
to alternative models. In response to this constraint, we propose a novel
methodology termed "model pool". This approach involves selecting models from a
diverse model pool based on a specific probability distribution during the data
distillation process. Additionally, we integrate our model pool with the
established knowledge distillation approach and apply knowledge distillation to
the test process of the distilled dataset. Our experimental results validate
the effectiveness of the model pool approach across a range of existing models
while testing, demonstrating superior performance compared to existing
methodologies. | [
"cs.LG",
"cs.CV"
] | false |
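The model-pool idea can be illustrated in a few lines of PyTorch: at each distillation step, an architecture is drawn from a pool according to a fixed probability distribution, so the synthetic data cannot overfit any single architecture's biases. The pool contents and probabilities below are illustrative, not the paper's configuration:

```python
import random
import torch.nn as nn
from torchvision import models

# Hypothetical model pool: architecture builders and sampling probabilities.
POOL = [
    (lambda: models.resnet18(num_classes=10), 0.5),
    (lambda: models.vgg11(num_classes=10), 0.3),
    (lambda: models.mobilenet_v3_small(num_classes=10), 0.2),
]

def sample_model() -> nn.Module:
    """Draw one architecture from the pool according to the given
    probability distribution, as in the model-pool distillation idea."""
    builders, probs = zip(*POOL)
    return random.choices(builders, weights=probs, k=1)[0]()

# Inside the distillation loop, a fresh model would be sampled each step.
model = sample_model()
print(type(model).__name__)
```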
2402.13144 | 2024-02-20T16:59:03Z | Neural Network Diffusion | [
"Kai Wang",
"Zhaopan Xu",
"Yukun Zhou",
"Zelin Zang",
"Trevor Darrell",
"Zhuang Liu",
"Yang You"
] | Diffusion models have achieved remarkable success in image and video
generation. In this work, we demonstrate that diffusion models can also
\textit{generate high-performing neural network parameters}. Our approach is
simple, utilizing an autoencoder and a standard latent diffusion model. The
autoencoder extracts latent representations of a subset of the trained network
parameters. A diffusion model is then trained to synthesize these latent
parameter representations from random noise. It then generates new
representations that are passed through the autoencoder's decoder, whose
outputs are ready to use as new subsets of network parameters. Across various
architectures and datasets, our diffusion process consistently generates models
of comparable or improved performance over trained networks, with minimal
additional cost. Notably, we empirically find that the generated models perform
differently from the trained networks. Our results encourage more exploration
on the versatile use of diffusion models. | [
"cs.LG",
"cs.CV"
] | true |
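A compact sketch of the two-stage recipe described above, with illustrative sizes (the paper's actual architectures are not specified in the abstract): an autoencoder compresses flattened parameter vectors, and a small denoiser is trained DDPM-style in the latent space.

```python
import torch
import torch.nn as nn

P, Z, T = 2048, 64, 100  # parameter-vector size, latent size, diffusion steps

autoencoder = nn.ModuleDict({
    "enc": nn.Sequential(nn.Linear(P, 256), nn.ReLU(), nn.Linear(256, Z)),
    "dec": nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, P)),
})
denoiser = nn.Sequential(nn.Linear(Z + 1, 256), nn.ReLU(), nn.Linear(256, Z))

betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def diffusion_loss(params_batch):
    """One DDPM training step in latent space: add noise at a random
    timestep and ask the denoiser to predict that noise."""
    z0 = autoencoder["enc"](params_batch)
    t = torch.randint(0, T, (z0.size(0),))
    eps = torch.randn_like(z0)
    ab = alpha_bar[t].unsqueeze(1)
    zt = ab.sqrt() * z0 + (1 - ab).sqrt() * eps
    eps_hat = denoiser(torch.cat([zt, t.float().unsqueeze(1) / T], dim=1))
    return nn.functional.mse_loss(eps_hat, eps)

params_batch = torch.randn(16, P)  # stand-in for flattened trained weights
print(diffusion_loss(params_batch).item())
```

At sampling time, a latent code denoised from random noise would be passed through the decoder to obtain a new subset of network parameters.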
2402.13152 | 2024-02-20T17:07:08Z | AnnoTheia: A Semi-Automatic Annotation Toolkit for Audio-Visual Speech
Technologies | [
"José-M. Acosta-Triana",
"David Gimeno-Gómez",
"Carlos-D. Martínez-Hinarejos"
] | More than 7,000 known languages are spoken around the world. However, due to
the lack of annotated resources, only a small fraction of them are currently
covered by speech technologies. Although self-supervised speech representations,
recent massive speech corpus collections, and the organization of challenges
have alleviated this inequality, most studies are still benchmarked mainly on
English. This situation is aggravated when tasks involving both
acoustic and visual speech modalities are addressed. In order to promote
research on low-resource languages for audio-visual speech technologies, we
present AnnoTheia, a semi-automatic annotation toolkit that detects when a
person speaks in the scene and provides the corresponding transcription. In addition, to
show the complete process of preparing AnnoTheia for a language of interest, we
also describe the adaptation of a pre-trained model for active speaker
detection to Spanish, using a database not initially conceived for this type of
task. The AnnoTheia toolkit, tutorials, and pre-trained models are available on
GitHub. | [
"cs.CV",
"cs.CL"
] | false |
2402.13217 | 2024-02-20T18:29:49Z | VideoPrism: A Foundational Visual Encoder for Video Understanding | [
"Long Zhao",
"Nitesh B. Gundavarapu",
"Liangzhe Yuan",
"Hao Zhou",
"Shen Yan",
"Jennifer J. Sun",
"Luke Friedman",
"Rui Qian",
"Tobias Weyand",
"Yue Zhao",
"Rachel Hornung",
"Florian Schroff",
"Ming-Hsuan Yang",
"David A. Ross",
"Huisheng Wang",
"Hartwig Adam",
"Mikhail Sirotenko",
"Ting Liu",
"Boqing Gong"
] | We introduce VideoPrism, a general-purpose video encoder that tackles diverse
video understanding tasks with a single frozen model. We pretrain VideoPrism on
a heterogeneous corpus containing 36M high-quality video-caption pairs and 582M
video clips with noisy parallel text (e.g., ASR transcripts). The pretraining
approach improves upon masked autoencoding by global-local distillation of
semantic video embeddings and a token shuffling scheme, enabling VideoPrism to
focus primarily on the video modality while leveraging the invaluable text
associated with videos. We extensively test VideoPrism on four broad groups of
video understanding tasks, from web video question answering to CV for science,
achieving state-of-the-art performance on 30 out of 33 video understanding
benchmarks. | [
"cs.CV",
"cs.AI"
] | true |
2402.13220 | 2024-02-20T18:31:27Z | How Easy is It to Fool Your Multimodal LLMs? An Empirical Analysis on
Deceptive Prompts | [
"Yusu Qian",
"Haotian Zhang",
"Yinfei Yang",
"Zhe Gan"
] | The remarkable advancements in Multimodal Large Language Models (MLLMs) have
not rendered them immune to challenges, particularly in the context of handling
deceptive information in prompts, thus producing hallucinated responses under
such conditions. To quantitatively assess this vulnerability, we present
MAD-Bench, a carefully curated benchmark that contains 850 test samples divided
into 6 categories, such as non-existent objects, count of objects, spatial
relationship, and visual confusion. We provide a comprehensive analysis of
popular MLLMs, ranging from GPT-4V, Gemini-Pro, to open-sourced models, such as
LLaVA-1.5 and CogVLM. Empirically, we observe significant performance gaps
between GPT-4V and other models; and previous robust instruction-tuned models,
such as LRV-Instruction and LLaVA-RLHF, are not effective on this new
benchmark. While GPT-4V achieves 75.02% accuracy on MAD-Bench, the accuracy of
any other model in our experiments ranges from 5% to 35%. We further propose a
remedy that adds an additional paragraph to the deceptive prompts to encourage
models to think twice before answering the question. Surprisingly, this simple
method can even double the accuracy; however, the absolute numbers are still
too low to be satisfactory. We hope MAD-Bench can serve as a valuable benchmark
to stimulate further research to enhance models' resilience against deceptive
prompts. | [
"cs.CV",
"cs.CL"
] | true |
2402.13232 | 2024-02-20T18:47:56Z | A Touch, Vision, and Language Dataset for Multimodal Alignment | [
"Letian Fu",
"Gaurav Datta",
"Huang Huang",
"William Chung-Ho Panitch",
"Jaimyn Drake",
"Joseph Ortiz",
"Mustafa Mukadam",
"Mike Lambeta",
"Roberto Calandra",
"Ken Goldberg"
] | Touch is an important sensing modality for humans, but it has not yet been
incorporated into a multimodal generative language model. This is partially due
to the difficulty of obtaining natural language labels for tactile data and the
complexity of aligning tactile readings with both visual observations and
language descriptions. As a step towards bridging that gap, this work
introduces a new dataset of 44K in-the-wild vision-touch pairs, with English
language labels annotated by humans (10%) and textual pseudo-labels from GPT-4V
(90%). We use this dataset to train a vision-language-aligned tactile encoder
for open-vocabulary classification and a touch-vision-language (TVL) model for
text generation using the trained encoder. Results suggest that by
incorporating touch, the TVL model improves (+29% classification accuracy)
touch-vision-language alignment over existing models trained on any pair of
those modalities. Although only a small fraction of the dataset is
human-labeled, the TVL model demonstrates improved visual-tactile understanding
over GPT-4V (+12%) and open-source vision-language models (+32%) on a new
touch-vision understanding benchmark. Code and data:
https://tactile-vlm.github.io. | [
"cs.CV",
"cs.RO"
] | true |
2402.13243 | 2024-02-20T18:55:09Z | VADv2: End-to-End Vectorized Autonomous Driving via Probabilistic
Planning | [
"Shaoyu Chen",
"Bo Jiang",
"Hao Gao",
"Bencheng Liao",
"Qing Xu",
"Qian Zhang",
"Chang Huang",
"Wenyu Liu",
"Xinggang Wang"
] | Learning a human-like driving policy from large-scale driving demonstrations
is promising, but the uncertainty and non-deterministic nature of planning make
it challenging. In this work, to cope with the uncertainty problem, we propose
VADv2, an end-to-end driving model based on probabilistic planning. VADv2 takes
multi-view image sequences as input in a streaming manner, transforms sensor
data into environmental token embeddings, outputs the probabilistic
distribution of action, and samples one action to control the vehicle. Only
with camera sensors, VADv2 achieves state-of-the-art closed-loop performance on
the CARLA Town05 benchmark, significantly outperforming all existing methods.
It runs stably in a fully end-to-end manner, even without the rule-based
wrapper. Closed-loop demos are presented at https://hgao-cv.github.io/VADv2. | [
"cs.CV",
"cs.RO"
] | false |
2402.13306 | 2024-02-20T18:58:23Z | Vision System Prototype for Inspection and Monitoring with a Smart
Camera | [
"Efren Hernández-Molina",
"Benjamin Ojeda-Magaña",
"Jose Guadalupe Robledo-Hernández",
"Ruben Ruelas"
] | This paper presents the design of an artificial vision system prototype for
automatic inspection and monitoring of objects on a conveyor belt using a 2D
BOA-INS smart camera. The prototype consists of a conveyor belt and an embedded
system based on an Arduino Mega board for system control; its main peripherals
are the smart camera, a direct-current motor, a photoelectric sensor, LED
illumination, and LEDs indicating the status (good or defective) of each
evaluated object. The prototype is intended for educational purposes, so that
undergraduate, master's, and diploma students can simulate a continuous
production line controlled by an embedded system and perform quality control by
monitoring through a visual system and a personal computer. This allows
implementation of topics in embedded systems, artificial vision, artificial
intelligence, pattern recognition, and automatic control, as well as the
automation of real processes. | [
"cs.CV",
"eess.IV",
"I.4.8"
] | false |
2402.13368 | 2024-02-20T20:48:00Z | Unsupervised Concept Discovery Mitigates Spurious Correlations | [
"Md Rifat Arefin",
"Yan Zhang",
"Aristide Baratin",
"Francesco Locatello",
"Irina Rish",
"Dianbo Liu",
"Kenji Kawaguchi"
] | Models prone to spurious correlations in training data often produce brittle
predictions and introduce unintended biases. Addressing this challenge
typically involves methods relying on prior knowledge and group annotation to
remove spurious correlations, which may not be readily available in many
applications. In this paper, we establish a novel connection between
unsupervised object-centric learning and mitigation of spurious correlations.
Instead of directly inferring sub-groups with varying correlations with labels,
our approach focuses on discovering concepts: discrete ideas that are shared
across input samples. Leveraging existing object-centric representation
learning, we introduce CoBalT: a concept balancing technique that effectively
mitigates spurious correlations without requiring human labeling of subgroups.
Evaluation across the Waterbirds, CelebA, and ImageNet-9 benchmark datasets for
subpopulation shifts demonstrates superior or competitive performance compared
to state-of-the-art baselines, without the need for group annotation. | [
"cs.LG",
"cs.CV"
] | false |
2402.12627 | 2024-02-20T01:16:01Z | A Comprehensive Review of Machine Learning Advances on Data Change: A
Cross-Field Perspective | [
"Jeng-Lin Li",
"Chih-Fan Hsu",
"Ming-Ching Chang",
"Wei-Chao Chen"
] | Recent artificial intelligence (AI) technologies show remarkable evolution in
various academic fields and industries. However, in the real world, dynamic
data lead to principal challenges for deploying AI models. An unexpected data
change brings about severe performance degradation in AI models. We identify
two major related research fields, domain shift and concept drift according to
the setting of the data change. Although these two popular research fields aim
to solve distribution shift and non-stationary data stream problems, the
underlying properties remain similar, which also encourages similar technical
approaches. In this review, we regroup domain shift and concept drift into a
single research problem, namely the data change problem, with a systematic
overview of state-of-the-art methods in the two research fields. We propose a
three-phase problem categorization scheme to link the key ideas in the two
technical fields. We thus provide a novel scope for researchers to explore
contemporary technical strategies, learn industrial applications, and identify
future directions for addressing data change challenges. | [
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
2402.12683 | 2024-02-20T03:14:47Z | TorchCP: A Library for Conformal Prediction based on PyTorch | [
"Hongxin Wei",
"Jianguo Huang"
] | TorchCP is a Python toolbox for conformal prediction research on deep
learning models. It contains various implementations of post-hoc and training
methods for classification and regression tasks (including multi-dimensional
output). TorchCP is built on PyTorch (Paszke et al., 2019) and leverages the
advantages of matrix computation to provide concise and efficient inference
implementations. The code is licensed under the LGPL license and is
open-sourced at $\href{https://github.com/ml-stat-Sustech/TorchCP}{\text{this
https URL}}$. | [
"cs.LG",
"cs.CV",
"math.ST",
"stat.TH"
] | false |
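For readers unfamiliar with the underlying technique, split conformal prediction fits in a few lines. The sketch below is generic NumPy and deliberately does not use TorchCP's own API; consult the library's documentation for its actual interfaces.

```python
import numpy as np

def split_conformal(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Generic split conformal prediction for classification: calibrate a
    threshold on held-out softmax scores, then return prediction sets with
    1 - alpha marginal coverage. Not TorchCP's API."""
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=500)   # stand-in calibration scores
cal_labels = rng.integers(0, 5, size=500)
test_probs = rng.dirichlet(np.ones(5), size=3)
for pred_set in split_conformal(cal_probs, cal_labels, test_probs):
    print(pred_set)  # set of class indices per test point
```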
2402.12750 | 2024-02-20T06:38:10Z | Model Composition for Multimodal Large Language Models | [
"Chi Chen",
"Yiyang Du",
"Zheng Fang",
"Ziyue Wang",
"Fuwen Luo",
"Peng Li",
"Ming Yan",
"Ji Zhang",
"Fei Huang",
"Maosong Sun",
"Yang Liu"
] | Recent developments in Multimodal Large Language Models (MLLMs) have shown
rapid progress, moving towards the goal of creating versatile MLLMs that
understand inputs from various modalities. However, existing methods typically
rely on joint training with paired multimodal instruction data, which is
resource-intensive and challenging to extend to new modalities. In this paper,
we propose a new paradigm through the model composition of existing MLLMs to
create a new model that retains the modal understanding capabilities of each
original model. Our basic implementation, NaiveMC, demonstrates the
effectiveness of this paradigm by reusing modality encoders and merging LLM
parameters. Furthermore, we introduce DAMC to address parameter interference
and mismatch issues during the merging process, thereby enhancing the model
performance. To facilitate research in this area, we propose MCUB, a benchmark
for assessing the ability of MLLMs to understand inputs from diverse modalities.
Experiments on this benchmark and four other multimodal understanding tasks
show significant improvements over baselines, proving that model composition
can create a versatile model capable of processing inputs from multiple
modalities. | [
"cs.CV",
"cs.AI",
"cs.CL"
] | false |
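The parameter-merging step behind a NaiveMC-style composition can be sketched as a weighted average of state dicts over the shared LLM weights, while each model keeps its own modality encoder. This is a simplification: DAMC's handling of parameter interference is omitted, and the toy shapes below are illustrative.

```python
import torch

def merge_llm_parameters(state_dicts, weights=None):
    """Average the shared LLM weights of several models (a sketch in the
    spirit of NaiveMC, not the paper's exact procedure)."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

# Toy example with two "LLMs" sharing the same architecture.
sd_a = {"layer.weight": torch.ones(4, 4), "layer.bias": torch.zeros(4)}
sd_b = {"layer.weight": 3 * torch.ones(4, 4), "layer.bias": torch.ones(4)}
merged = merge_llm_parameters([sd_a, sd_b])
print(merged["layer.weight"][0, 0].item())  # 2.0
```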
2402.12760 | 2024-02-20T06:58:49Z | A User-Friendly Framework for Generating Model-Preferred Prompts in
Text-to-Image Synthesis | [
"Nailei Hei",
"Qianyu Guo",
"Zihao Wang",
"Yan Wang",
"Haofen Wang",
"Wenqiang Zhang"
] | Well-designed prompts have demonstrated the potential to guide text-to-image
models in generating amazing images. Although existing prompt engineering
methods can provide high-level guidance, it is challenging for novice users to
achieve the desired results by manually entering prompts due to a discrepancy
between novice-user-input prompts and the model-preferred prompts. To bridge
the distribution gap between user input behavior and model training datasets,
we first construct a novel Coarse-Fine Granularity Prompts dataset (CFP) and
propose a novel User-Friendly Fine-Grained Text Generation framework (UF-FGTG)
for automated prompt optimization. For CFP, we construct a novel dataset for
text-to-image tasks that combines coarse and fine-grained prompts to facilitate
the development of automated prompt generation methods. For UF-FGTG, we propose
a novel framework that automatically translates user-input prompts into
model-preferred prompts. Specifically, we propose a prompt refiner that
continually rewrites prompts to empower users to select results that align with
their unique needs. Meanwhile, we integrate image-related loss functions from
the text-to-image model into the training process of text generation to
generate model-preferred prompts. Additionally, we propose an adaptive feature
extraction module to ensure diversity in the generated results. Experiments
demonstrate that our approach is capable of generating more visually appealing
and diverse images than previous state-of-the-art methods, achieving an average
improvement of 5% across six quality and aesthetic metrics. | [
"cs.MM",
"cs.AI",
"cs.CV"
] | false |
2402.13251 | 2024-02-20T18:59:00Z | FlashTex: Fast Relightable Mesh Texturing with LightControlNet | [
"Kangle Deng",
"Timothy Omernick",
"Alexander Weiss",
"Deva Ramanan",
"Jun-Yan Zhu",
"Tinghui Zhou",
"Maneesh Agrawala"
] | Manually creating textures for 3D meshes is time-consuming, even for expert
visual content creators. We propose a fast approach for automatically texturing
an input 3D mesh based on a user-provided text prompt. Importantly, our
approach disentangles lighting from surface material/reflectance in the
resulting texture so that the mesh can be properly relit and rendered in any
lighting environment. We introduce LightControlNet, a new text-to-image model
based on the ControlNet architecture, which allows the specification of the
desired lighting as a conditioning image to the model. Our text-to-texture
pipeline then constructs the texture in two stages. The first stage produces a
sparse set of visually consistent reference views of the mesh using
LightControlNet. The second stage applies a texture optimization based on Score
Distillation Sampling (SDS) that works with LightControlNet to increase the
texture quality while disentangling surface material from lighting. Our
pipeline is significantly faster than previous text-to-texture methods, while
producing high-quality and relightable textures. | [
"cs.GR",
"cs.CV",
"cs.LG"
] | true |
2402.13353 | 2024-02-20T20:04:23Z | Combining unsupervised and supervised learning in microscopy enables
defect analysis of a full 4H-SiC wafer | [
"Binh Duong Nguyen",
"Johannes Steiner",
"Peter Wellmann",
"Stefan Sandfeld"
] | Detecting and analyzing various defect types in semiconductor materials is an
important prerequisite for understanding the underlying mechanisms as well as
tailoring the production processes. Analysis of microscopy images that reveal
defects typically requires image analysis tasks such as segmentation and object
detection. With the ever-increasing amount of data produced by experiments,
handling these tasks manually becomes increasingly infeasible. In
this work, we combine various image analysis and data mining techniques for
creating a robust and accurate, automated image analysis pipeline. This allows
for extracting the type and position of all defects in a microscopy image of a
KOH-etched 4H-SiC wafer that was stitched together from approximately 40,000
individual images. | [
"cs.CV",
"cond-mat.mtrl-sci",
"cs.LG"
] | false |
2402.13369 | 2024-02-20T20:49:22Z | The Uncanny Valley: A Comprehensive Analysis of Diffusion Models | [
"Karam Ghanem",
"Danilo Bzdok"
] | Through Diffusion Models (DMs), we have made significant advances in
generating high-quality images. Our exploration of these models delves deeply
into their core operational principles by systematically investigating key
aspects across various DM architectures: i) noise schedules, ii) samplers, and
iii) guidance. Our comprehensive examination of these models sheds light on
their hidden fundamental mechanisms, revealing the concealed foundational
elements that are essential for their effectiveness. Our analyses emphasize the
hidden key factors that determine model performance, offering insights that
contribute to the advancement of DMs. Past findings show that the configuration
of noise schedules, samplers, and guidance is vital to the quality of generated
images; however, models reach a stable level of quality across different
configurations at a remarkably similar point, revealing that the decisive
factors for optimal performance predominantly reside in the diffusion process
dynamics and the structural design of the model's network, rather than the
specifics of configuration details. Our comparative analysis reveals that
Denoising Diffusion Probabilistic Model (DDPM)-based diffusion dynamics
consistently outperform the Noise Conditioned Score Network (NCSN)-based ones,
not only when evaluated in their original forms but also when made continuous
through Stochastic Differential Equation (SDE)-based implementations. | [
"cs.LG",
"cs.AI",
"cs.CV",
"I.2.10; I.4.8; I.4.5; I.4.m"
] | false |
2404.07212 | 2024-02-20T10:47:06Z | Hybrid Training of Denoising Networks to Improve the Texture Acutance of
Digital Cameras | [
"Raphaël Achddou",
"Yann Gousseau",
"Saïd Ladjal"
] | In order to evaluate the capacity of a camera to render textures properly,
the standard practice, used by classical scoring protocols, is to compute the
frequential response to a dead leaves image target, from which a texture
acutance metric is built. In this work, we propose a mixed training procedure
for image restoration neural networks, relying on both natural and synthetic
images, that yields a strong improvement of this acutance metric without
impairing fidelity terms. The feasibility of the approach is demonstrated both
on the denoising of RGB images and the full development of RAW images, opening
the path to a systematic improvement of the texture acutance of real imaging
devices. | [
"eess.IV",
"cs.AI",
"cs.CV"
] | false |
2402.13126 | 2024-02-20T16:39:23Z | VGMShield: Mitigating Misuse of Video Generative Models | [
"Yan Pang",
"Yang Zhang",
"Tianhao Wang"
] | With the rapid advancement in video generation, people can conveniently
utilize video generation models to create videos tailored to their specific
desires. Nevertheless, there are also growing concerns about their potential
misuse in creating and disseminating false information.
In this work, we introduce VGMShield: a set of three straightforward but
pioneering mitigations through the lifecycle of fake video generation. We start
from \textit{fake video detection}, trying to understand whether there is
uniqueness in generated videos and whether we can differentiate them from real
videos; then, we investigate the \textit{tracing} problem, which maps a fake
video back to a model that generates it. Towards these, we propose to leverage
pre-trained models that focus on {\it spatial-temporal dynamics} as the
backbone to identify inconsistencies in videos. Through experiments on seven
state-of-the-art open-source models, we demonstrate that current models still
cannot perfectly handle spatial-temporal relationships, and thus, we can
accomplish detection and tracing with nearly perfect accuracy.
Furthermore, anticipating future generative model improvements, we propose a
{\it prevention} method that adds invisible perturbations to images to make the
generated videos look unreal. Together with fake video detection and tracing,
our multi-faceted set of solutions can effectively mitigate misuse of video
generative models. | [
"cs.CR",
"cs.AI",
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
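The prevention idea, adding small perturbations that degrade downstream generation, resembles a standard projected-gradient attack against a surrogate encoder. A hedged PyTorch sketch: the surrogate network, loss, and budget below are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn

# Assumed surrogate: stands in for the video generator's conditioning encoder.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(), nn.Flatten())

def protect_image(x, eps=4 / 255, steps=10, alpha=1 / 255):
    """PGD-style perturbation that pushes the image's features away from
    their clean values, aiming to make generated videos look unreal."""
    clean_feat = encoder(x).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = -nn.functional.mse_loss(encoder(x + delta), clean_feat)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # gradient ascent on the MSE
            delta.clamp_(-eps, eps)             # keep the change invisible
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

x = torch.rand(1, 3, 64, 64)  # stand-in for a user photo
print((protect_image(x) - x).abs().max().item())  # bounded by eps
```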
2402.17775 | 2024-02-20T11:36:23Z | Wavelet Scattering Transform for Bioacoustics: Application to Watkins
Marine Mammal Sound Database | [
"Davide Carbone",
"Alessandro Licciardi"
] | Marine mammal communication is a complex field, hindered by the diversity of
vocalizations and environmental factors. The Watkins Marine Mammal Sound
Database (WMMD) is an extensive labeled dataset used in machine learning
applications. However, the methods for data preparation, preprocessing, and
classification found in the literature are quite disparate. This study first
focuses on a brief review of the state-of-the-art benchmarks on the dataset,
with an emphasis on clarifying data preparation and preprocessing methods.
Subsequently, we propose the application of the Wavelet Scattering Transform
(WST) in place of standard methods based on the Short-Time Fourier Transform
(STFT). The study also tackles a classification task using an ad-hoc deep
architecture with residual layers. We outperform the existing classification
architecture by $6\%$ in accuracy using WST and $8\%$ using Mel spectrogram
preprocessing, effectively reducing by half the number of misclassified
samples, and reaching a top accuracy of $96\%$. | [
"eess.SP",
"cs.AI",
"cs.CV",
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
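Computing scattering features for an audio clip is straightforward with the kymatio package (this assumes kymatio's torch frontend; the J and Q values below are illustrative, not the paper's configuration):

```python
import torch
from kymatio.torch import Scattering1D  # assumes the kymatio package

T = 2 ** 14                                    # number of audio samples
scattering = Scattering1D(J=8, shape=T, Q=12)  # max scale J, Q wavelets/octave

x = torch.randn(1, T)           # stand-in for a marine mammal vocalization
Sx = scattering(x)              # (batch, channels, time) scattering coefficients
Sx = torch.log1p(Sx)            # log compression before the classifier
print(Sx.shape)
```

Unlike an STFT spectrogram, these coefficients are stable to small time warps, which is one motivation for using WST on variable vocalizations.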
2402.12690 | 2024-02-20T03:37:16Z | Simpson's Paradox and the Accuracy-Fluency Tradeoff in Translation | [
"Zheng Wei Lim",
"Ekaterina Vylomova",
"Trevor Cohn",
"Charles Kemp"
] | A good translation should be faithful to the source and should respect the
norms of the target language. We address a theoretical puzzle about the
relationship between these objectives. On one hand, intuition and some prior
work suggest that accuracy and fluency should trade off against each other, and
that capturing every detail of the source can only be achieved at the cost of
fluency. On the other hand, quality assessment researchers often suggest that
accuracy and fluency are highly correlated and difficult for human raters to
distinguish (Callison-Burch et al. 2007). We show that the tension between
these views is an instance of Simpson's paradox, and that accuracy and fluency
are positively correlated at the level of the corpus but trade off at the level
of individual source segments. We further suggest that the relationship between
accuracy and fluency is best evaluated at the segment (or sentence) level, and
that the trade off between these dimensions has implications both for assessing
translation quality and developing improved MT systems. | [
"cs.CL"
] | false |
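The claimed sign flip is easy to reproduce numerically: when segments share a latent quality level but accuracy and fluency trade off within each segment, the corpus-level correlation is positive while the segment-level correlation is negative. A toy NumPy illustration (synthetic numbers, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Each "source segment" g has an overall quality level q_g shared by its
# translations; within a segment, accuracy and fluency trade off (via u).
n_groups, per_group = 50, 20
q = rng.normal(0, 2, size=n_groups).repeat(per_group)
u = rng.normal(0, 1, size=n_groups * per_group)
accuracy = q + u
fluency = q - u

# Corpus level: strongly positive correlation, driven by shared quality.
print(np.corrcoef(accuracy, fluency)[0, 1])

# Segment level: the average within-group correlation is strongly negative.
within = [np.corrcoef(accuracy[i:i + per_group], fluency[i:i + per_group])[0, 1]
          for i in range(0, n_groups * per_group, per_group)]
print(np.mean(within))
```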
2402.12691 | 2024-02-20T03:37:24Z | Tree-Planted Transformers: Large Language Models with Implicit Syntactic
Supervision | [
"Ryo Yoshida",
"Taiga Someya",
"Yohei Oseki"
] | Large Language Models (LLMs) have achieved remarkable success thanks to
scalability on large text corpora, but have drawbacks in training
efficiency. In contrast, Syntactic Language Models (SLMs) can be trained
efficiently to reach relatively high performance thanks to syntactic
supervision, but have trouble with scalability. Thus, given these complementary
advantages of LLMs and SLMs, it is necessary to develop an architecture that
integrates the scalability of LLMs with the training efficiency of SLMs, namely
Syntactic Large Language Models (SLLM). In this paper, we propose a novel
method dubbed tree-planting: implicitly "plant" trees into attention weights of
Transformer LMs to reflect syntactic structures of natural language.
Specifically, Transformer LMs trained with tree-planting will be called
Tree-Planted Transformers (TPT), which learn syntax on small treebanks via
tree-planting and then scale on large text corpora via continual learning with
syntactic scaffolding. Targeted syntactic evaluations on the SyntaxGym
benchmark demonstrated that TPTs, despite the lack of explicit syntactic
supervision, significantly outperformed various SLMs with explicit syntactic
supervision that generate hundreds of syntactic structures in parallel,
suggesting that tree-planting and TPTs are the promising foundation for SLLMs. | [
"cs.CL"
] | false |
2402.12713 | 2024-02-20T04:26:08Z | Are Large Language Models Rational Investors? | [
"Yuhang Zhou",
"Yuchen Ni",
"Xiang Liu",
"Jian Zhang",
"Sen Liu",
"Guangnan Ye",
"Hongfeng Chai"
] | Large Language Models (LLMs) are progressively being adopted in financial
analysis to harness their extensive knowledge base for interpreting complex
market data and trends. However, their application in the financial domain is
challenged by intrinsic biases (i.e., risk-preference bias) and a superficial
grasp of market intricacies, underscoring the need for a thorough assessment of
their financial insight. This study introduces a novel framework, Financial
Bias Indicators (FBI), to critically evaluate the financial rationality of
LLMs, focusing on their ability to discern and navigate the subtleties of
financial information and to identify any irrational biases that might skew
market analysis.
Our research adopts an innovative methodology to measure financial
rationality, integrating principles of behavioral finance to scrutinize the
biases and decision-making patterns of LLMs. We conduct a comprehensive
evaluation of 19 leading LLMs, considering factors such as model scale,
training datasets, input strategies, etc. The findings reveal varying degrees
of financial irrationality among the models, influenced by their design and
training. Models trained specifically on financial datasets might exhibit
greater irrationality, and it's possible that even larger financial language
models (FinLLMs) could display more biases than smaller, more generalized
models. These outcomes provide profound insights into how these elements affect
the financial rationality of LLMs, indicating that targeted training and
structured input methods could improve model performance. This work enriches
our understanding of LLMs' strengths and weaknesses in financial applications,
laying the groundwork for the development of more dependable and rational
financial analysis tools. | [
"cs.CL"
] | false |
2402.12770 | 2024-02-20T07:20:03Z | Acknowledgment of Emotional States: Generating Validating Responses for
Empathetic Dialogue | [
"Zi Haur Pang",
"Yahui Fu",
"Divesh Lala",
"Keiko Ochi",
"Koji Inoue",
"Tatsuya Kawahara"
] | In the realm of human-AI dialogue, the facilitation of empathetic responses
is important. Validation is one of the key communication techniques in
psychology, which entails recognizing, understanding, and acknowledging others'
emotional states, thoughts, and actions. This study introduces the first
framework designed to engender empathetic dialogue with validating responses.
Our approach incorporates a tripartite module system: 1) validation timing
detection, 2) users' emotional state identification, and 3) validating response
generation. Utilizing Japanese EmpatheticDialogues dataset - a textual-based
dialogue dataset consisting of 8 emotional categories from Plutchik's wheel of
emotions - the Task Adaptive Pre-Training (TAPT) BERT-based model outperforms
both a random baseline and ChatGPT, in terms of F1-score, in all
modules. Further validation of our model's efficacy is confirmed in its
application to the TUT Emotional Storytelling Corpus (TESC), a speech-based
dialogue dataset, where it surpasses both the random baseline and ChatGPT. This
consistent performance across both textual and speech-based dialogues
underscores the effectiveness of our framework in fostering empathetic human-AI
communication. | [
"cs.CL"
] | false |
2402.12801 | 2024-02-20T08:20:49Z | Few shot clinical entity recognition in three languages: Masked language
models outperform LLM prompting | [
"Marco Naguib",
"Xavier Tannier",
"Aurélie Névéol"
] | Large Language Models are becoming the go-to solution for many natural
language processing tasks, including in specialized domains where their
few-shot capacities are expected to yield high performance in low-resource
settings. Herein, we aim to assess the performance of Large Language Models for
few shot clinical entity recognition in multiple languages. We evaluate named
entity recognition in English, French and Spanish using 8 in-domain (clinical)
and 6 out-domain gold standard corpora. We assess the performance of 10
auto-regressive language models using prompting and 16 masked language models
used for text encoding in a biLSTM-CRF supervised tagger. We create a few-shot
set-up by limiting the amount of annotated data available to 100 sentences. Our
experiments show that although larger prompt-based models tend to achieve
competitive F-measure for named entity recognition outside the clinical domain,
this level of performance does not carry over to the clinical domain where
lighter supervised taggers relying on masked language models perform better,
even with the performance drop incurred from the few-shot set-up. In all
experiments, the CO2 impact of masked language models is inferior to that of
auto-regressive models. Results are consistent over the three languages and
suggest that few-shot learning using Large language models is not production
ready for named entity recognition in the clinical domain. Instead, models
could be used to speed up the production of gold standard annotated data. | [
"cs.CL"
] | false |
2402.12806 | 2024-02-20T08:27:05Z | SymBa: Symbolic Backward Chaining for Multi-step Natural Language
Reasoning | [
"Jinu Lee",
"Wonseok Hwang"
] | Large Language Models (LLMs) have recently demonstrated remarkable reasoning
ability as in Chain-of-thought prompting, but faithful multi-step reasoning
remains a challenge. We specifically focus on backward chaining, where the
query is recursively decomposed using logical rules until proven. To address
the limitations of current backward chaining implementations, we propose SymBa
(Symbolic Backward Chaining). In SymBa, the symbolic top-down solver controls
the entire proof process and the LLM is called to generate a single reasoning
step only when the solver encounters a dead end. By this novel solver-LLM
integration, while being able to produce an interpretable, structured proof,
SymBa achieves significant improvement in performance, proof faithfulness, and
efficiency in diverse multi-step reasoning benchmarks (ProofWriter,
Birds-Electricity, GSM8k, CLUTRR-TF, ECtHR Article 6) compared to backward
chaining baselines. | [
"cs.CL"
] | false |
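The solver-LLM division of labor can be illustrated with a toy backward-chaining solver that only consults the LLM at dead ends. The rule base, the tiny unifier, and the stubbed llm_step below are hypothetical simplifications of SymBa, not its actual implementation:

```python
# Toy backward chaining: a symbolic solver drives the proof and calls an
# LLM only when no rule or fact can establish the current goal.
RULES = {"mortal(X)": ["human(X)"]}   # hypothetical rule base
FACTS = {"human(socrates)"}

def unify(goal, head):
    """Very small unifier for patterns like pred(X) vs pred(constant)."""
    gp, ga = goal.split("(")
    hp, _ = head.split("(")
    return ga.rstrip(")") if gp == hp else None

def llm_step(goal):
    """Stub for the single-step LLM call; a real system would prompt an
    LLM to propose one rule or fact that could establish `goal`."""
    return None  # here the LLM finds nothing, so the goal fails

def prove(goal):
    if goal in FACTS:
        return True
    for head, body in RULES.items():
        binding = unify(goal, head)
        if binding is not None and all(prove(b.replace("X", binding))
                                       for b in body):
            return True
    return llm_step(goal) is not None  # dead end: ask the LLM

print(prove("mortal(socrates)"))  # True via the symbolic rule plus the fact
```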
2402.12840 | 2024-02-20T09:07:41Z | ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic | [
"Fajri Koto",
"Haonan Li",
"Sara Shatnawi",
"Jad Doughman",
"Abdelrahman Boda Sadallah",
"Aisha Alraeesi",
"Khalid Almubarak",
"Zaid Alyafeai",
"Neha Sengupta",
"Shady Shehata",
"Nizar Habash",
"Preslav Nakov",
"Timothy Baldwin"
] | The focus of language model evaluation has transitioned towards reasoning and
knowledge-intensive tasks, driven by advancements in pretraining large models.
While state-of-the-art models are partially trained on large Arabic texts,
evaluating their performance in Arabic remains challenging due to the limited
availability of relevant datasets. To bridge this gap, we present ArabicMMLU,
the first multi-task language understanding benchmark for the Arabic language,
sourced from school exams across diverse educational levels in different
countries spanning North Africa, the Levant, and the Gulf regions. Our data
comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard
Arabic (MSA), and is carefully constructed by collaborating with native
speakers in the region. Our comprehensive evaluations of 35 models reveal
substantial room for improvement, particularly among the best open-source
models. Notably, BLOOMZ, mT0, LLama2, and Falcon struggle to achieve a score of
50%, while even the top-performing Arabic-centric model only achieves a score
of 62.3%. | [
"cs.CL"
] | false |
2402.12851 | 2024-02-20T09:30:48Z | MoELoRA: Contrastive Learning Guided Mixture of Experts on
Parameter-Efficient Fine-Tuning for Large Language Models | [
"Tongxu Luo",
"Jiahe Lei",
"Fangyu Lei",
"Weihao Liu",
"Shizhu He",
"Jun Zhao",
"Kang Liu"
] | Fine-tuning is often necessary to enhance the adaptability of Large Language
Models (LLM) to downstream tasks. Nonetheless, the process of updating billions
of parameters demands significant computational resources and training time,
which poses a substantial obstacle to the widespread application of large-scale
models in various scenarios. To address this issue, Parameter-Efficient
Fine-Tuning (PEFT) has emerged as a prominent paradigm in recent research.
However, current PEFT approaches that employ a limited set of global parameters
(such as LoRA, which adds low-rank approximation matrices to all weights) face
challenges in flexibly combining different computational modules in downstream
tasks. In this work, we introduce a novel PEFT method: MoELoRA. We consider
LoRA as Mixture of Experts (MoE), and to mitigate the random routing phenomenon
observed in MoE, we propose using contrastive learning to
encourage experts to learn distinct features. We conducted experiments on 11
tasks in math reasoning and common-sense reasoning benchmarks. With the same
number of parameters, our approach outperforms LoRA significantly. In math
reasoning, MoELoRA achieved an average performance that was 4.2% higher than
LoRA, and demonstrated competitive performance compared to the 175B GPT-3.5 on
several benchmarks. | [
"cs.CL"
] | false |
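Treating LoRA as a mixture of experts amounts to attaching several low-rank adapters to a frozen linear layer and routing between them. A self-contained PyTorch sketch (the contrastive expert loss and the paper's exact routing are omitted; sizes are illustrative):

```python
import torch
import torch.nn as nn

class MoELoRALinear(nn.Module):
    """A frozen linear layer augmented with several LoRA experts and a
    router: a sketch of the MoELoRA idea, not the paper's implementation."""
    def __init__(self, base: nn.Linear, n_experts=4, rank=8, top_k=2):
        super().__init__()
        self.base = base.requires_grad_(False)
        d_in, d_out = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(n_experts, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, rank, d_out))
        self.router = nn.Linear(d_in, n_experts)
        self.top_k = top_k

    def forward(self, x):                         # x: (batch, d_in)
        gate = self.router(x).softmax(dim=-1)     # (batch, n_experts)
        topv, topi = gate.topk(self.top_k, dim=-1)
        topv = topv / topv.sum(dim=-1, keepdim=True)
        # Per-expert low-rank updates: (batch, n_experts, d_out)
        delta = torch.einsum("bi,eir,ero->beo", x, self.A, self.B)
        idx = topi.unsqueeze(-1).expand(-1, -1, delta.size(-1))
        chosen = delta.gather(1, idx)             # keep only routed experts
        return self.base(x) + (topv.unsqueeze(-1) * chosen).sum(dim=1)

layer = MoELoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```

The contrastive objective described in the abstract would be an additional loss pushing the outputs of different experts apart so that routing becomes less random.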
2402.12862 | 2024-02-20T09:53:38Z | Handling Ambiguity in Emotion: From Out-of-Domain Detection to
Distribution Estimation | [
"Wen Wu",
"Bo Li",
"Chao Zhang",
"Chung-Cheng Chiu",
"Qiujia Li",
"Junwen Bai",
"Tara N. Sainath",
"Philip C. Woodland"
] | The subjective perception of emotion leads to inconsistent labels from human
annotators. Typically, utterances lacking majority-agreed labels are excluded
when training an emotion classifier, which causes problems when encountering
ambiguous emotional expressions during testing. This paper investigates three
methods to handle ambiguous emotion. First, we show that incorporating
utterances without majority-agreed labels as an additional class in the
classifier reduces the classification performance of the other emotion classes.
Then, we propose detecting utterances with ambiguous emotions as out-of-domain
samples by quantifying the uncertainty in emotion classification using
evidential deep learning. This approach retains the classification accuracy
while effectively detecting ambiguous emotional expressions. Furthermore, to obtain
fine-grained distinctions among ambiguous emotions, we propose representing
emotion as a distribution instead of a single class label. The task is thus
re-framed from classification to distribution estimation where every individual
annotation is taken into account, not just the majority opinion. The evidential
uncertainty measure is extended to quantify the uncertainty in emotion
distribution estimation. Experimental results on the IEMOCAP and CREMA-D
datasets demonstrate the superior capability of the proposed method in terms of
majority class prediction, emotion distribution estimation, and uncertainty
estimation. | [
"cs.CL"
] | false |
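The out-of-domain detection step rests on standard evidential deep learning: the classifier outputs non-negative Dirichlet evidence, and the total evidence yields a closed-form uncertainty. A minimal PyTorch sketch with illustrative sizes (the paper's extension to distribution estimation is not shown):

```python
import torch
import torch.nn as nn

K = 4  # e.g., angry / happy / sad / neutral
head = nn.Sequential(nn.Linear(128, K), nn.Softplus())  # evidence >= 0

def dirichlet_stats(features):
    """Map utterance features to an expected class distribution and a
    scalar uncertainty; low total evidence signals an ambiguous input."""
    evidence = head(features)
    alpha = evidence + 1.0                   # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / strength                 # expected class distribution
    uncertainty = K / strength.squeeze(-1)   # in (0, 1]; high = ambiguous
    return probs, uncertainty

feats = torch.randn(3, 128)                  # stand-in utterance embeddings
probs, u = dirichlet_stats(feats)
print(probs.sum(dim=-1), u)  # rows sum to 1; flag u > threshold as ambiguous
```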
2402.12880 | 2024-02-20T10:18:18Z | Autism Detection in Speech -- A Survey | [
"Nadine Probol",
"Margot Mieskes"
] | There has been a range of studies of how autism is displayed in voice,
speech, and language. We analyse studies from the biomedical, as well as the
psychological domain, but also from the NLP domain in order to find linguistic,
prosodic and acoustic cues that could indicate autism. Our survey looks at all
three domains. We define autism and which comorbidities might influence the
correct detection of the disorder. We especially look at observations such as
verbal and semantic fluency, prosodic features, but also disfluencies and
speaking rate. We also show word-based approaches and describe machine learning
and transformer-based approaches on both the audio data and the
transcripts. Lastly, we conclude that, while there is already a lot of research,
female patients seem to be severely under-researched. Also, most NLP research
focuses on traditional machine learning methods instead of transformers which
could be beneficial in this context. Additionally, we were unable to find
research combining both features from audio and transcripts. | [
"cs.CL"
] | false |
2402.12881 | 2024-02-20T10:23:00Z | GRAFFORD: A Benchmark Dataset for Testing the Knowledge of Object
Affordances of Language and Vision Models | [
"Sayantan Adak",
"Daivik Agrawal",
"Animesh Mukherjee",
"Somak Aditya"
] | We investigate the knowledge of object affordances in pre-trained language
models (LMs) and pre-trained Vision-Language models (VLMs). Transformers-based
large pre-trained language models (PTLM) learn contextual representation from
massive amounts of unlabeled text and are shown to perform impressively in
downstream NLU tasks. In parallel, a growing body of literature shows that
PTLMs fail inconsistently and non-intuitively, showing a lack of reasoning and
grounding. To take a first step toward quantifying the effect of grounding (or
lack thereof), we curate a novel and comprehensive dataset of object
affordances -- GrAFFORD, characterized by 15 affordance classes. Unlike
affordance datasets collected in vision and language domains, we annotate
in-the-wild sentences with objects and affordances. Experimental results reveal
that PTLMs exhibit limited reasoning abilities when it comes to uncommon object
affordances. We also observe that pre-trained VLMs do not necessarily capture
object affordances effectively. Through few-shot fine-tuning, we demonstrate
improvement in affordance knowledge in PTLMs and VLMs. Our research contributes
a novel dataset for language grounding tasks, and presents insights into LM
capabilities, advancing the understanding of object affordances. Codes and data
are available at https://github.com/sayantan11995/Affordance | [
"cs.CL"
] | false |
2402.12913 | 2024-02-20T11:01:39Z | OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination
Detection with Weakly Supervised Data | [
"Chengcheng Wei",
"Ze Chen",
"Songtan Fang",
"Jiarong He",
"Max Gao"
] | This paper mainly describes a unified system for hallucination detection of
LLMs, which wins the second prize in the model-agnostic track of the
SemEval-2024 Task 6, and also achieves considerable results in the model-aware
track. This task aims to detect hallucination with LLMs for three different
text-generation tasks without labeled training data. We utilize prompt
engineering and few-shot learning to verify the performance of different LLMs
on the validation data. Then we select the LLMs with better performance to
generate high-quality weakly supervised training data, which is consistent not
only across different LLMs but also across different sampling parameters of the
optimal LLM. Furthermore, we fine-tune different LLMs using the constructed
training data and find that a relatively small LLM can achieve a competitive
level of performance in
hallucination detection, when compared to the large LLMs and the prompt-based
approaches using GPT-4. | [
"cs.CL"
] | false |
2402.12940 | 2024-02-20T11:52:29Z | Normalized Orthography for Tunisian Arabic | [
"Houcemeddine Turki",
"Kawthar Ellouze",
"Hager Ben Ammar",
"Mohamed Ali Hadj Taieb",
"Imed Adel",
"Mohamed Ben Aouicha",
"Pier Luigi Farri",
"Abderrezak Bennour"
] | Tunisian Arabic (ISO 693-3: aeb) is a distinct linguistic variety native to
Tunisia, initially stemmed from the Arabic language and enriched by a multitude
of historical influences. This research introduces the "Normalized Orthography
for Tunisian Arabic" (NOTA), an adaptation of CODA* guidelines tailored for
transcribing Tunisian Arabic using the Arabic script for language resource
development purposes, with an emphasis on user-friendliness and consistency.
The updated standard seeks to address challenges related to accurately
representing the unique characteristics of Tunisian phonology and morphology.
This will be achieved by rectifying problems arising from transcriptions based
on resemblances to Modern Standard Arabic. | [
"cs.CL"
] | false |
2402.12998 | 2024-02-20T13:25:39Z | Phonotactic Complexity across Dialects | [
"Ryan Soh-Eun Shim",
"Kalvin Chang",
"David R. Mortensen"
] | Received wisdom in linguistic typology holds that if the structure of a
language becomes more complex in one dimension, it will simplify in another,
building on the assumption that all languages are equally complex (Joseph and
Newmeyer, 2012). We study this claim on a micro-level, using a
tightly-controlled sample of Dutch dialects (across 366 collection sites) and
Min dialects (across 60 sites), which enables a more fair comparison across
varieties. Even at the dialect level, we find empirical evidence for a tradeoff
between word length and a computational measure of phonotactic complexity from
an LSTM-based phone-level language model, a result previously documented only at
the language level. A generalized additive model (GAM) shows that dialects with
low phonotactic complexity concentrate around the capital regions, a pattern we
take as consistent with prior hypotheses that language varieties spoken by
larger or more diverse populations show reduced phonotactic complexity. We
also experiment with incorporating the auxiliary task of predicting syllable
constituency, but do not find an increase in the negative correlation observed. | [
"cs.CL"
] | false |
2402.13013 | 2024-02-20T13:56:38Z | Code Needs Comments: Enhancing Code LLMs with Comment Augmentation | [
"Demin Song",
"Honglin Guo",
"Yunhua Zhou",
"Shuhao Xing",
"Yudong Wang",
"Zifan Song",
"Wenwei Zhang",
"Qipeng Guo",
"Hang Yan",
"Xipeng Qiu",
"Dahua Lin"
] | The programming skill is one crucial ability for Large Language Models
(LLMs), necessitating a deep understanding of programming languages (PLs) and
their correlation with natural languages (NLs). We examine the impact of
pre-training data on code-focused LLMs' performance by assessing the comment
density as a measure of PL-NL alignment. Given the scarcity of code-comment
aligned data in pre-training corpora, we introduce a novel data augmentation
method that generates comments for existing code, coupled with a data filtering
strategy that filters out code data poorly correlated with natural language. We
conducted experiments on three code-focused LLMs and observed consistent
improvements in performance on two widely-used programming skill benchmarks.
Notably, the model trained on the augmented data outperformed both the model
used for generating comments and the model further trained on the data without
augmentation. | [
"cs.CL"
] | false |
2402.13016 | 2024-02-20T13:59:12Z | Understanding the effects of language-specific class imbalance in
multilingual fine-tuning | [
"Vincent Jung",
"Lonneke van der Plas"
] | We study the effect of one type of imbalance often present in real-life
multilingual classification datasets: an uneven distribution of labels across
languages. We show evidence that fine-tuning a transformer-based Large Language
Model (LLM) on a dataset with this imbalance leads to worse performance, a more
pronounced separation of languages in the latent space, and the promotion of
uninformative features. We modify the traditional class weighing approach to
imbalance by calculating class weights separately for each language and show
that this helps mitigate those detrimental effects. These results create
awareness of the negative effects of language-specific class imbalance in
multilingual fine-tuning and the way in which the model learns to rely on the
separation of languages to perform the task. | [
"cs.CL"
] | false |
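The proposed modification, inverse-frequency class weights computed per language rather than globally, fits in a few lines. The sketch below is a plain-Python illustration of the idea (the balanced-weight formula mirrors the common total / (n_classes * count) heuristic; it is not the paper's code):

```python
from collections import Counter

def per_language_class_weights(labels, langs):
    """Compute inverse-frequency class weights separately for each language,
    instead of one global weighting over the whole multilingual dataset."""
    weights = {}
    for lang in set(langs):
        counts = Counter(l for l, g in zip(labels, langs) if g == lang)
        total = sum(counts.values())
        weights[lang] = {c: total / (len(counts) * n)
                         for c, n in counts.items()}
    return weights

labels = ["pos", "pos", "neg", "pos", "neg", "neg", "neg"]
langs  = ["en",  "en",  "en",  "de",  "de",  "de",  "de"]
w = per_language_class_weights(labels, langs)
print(w["en"]["neg"], w["de"]["pos"])  # rare classes get larger weights
```

Each example's loss would then be scaled by the weight of its (language, class) pair, so a class that is rare only in one language is up-weighted only there.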
2402.13036 | 2024-02-20T14:23:34Z | SiLLM: Large Language Models for Simultaneous Machine Translation | [
"Shoutao Guo",
"Shaolei Zhang",
"Zhengrui Ma",
"Min Zhang",
"Yang Feng"
] | Simultaneous Machine Translation (SiMT) generates translations while reading
the source sentence, necessitating a policy to determine the optimal timing for
reading and generating words. Despite the remarkable performance achieved by
Large Language Models (LLM) across various NLP tasks, existing SiMT methods
predominantly focus on conventional transformers, employing a single model to
concurrently determine the policy and generate the translations. However, given
the complexity of SiMT, it is challenging to effectively address both tasks
with a single model. Therefore, there is a need to decouple the SiMT task into
policy-decision and translation sub-tasks. We propose SiLLM, which delegates
the two sub-tasks to separate agents, thereby incorporating LLM into SiMT. The
policy-decision agent is managed by a conventional SiMT model, responsible for
determining the translation policy. The translation agent, leveraging the
capabilities of LLM, generates translation using the partial source sentence.
The two agents collaborate to accomplish SiMT. To facilitate the application of
token-level policies determined by conventional SiMT models to LLM, we propose
a word-level policy adapted for LLM. Experiments on two datasets demonstrate
that, with a small amount of data for fine-tuning the LLM, SiLLM attains
state-of-the-art performance. | [
"cs.CL"
] | false |
2402.13048 | 2024-02-20T14:36:23Z | Stable Knowledge Editing in Large Language Models | [
"Zihao Wei",
"Liang Pang",
"Hanxing Ding",
"Jingcheng Deng",
"Huawei Shen",
"Xueqi Cheng"
] | Efficient knowledge editing of large language models is crucial for replacing
obsolete information or incorporating specialized knowledge on a large scale.
However, previous methods implicitly assume that knowledge is localized and
isolated within the model, an assumption that oversimplifies the interconnected
nature of model knowledge. The localization premise results in incomplete
knowledge editing, whereas the isolation assumption may impair both other
knowledge and general abilities, introducing instability into the performance
of the knowledge editing method. To transcend these assumptions, we introduce
StableKE, a method that adopts a novel perspective based on knowledge augmentation
rather than knowledge localization. To overcome the expense of human labeling,
StableKE integrates two automated knowledge augmentation strategies: Semantic
Paraphrase Enhancement strategy, which diversifies knowledge descriptions to
facilitate the teaching of new information to the model, and Contextual
Description Enrichment strategy, expanding the surrounding knowledge to prevent
the forgetting of related information. StableKE surpasses other knowledge
editing methods, demonstrating stability in both edited knowledge and multi-hop
knowledge, while also preserving unrelated knowledge and general abilities.
Moreover, StableKE can edit knowledge on ChatGPT. | [
"cs.CL"
] | false |
2402.13064 | 2024-02-20T15:00:35Z | Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for
Language Models | [
"Haoran Li",
"Qingxiu Dong",
"Zhengyang Tang",
"Chaojun Wang",
"Xingxing Zhang",
"Haoyang Huang",
"Shaohan Huang",
"Xiaolong Huang",
"Zeqiang Huang",
"Dongdong Zhang",
"Yuxian Gu",
"Xin Cheng",
"Xun Wang",
"Si-Qing Chen",
"Li Dong",
"Wei Lu",
"Zhifang Sui",
"Benyou Wang",
"Wai Lam",
"Furu Wei"
] | We introduce Generalized Instruction Tuning (called GLAN), a general and
scalable method for instruction tuning of Large Language Models (LLMs). Unlike
prior work that relies on seed examples or existing datasets to construct
instruction tuning data, GLAN exclusively utilizes a pre-curated taxonomy of
human knowledge and capabilities as input and generates large-scale synthetic
instruction data across all disciplines. Specifically, inspired by the
systematic structure of the human education system, we build the taxonomy by
decomposing human knowledge and capabilities to various fields, sub-fields and
ultimately, distinct disciplines semi-automatically, facilitated by LLMs.
Subsequently, we generate a comprehensive list of subjects for every discipline
and proceed to design a syllabus tailored to each subject, again utilizing
LLMs. With the fine-grained key concepts detailed in every class session of the
syllabus, we are able to generate diverse instructions with a broad coverage
across the entire spectrum of human knowledge and skills. Extensive experiments
on large language models (e.g., Mistral) demonstrate that GLAN excels in
multiple dimensions from mathematical reasoning, coding, academic exams,
logical reasoning to general instruction following without using task-specific
training data of these tasks. In addition, GLAN allows for easy customization
and new fields or skills can be added by simply incorporating a new node into
our taxonomy. | [
"cs.CL"
] | true |
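The taxonomy-to-instructions pipeline sketched in the abstract above can be pictured as a nested traversal with LLM calls at each level. In the sketch below, `llm` is a hypothetical text-completion callable and the prompts and taxonomy structure are illustrative, not the authors' artifacts.

```python
# A minimal sketch of taxonomy-driven instruction generation in the spirit of
# GLAN. `llm` is a hypothetical callable mapping a prompt string to text.
def generate_instructions(taxonomy, llm, per_concept=3):
    instructions = []
    for discipline, subjects in taxonomy.items():
        for subject in subjects:
            # One syllabus per subject, as the abstract describes.
            syllabus = llm(f"Design a syllabus of key concepts for '{subject}' "
                           f"({discipline}), one concept per line.")
            for concept in syllabus.splitlines():
                for _ in range(per_concept):
                    instructions.append(
                        llm(f"Write one self-contained task or question that "
                            f"exercises the concept '{concept.strip()}' from "
                            f"{subject}."))
    return instructions
```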
2402.13113 | 2024-02-20T16:09:49Z | When Only Time Will Tell: Interpreting How Transformers Process Local
Ambiguities Through the Lens of Restart-Incrementality | [
"Brielen Madureira",
"Patrick Kahardipraja",
"David Schlangen"
] | Incremental models that process sentences one token at a time will sometimes
encounter points where more than one interpretation is possible. Causal models
are forced to output one interpretation and continue, whereas models that can
revise may edit their previous output as the ambiguity is resolved. In this
work, we look at how restart-incremental Transformers build and update internal
states, in an effort to shed light on what processes cause revisions not viable
in autoregressive models. We propose an interpretable way to analyse the
incremental states, showing that their sequential structure encodes information
on the garden path effect and its resolution. Our method brings insights on
various bidirectional encoders for contextualised meaning representation and
dependency parsing, contributing to show their advantage over causal models
when it comes to revisions. | [
"cs.CL"
] | false |
2402.13130 | 2024-02-20T16:43:20Z | Are ELECTRA's Sentence Embeddings Beyond Repair? The Case of Semantic
Textual Similarity | [
"Ivan Rep",
"David Dukić",
"Jan Šnajder"
] | While BERT produces high-quality sentence embeddings, its pre-training
computational cost is a significant drawback. In contrast, ELECTRA delivers a
cost-effective pre-training objective and downstream task performance
improvements, but its sentence embeddings are notably less performant. The community tacitly
stopped utilizing ELECTRA's sentence embeddings for semantic textual similarity
(STS). We notice a significant drop in performance when using the ELECTRA
discriminator's last layer in comparison to earlier layers. We explore this
drop and devise a way to repair ELECTRA's embeddings, proposing a novel
truncated model fine-tuning (TMFT) method. TMFT improves the Spearman
correlation coefficient by over 8 points while increasing parameter efficiency
on the STS benchmark dataset. We extend our analysis to various model sizes and
languages. Further, we discover the surprising efficacy of ELECTRA's generator
model, which performs on par with BERT, using significantly fewer parameters
and a substantially smaller embedding size. Finally, we observe further boosts
by combining TMFT with a word similarity task or domain adaptive pre-training. | [
"cs.CL"
] | false |
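The truncation at the heart of TMFT, fine-tuning the model cut at an earlier layer rather than at its last, can be sketched with Hugging Face Transformers. The checkpoint name and cut point k below are placeholders, and the surgery reflects one reading of "truncated model fine-tuning", not necessarily the paper's exact recipe.

```python
from transformers import AutoModel

def truncate_electra(model_name="google/electra-base-discriminator", k=8):
    """Keep only the first k encoder layers; the result is then fine-tuned on
    an STS objective. k is a placeholder, not the paper's chosen layer."""
    model = AutoModel.from_pretrained(model_name)
    model.encoder.layer = model.encoder.layer[:k]  # drop the later layers
    model.config.num_hidden_layers = k             # keep the config consistent
    return model
```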
2402.13137 | 2024-02-20T16:53:26Z | The Hidden Space of Transformer Language Adapters | [
"Jesujoba O. Alabi",
"Marius Mosbach",
"Matan Eyal",
"Dietrich Klakow",
"Mor Geva"
] | We analyze the operation of transformer language adapters, which are small
modules trained on top of a frozen language model to adapt its predictions to
new target languages. We show that adapted predictions mostly evolve in the
source language the model was trained on, while the target language becomes
pronounced only in the very last layers of the model. Moreover, the adaptation
process is gradual and distributed across layers, where it is possible to skip
small groups of adapters without decreasing adaptation performance. Last, we
show that adapters operate on top of the model's frozen representation space
while largely preserving its structure, rather than on an 'isolated' subspace.
Our findings provide a deeper view into the adaptation process of language
models to new languages, showcasing the constraints imposed on it by the
underlying model, and introduce practical implications for enhancing its
efficiency. | [
"cs.CL"
] | false |
2402.13188 | 2024-02-20T17:56:24Z | Question Calibration and Multi-Hop Modeling for Temporal Question
Answering | [
"Chao Xue",
"Di Liang",
"Pengfei Wang",
"Jing Zhang"
] | Many models that leverage knowledge graphs (KGs) have recently demonstrated
remarkable success in question answering (QA) tasks. In the real world, many
facts contained in KGs are time-constrained thus temporal KGQA has received
increasing attention. Despite the fruitful efforts of previous models in
temporal KGQA, they still have several limitations. (I) They adopt pre-trained
language models (PLMs) to obtain question representations, while PLMs tend to
focus on entity information and ignore entity transfer caused by temporal
constraints, and finally fail to learn specific temporal representations of
entities. (II) They neither emphasize the graph structure between entities nor
explicitly model the multi-hop relationship in the graph, which will make it
difficult to solve complex multi-hop question answering. To alleviate these
problems, we propose a novel Question Calibration and Multi-Hop Modeling
(QC-MHM) approach. Specifically, we first calibrate the question representation
by fusing the question and the time-constrained concepts in KG. Then, we
construct the GNN layer to complete multi-hop message passing. Finally, the
question representation is combined with the embedding output by the GNN to
generate the final prediction. Empirical results verify that the proposed model
achieves better performance than the state-of-the-art models in the benchmark
dataset. Notably, the Hits@1 and Hits@10 results of QC-MHM on the CronQuestions
dataset's complex questions are absolutely improved by 5.1% and 1.2% compared
to the best-performing baseline. Moreover, QC-MHM can generate interpretable
and trustworthy predictions. | [
"cs.CL"
] | false |
2402.13211 | 2024-02-20T18:21:32Z | Can Large Language Models be Good Emotional Supporter? Mitigating
Preference Bias on Emotional Support Conversation | [
"Dongjin Kang",
"Sunghwan Kim",
"Taeyoon Kwon",
"Seungjun Moon",
"Hyunsouk Cho",
"Youngjae Yu",
"Dongha Lee",
"Jinyoung Yeo"
] | Emotional Support Conversation (ESC) is a task aimed at alleviating
individuals' emotional distress through daily conversation. Given its inherent
complexity and non-intuitive nature, the ESConv dataset incorporates support
strategies to facilitate the generation of appropriate responses. Recently,
despite the remarkable conversational ability of large language models (LLMs),
previous studies have suggested that they often struggle with providing useful
emotional support. Hence, this work initially analyzes the results of LLMs on
ESConv, revealing challenges in selecting the correct strategy and a notable
preference for a specific strategy. Motivated by these, we explore the impact
of the inherent preference in LLMs on providing emotional support, and
consequently, we observe that exhibiting high preference for specific
strategies hinders effective emotional support, degrading robustness in
predicting the appropriate strategy. Moreover, we conduct a methodological
study to offer insights into the necessary approaches for LLMs to serve as
proficient emotional supporters. Our findings emphasize that (1) low preference
for specific strategies hinders the progress of emotional support, (2) external
assistance helps reduce preference bias, and (3) LLMs alone cannot become good
emotional supporters. These insights suggest promising avenues for future
research to enhance the emotional intelligence of LLMs. | [
"cs.CL",
"I.2.7"
] | false |
2402.13222 | 2024-02-20T18:32:47Z | RoCode: A Dataset for Measuring Code Intelligence from Problem
Definitions in Romanian | [
"Adrian Cosma",
"Bogdan Iordache",
"Paolo Rosso"
] | Recently, large language models (LLMs) have become increasingly powerful and
have become capable of solving a plethora of tasks through proper instructions
in natural language. However, the vast majority of testing suites assume that
the instructions are written in English, the de facto prompting language. Code
intelligence and problem solving remain difficult tasks, even for the
most advanced LLMs. Currently, there are no datasets to measure the
generalization power for code-generation models in a language other than
English. In this work, we present RoCode, a competitive programming dataset,
consisting of 2,642 problems written in Romanian, 11k solutions in C, C++ and
Python and comprehensive testing suites for each problem. The purpose of RoCode
is to provide a benchmark for evaluating the code intelligence of language
models trained on Romanian / multilingual text as well as a fine-tuning set for
pretrained Romanian models. Through our results and review of related works, we
argue for the need to develop code models for languages other than English. | [
"cs.CL"
] | false |
2402.13253 | 2024-02-20T18:59:26Z | BiMediX: Bilingual Medical Mixture of Experts LLM | [
"Sara Pieri",
"Sahal Shaji Mullappilly",
"Fahad Shahbaz Khan",
"Rao Muhammad Anwer",
"Salman Khan",
"Timothy Baldwin",
"Hisham Cholakkal"
] | In this paper, we introduce BiMediX, the first bilingual medical mixture of
experts LLM designed for seamless interaction in both English and Arabic. Our
model facilitates a wide range of medical interactions in English and Arabic,
including multi-turn chats to inquire about additional details such as patient
symptoms and medical history, multiple-choice question answering, and
open-ended question answering. We propose a semi-automated English-to-Arabic
translation pipeline with human refinement to ensure high-quality translations.
We also introduce a comprehensive evaluation benchmark for Arabic medical LLMs.
Furthermore, we introduce BiMed1.3M, an extensive Arabic-English bilingual
instruction set covering 1.3 million diverse medical interactions, resulting in
over 632 million healthcare specialized tokens for instruction tuning. Our
BiMed1.3M dataset includes 250k synthesized multi-turn doctor-patient chats and
maintains a 1:2 Arabic-to-English ratio. Our model outperforms state-of-the-art
Med42 and Meditron by average absolute gains of 2.5% and 4.1%, respectively,
computed across multiple medical evaluation benchmarks in English, while
operating at 8-times faster inference. Moreover, our BiMediX outperforms the
generic Arabic-English bilingual LLM, Jais-30B, by average absolute gains of
10% on our Arabic medical benchmark and 15% on bilingual evaluations across
multiple datasets. Our project page with source code and trained model is
available at https://github.com/mbzuai-oryx/BiMediX . | [
"cs.CL"
] | false |
2402.13302 | 2024-02-20T13:47:51Z | Enhancing Modern Supervised Word Sense Disambiguation Models by Semantic
Lexical Resources | [
"Stefano Melacci",
"Achille Globo",
"Leonardo Rigutini"
] | Supervised models for Word Sense Disambiguation (WSD) currently yield
state-of-the-art results on the most popular benchmarks. Despite the recent
introduction of Word Embeddings and Recurrent Neural Networks to design
powerful context-related features, the interest in improving WSD models using
Semantic Lexical Resources (SLRs) is mostly restricted to knowledge-based
approaches. In this paper, we enhance "modern" supervised WSD models exploiting
two popular SLRs: WordNet and WordNet Domains. We propose an effective way to
introduce semantic features into the classifiers, and we consider using the SLR
structure to augment the training data. We study the effect of different types
of semantic features, investigating their interaction with local contexts
encoded by means of mixtures of Word Embeddings or Recurrent Neural Networks,
and we extend the proposed model into a novel multi-layer architecture for WSD.
A detailed experimental comparison in the recent Unified Evaluation Framework
(Raganato et al., 2017) shows that the proposed approach leads to supervised
models that compare favourably with the state of the art. | [
"cs.CL"
] | false |
2402.13331 | 2024-02-20T19:19:47Z | Enhanced Hallucination Detection in Neural Machine Translation through
Simple Detector Aggregation | [
"Anas Himmi",
"Guillaume Staerman",
"Marine Picot",
"Pierre Colombo",
"Nuno M. Guerreiro"
] | Hallucinated translations pose significant threats and safety concerns when
it comes to the practical deployment of machine translation systems. Previous
research works have identified that detectors exhibit complementary performance:
different detectors excel at detecting different types of hallucinations. In
this paper, we propose to address the limitations of individual detectors by
combining them and introducing a straightforward method for aggregating
multiple detectors. Our results demonstrate the efficacy of our aggregated
detector, providing a promising step towards ever more reliable machine
translation systems. | [
"cs.CL"
] | false |
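One simple instantiation of the detector aggregation described above is to put each detector's scores on a comparable scale and average them; the min-max normalization and mean below are illustrative choices, not necessarily the paper's exact rule.

```python
import numpy as np

def aggregate_detectors(score_matrix):
    """Minimal sketch of detector aggregation. `score_matrix` has shape
    (n_detectors, n_examples), higher = more likely hallucination. Each
    detector is min-max normalized so scales are comparable, then averaged."""
    s = np.asarray(score_matrix, dtype=float)
    mins = s.min(axis=1, keepdims=True)
    maxs = s.max(axis=1, keepdims=True)
    normalized = (s - mins) / np.maximum(maxs - mins, 1e-12)
    return normalized.mean(axis=0)  # aggregated hallucination score per example
```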
2402.13374 | 2024-02-20T20:57:47Z | Reliable LLM-based User Simulator for Task-Oriented Dialogue Systems | [
"Ivan Sekulić",
"Silvia Terragni",
"Victor Guimarães",
"Nghia Khau",
"Bruna Guedes",
"Modestas Filipavicius",
"André Ferreira Manso",
"Roland Mathis"
] | In the realm of dialogue systems, user simulation techniques have emerged as
a game-changer, redefining the evaluation and enhancement of task-oriented
dialogue (TOD) systems. These methods are crucial for replicating real user
interactions, enabling applications like synthetic data augmentation, error
detection, and robust evaluation. However, existing approaches often rely on
rigid rule-based methods or on annotated data. This paper introduces DAUS, a
Domain-Aware User Simulator. Leveraging large language models, we fine-tune
DAUS on real examples of task-oriented dialogues. Results on two relevant
benchmarks showcase significant improvements in terms of user goal fulfillment.
Notably, we have observed that fine-tuning enhances the simulator's coherence
with user goals, effectively mitigating hallucinations -- a major source of
inconsistencies in simulator responses. | [
"cs.CL"
] | false |
2402.13408 | 2024-02-20T22:26:35Z | Healthcare Copilot: Eliciting the Power of General LLMs for Medical
Consultation | [
"Zhiyao Ren",
"Yibing Zhan",
"Baosheng Yu",
"Liang Ding",
"Dacheng Tao"
] | The copilot framework, which aims to enhance and tailor large language models
(LLMs) for specific complex tasks without requiring fine-tuning, is gaining
increasing attention from the community. In this paper, we introduce the
construction of a Healthcare Copilot designed for medical consultation. The
proposed Healthcare Copilot comprises three main components: 1) the Dialogue
component, responsible for effective and safe patient interactions; 2) the
Memory component, storing both current conversation data and historical patient
information; and 3) the Processing component, summarizing the entire dialogue
and generating reports. To evaluate the proposed Healthcare Copilot, we
implement an auto-evaluation scheme using ChatGPT for two roles: as a virtual
patient engaging in dialogue with the copilot, and as an evaluator to assess
the quality of the dialogue. Extensive results demonstrate that the proposed
Healthcare Copilot significantly enhances the capabilities of general LLMs for
medical consultations in terms of inquiry capability, conversational fluency,
response accuracy, and safety. Furthermore, we conduct ablation studies to
highlight the contribution of each individual module in the Healthcare Copilot.
Code will be made publicly available on GitHub. | [
"cs.CL"
] | false |
2402.13415 | 2024-02-20T22:56:23Z | Structure Guided Prompt: Instructing Large Language Model in Multi-Step
Reasoning by Exploring Graph Structure of the Text | [
"Kewei Cheng",
"Nesreen K. Ahmed",
"Theodore Willke",
"Yizhou Sun"
] | Although Large Language Models (LLMs) excel at addressing straightforward
reasoning tasks, they frequently struggle when confronted with more complex
multi-step reasoning, due to a range of factors. Firstly, natural
language often encompasses complex relationships among entities, making it
challenging to maintain a clear reasoning chain over longer spans. Secondly,
the abundance of linguistic diversity means that the same entities and
relationships can be expressed using different terminologies and structures,
complicating the task of identifying and establishing connections between
multiple pieces of information. Graphs provide an effective solution to
represent data rich in relational information and capture long-term
dependencies among entities. To harness the potential of graphs, our paper
introduces Structure Guided Prompt, an innovative three-stage task-agnostic
prompting framework designed to improve the multi-step reasoning capabilities
of LLMs in a zero-shot setting. This framework explicitly converts unstructured
text into a graph via LLMs and instructs them to navigate this graph using
task-specific strategies to formulate responses. By effectively organizing
information and guiding navigation, it enables LLMs to provide more accurate
and context-aware responses. Our experiments show that this framework
significantly enhances the reasoning capabilities of LLMs, enabling them to
excel in a broader spectrum of natural language scenarios. | [
"cs.CL"
] | false |
2402.13426 | 2024-02-20T23:38:39Z | Explaining Relationships Among Research Papers | [
"Xiangci Li",
"Jessica Ouyang"
] | Due to the rapid pace of research publications, keeping up to date with all
the latest related papers is very time-consuming, even with daily feed tools.
There is a need for automatically generated, short, customized literature
reviews of sets of papers to help researchers decide what to read. While
several works in the last decade have addressed the task of explaining a single
research paper, usually in the context of another paper citing it, the
relationship among multiple papers has been ignored; prior works have focused
on generating a single citation sentence in isolation, without addressing the
expository and transition sentences needed to connect multiple papers in a
coherent story. In this work, we explore a feature-based, LLM-prompting
approach to generate richer citation texts, as well as generating multiple
citations at once to capture the complex relationships among research papers.
We perform an expert evaluation to investigate the impact of our proposed
features on the quality of the generated paragraphs and find a strong
correlation between human preference and integrative writing style, suggesting
that humans prefer high-level, abstract citations, with transition sentences
between them to provide an overall story. | [
"cs.CL"
] | false |
2402.12621 | 2024-02-20T01:04:21Z | Reflect-RL: Two-Player Online RL Fine-Tuning for LMs | [
"Runlong Zhou",
"Simon S. Du",
"Beibin Li"
] | As language models (LMs) demonstrate their capabilities in various fields,
their application to tasks requiring multi-round interactions has become
increasingly popular. These tasks usually have complex dynamics, so supervised
fine-tuning (SFT) on a limited offline dataset does not yield good performance.
However, only a few works attempted to directly train the LMs within
interactive decision-making environments. We aim to create an effective
mechanism to fine-tune LMs with online reinforcement learning (RL) in these
environments. We propose Reflect-RL, a two-player system to fine-tune an LM
using online RL, where a frozen reflection model assists the policy model. To
generate data for the warm-up SFT stage, we use negative example generation to
enhance the error-correction ability of the reflection model. Furthermore, we
designed single-prompt action enumeration and applied curriculum learning to
allow the policy model to learn more efficiently. Empirically, we verify that
Reflect-RL outperforms SFT and online RL without reflection. Testing results
indicate that GPT-2-xl after Reflect-RL also outperforms untuned
pre-trained LMs, such as Mistral 7B. | [
"cs.LG",
"cs.CL"
] | false |
2402.12649 | 2024-02-20T01:49:15Z | Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation | [
"Kristian Lum",
"Jacy Reese Anthis",
"Chirag Nagpal",
"Alexander D'Amour"
] | Bias benchmarks are a popular method for studying the negative impacts of
bias in LLMs, yet there has been little empirical investigation of whether
these benchmarks are actually indicative of how harm may manifest in
the real world. In this work, we study the correspondence between such
decontextualized "trick tests" and evaluations that are more grounded in
Realistic Use and Tangible Effects (i.e., RUTEd evaluations). We explore this
correlation in the context of gender-occupation bias--a popular genre of bias
evaluation. We compare three de-contextualized evaluations adapted from the
current literature to three analogous RUTEd evaluations applied to long-form
content generation. We conduct each evaluation for seven instruction-tuned
LLMs. For the RUTEd evaluations, we conduct repeated trials of three text
generation tasks: children's bedtime stories, user personas, and English
language learning exercises. We found no correspondence between trick tests and
RUTEd evaluations. Specifically, selecting the least biased model based on the
de-contextualized results coincides with selecting the model with the best
performance on RUTEd evaluations only as often as random chance. We conclude
that evaluations that are not based in realistic use are likely insufficient to
mitigate and assess bias and real-world harms. | [
"cs.CL",
"stat.AP"
] | false |
2402.12784 | 2024-02-20T07:49:30Z | Understanding and Mitigating the Threat of Vec2Text to Dense Retrieval
Systems | [
"Shengyao Zhuang",
"Bevan Koopman",
"Xiaoran Chu",
"Guido Zuccon"
] | The introduction of Vec2Text, a technique for inverting text embeddings, has
raised serious privacy concerns within dense retrieval systems utilizing text
embeddings, including those provided by OpenAI and Cohere. This threat comes
from the ability of a malicious attacker with access to text embeddings to
reconstruct the original text.
In this paper, we investigate various aspects of embedding models that could
influence the recoverability of text using Vec2Text. Our exploration involves
factors such as distance metrics, pooling functions, bottleneck pre-training,
training with noise addition, embedding quantization, and embedding dimensions
-- aspects not previously addressed in the original Vec2Text paper. Through a
thorough analysis of these factors, our aim is to gain a deeper understanding
of the critical elements impacting the trade-offs between text recoverability
and retrieval effectiveness in dense retrieval systems. This analysis provides
valuable insights for practitioners involved in designing privacy-aware dense
retrieval systems. Additionally, we propose a straightforward fix for embedding
transformation that ensures equal ranking effectiveness while mitigating the
risk of text recoverability.
Furthermore, we extend the application of Vec2Text to the separate task of
corpus poisoning, where, theoretically, Vec2Text presents a more potent threat
compared to previous attack methods. Notably, Vec2Text does not require access
to the dense retriever's model parameters and can efficiently generate numerous
adversarial passages.
In summary, this study highlights the potential threat posed by Vec2Text to
existing dense retrieval systems, while also presenting effective methods to
patch and strengthen such systems against such risks. | [
"cs.IR",
"cs.CL"
] | false |
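The abstract mentions a "straightforward fix for embedding transformation" without spelling it out; one transformation with the stated property, ranking effectiveness exactly preserved, is a fixed secret orthogonal rotation, since orthogonal maps leave inner products unchanged. The sketch below illustrates that idea and is an assumption about the kind of fix meant, not a claim about the paper's method.

```python
import numpy as np

def random_orthogonal(dim, seed=0):
    """Sample a fixed secret orthogonal matrix via QR decomposition."""
    a = np.random.default_rng(seed).standard_normal((dim, dim))
    q, r = np.linalg.qr(a)
    return q * np.sign(np.diag(r))  # sign-fix columns so Q is well-defined

def protect(embeddings, Q):
    # (Q e1) . (Q e2) = e1 . e2 for orthogonal Q, so dot-product/cosine ranking
    # is unchanged, while the embeddings no longer live in the space that a
    # Vec2Text inverter was trained on.
    return embeddings @ Q.T
```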
2402.12786 | 2024-02-20T07:51:43Z | Advancing Large Language Models to Capture Varied Speaking Styles and
Respond Properly in Spoken Conversations | [
"Guan-Ting Lin",
"Cheng-Han Chiang",
"Hung-yi Lee"
] | In spoken dialogue, even if two current turns are the same sentence, their
responses might still differ when they are spoken in different styles. The
spoken styles, containing paralinguistic and prosodic information, mark the
most significant difference between text and speech modality. When using
text-only LLMs to model spoken dialogue, text-only LLMs cannot give different
responses based on the speaking style of the current turn. In this paper, we
focus on enabling LLMs to listen to the speaking styles and respond properly.
Our goal is to teach the LLM that "even if the sentences are identical, when they
are spoken in different styles, their corresponding responses might be
different". Since there is no suitable dataset for achieving this goal, we
collect a speech-to-speech dataset, StyleTalk, with the following desired
characteristics: when two current speeches have the same content but are spoken
in different styles, their responses will be different. To teach LLMs to
understand and respond properly to the speaking styles, we propose the
Spoken-LLM framework that can model the linguistic content and the speaking
styles. We train Spoken-LLM using the StyleTalk dataset and devise a two-stage
training pipeline to help the Spoken-LLM better learn the speaking styles.
Based on extensive experiments, we show that Spoken-LLM outperforms text-only
baselines and prior speech LLMs methods. | [
"cs.CL",
"eess.AS"
] | false |
2402.12821 | 2024-02-20T08:41:23Z | Identifying Factual Inconsistency in Summaries: Towards Effective
Utilization of Large Language Model | [
"Liyan Xu",
"Zhenlin Su",
"Mo Yu",
"Jin Xu",
"Jinho D. Choi",
"Jie Zhou",
"Fei Liu"
] | Factual inconsistency poses a significant hurdle for the commercial
deployment of abstractive summarizers. Under this Large Language Model (LLM)
era, this work focuses on two important questions: what is the best way to
leverage LLM for factual inconsistency detection, and how could we distill a
smaller LLM with both high efficiency and efficacy? Three zero-shot paradigms
are first proposed and evaluated across five diverse datasets: direct
inference on the entire summary or each summary window; entity verification
through question generation and answering. Experiments suggest that the LLM itself
is capable of resolving this task training-free under the proper paradigm design,
surpassing strong trained baselines by 2.8% on average. To further promote
practical utility, we then propose training strategies aimed at distilling
smaller open-source LLM that learns to score the entire summary at once with
high accuracy, which outperforms the zero-shot approaches of much larger LLMs,
serving as an effective and efficient ready-to-use scorer. | [
"cs.CL",
"cs.LG"
] | false |
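The per-window zero-shot paradigm mentioned above can be pictured as sentence-by-sentence verification against the source. In the sketch below, `llm_yes_prob` is a hypothetical callable returning the model's probability of answering "yes", and the prompt wording is illustrative, not from the paper.

```python
def summary_consistency(source, summary_sentences, llm_yes_prob):
    """Score each summary sentence against the source; the summary's score is
    the minimum, since one inconsistent sentence taints the whole summary."""
    scores = []
    for sent in summary_sentences:
        prompt = (f"Document:\n{source}\n\nClaim: {sent}\n"
                  f"Is the claim consistent with the document? Answer yes or no.")
        scores.append(llm_yes_prob(prompt))
    return min(scores)
```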
2402.12835 | 2024-02-20T09:02:55Z | PANDA: Preference Adaptation for Enhancing Domain-Specific Abilities of
LLMs | [
"An Liu",
"Zonghan Yang",
"Zhenhe Zhang",
"Qingyuan Hu",
"Peng Li",
"Ming Yan",
"Ji Zhang",
"Fei Huang",
"Yang Liu"
] | While Large language models (LLMs) have demonstrated considerable
capabilities across various natural language tasks, they often fall short of
the performance achieved by domain-specific state-of-the-art models. One
potential approach to enhance domain-specific capabilities of LLMs involves
fine-tuning them using corresponding datasets. However, this method can be both
resource and time-intensive, and not applicable to closed-source commercial
LLMs. In this paper, we propose Preference Adaptation for Enhancing
Domain-specific Abilities of LLMs (PANDA), a method designed to augment the
domain-specific capabilities of LLMs by leveraging insights from the response
preference of expert models without requiring fine-tuning. Our experimental
results reveal that PANDA significantly enhances the domain-specific ability of
LLMs on text classification and interactive decision tasks. Moreover, the LLM with
PANDA even outperforms the expert model it learned from on 4 tasks of
ScienceWorld. This finding highlights the potential of exploring tuning-free
approaches to achieve weak-to-strong generalization. | [
"cs.CL",
"cs.AI"
] | false |
2402.12890 | 2024-02-20T10:34:19Z | More Discriminative Sentence Embeddings via Semantic Graph Smoothing | [
"Chakib Fettal",
"Lazhar Labiod",
"Mohamed Nadif"
] | This paper explores an empirical approach to learn more discriminative
sentence representations in an unsupervised fashion. Leveraging semantic graph
smoothing, we enhance sentence embeddings obtained from pretrained models to
improve results for the text clustering and classification tasks. Our method,
validated on eight benchmarks, demonstrates consistent improvements, showcasing
the potential of semantic graph smoothing in improving sentence embeddings for
the supervised and unsupervised document categorization tasks. | [
"cs.CL",
"cs.LG"
] | false |
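For intuition about what semantic graph smoothing does to pretrained sentence embeddings, a dense NumPy sketch follows: build a cosine k-NN graph over the embeddings and repeatedly average each embedding with its neighbors'. The values of k, alpha, and the update rule are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def smooth_embeddings(X, k=10, steps=2, alpha=0.5):
    """Smooth sentence embeddings X (n, d) over a cosine k-NN semantic graph."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)                 # no self-neighbors
    A = np.zeros_like(sim)
    idx = np.argsort(sim, axis=1)[:, -k:]          # k nearest neighbors per node
    np.put_along_axis(A, idx, 1.0, axis=1)
    A = np.maximum(A, A.T)                         # symmetrize the graph
    S = A / A.sum(axis=1, keepdims=True)           # row-stochastic transition
    for _ in range(steps):
        X = (1 - alpha) * X + alpha * (S @ X)      # propagate along the graph
    return X
```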