arxiv_id (string, 10 chars) | published (string, 20 chars) | titles (string, 9-243 chars) | authors (sequence, 1-389 items) | abstract (string, 96-3.09k chars) | categories (sequence, 1-10 items) | selected (bool, 2 classes) |
---|---|---|---|---|---|---|
2402.12058 | 2024-02-19T11:23:53Z | Scaffolding Coordinates to Promote Vision-Language Coordination in Large
Multi-Modal Models | [
"Xuanyu Lei",
"Zonghan Yang",
"Xinrui Chen",
"Peng Li",
"Yang Liu"
] | State-of-the-art Large Multi-Modal Models (LMMs) have demonstrated
exceptional capabilities in vision-language tasks. Despite their advanced
functionalities, the performances of LMMs are still limited in challenging
scenarios that require complex reasoning with multiple levels of visual
information. Existing prompting techniques for LMMs focus on either improving
textual reasoning or leveraging tools for image preprocessing, lacking a simple
and general visual prompting scheme to promote vision-language coordination in
LMMs. In this work, we propose Scaffold prompting that scaffolds coordinates to
promote vision-language coordination. Specifically, Scaffold overlays a dot
matrix within the image as visual information anchors and leverages
multi-dimensional coordinates as textual positional references. Extensive
experiments on a wide range of challenging vision-language tasks demonstrate
the superiority of Scaffold over GPT-4V with textual CoT prompting. Our
code is released at https://github.com/leixy20/Scaffold. | [
"cs.CV",
"cs.CL"
] | false |
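The Scaffold record above describes overlaying a dot matrix on the input image and referring to the dots with textual coordinates in the prompt. Below is a minimal sketch of that idea using Pillow; the grid size, dot styling, and label format are illustrative assumptions, not the paper's exact settings.

```python
from PIL import Image, ImageDraw

def overlay_dot_matrix(image_path, rows=6, cols=6, radius=4):
    """Overlay an evenly spaced dot matrix and label each dot with (row, col) coordinates."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    anchors = []
    for r in range(rows):
        for c in range(cols):
            # Place dots on an evenly spaced grid, away from the image borders.
            x = int((c + 1) * w / (cols + 1))
            y = int((r + 1) * h / (rows + 1))
            draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill="red")
            draw.text((x + radius + 2, y - radius), f"({r + 1},{c + 1})", fill="red")
            anchors.append(((r + 1, c + 1), (x, y)))
    return img, anchors  # the (row, col) labels can then be referenced in the textual prompt

# Example usage (hypothetical file name):
# annotated, anchors = overlay_dot_matrix("scene.jpg")
# annotated.save("scene_scaffold.jpg")
```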
2402.12079 | 2024-02-19T11:59:14Z | LVCHAT: Facilitating Long Video Comprehension | [
"Yu Wang",
"Zeyuan Zhang",
"Julian McAuley",
"Zexue He"
] | Enabling large language models (LLMs) to read videos is vital for multimodal
LLMs. Existing works show promise on short videos, whereas comprehension of
long videos (e.g., longer than 1 minute) remains challenging. The major problem lies
in the over-compression of videos, i.e., the encoded video representations are
not enough to represent the whole video. To address this issue, we propose Long
Video Chat (LVChat), where Frame-Scalable Encoding (FSE) is introduced to
dynamically adjust the number of embeddings in alignment with the duration of
the video to ensure long videos are not overly compressed into a few
embeddings. To deal with long videos whose length is beyond videos seen during
training, we propose Interleaved Frame Encoding (IFE), repeating positional
embedding and interleaving multiple groups of videos to enable long video
input, avoiding performance degradation due to overly long videos. Experimental
results show that LVChat significantly outperforms existing methods by up to
27\% in accuracy on long-video QA datasets and long-video captioning
benchmarks. Our code is published at https://github.com/wangyu-ustc/LVChat. | [
"cs.CV",
"cs.CL"
] | false |
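The LVChat record above describes two mechanisms: Frame-Scalable Encoding (FSE), which scales the number of video embeddings with duration, and Interleaved Frame Encoding (IFE), which interleaves frame groups so positional embeddings can be reused. The plain-Python sketch below illustrates only the indexing logic; the group size and embedding budget are made-up parameters, not the paper's values.

```python
def frame_scalable_budget(duration_s, embeddings_per_16s=96, max_embeddings=960):
    """FSE idea: allocate more embeddings to longer videos instead of a fixed budget."""
    groups = max(1, round(duration_s / 16))          # one group per ~16 s of video
    return min(groups * embeddings_per_16s, max_embeddings)

def interleaved_position_ids(num_frames, max_trained_frames=64):
    """IFE idea: split frames into interleaved groups that each reuse the trained position range."""
    n_groups = -(-num_frames // max_trained_frames)  # ceiling division
    groups = [list(range(g, num_frames, n_groups)) for g in range(n_groups)]
    # Every group reuses position ids 0..len(group)-1, so no id exceeds what was seen in training.
    position_ids = {frame: pos for group in groups for pos, frame in enumerate(group)}
    return groups, position_ids

if __name__ == "__main__":
    print(frame_scalable_budget(180))                # budget for a 3-minute video
    groups, pos = interleaved_position_ids(10, max_trained_frames=4)
    print(groups)                                    # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
    print(pos[9])                                    # frame 9 reuses a small position id
```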
2402.12095 | 2024-02-19T12:23:39Z | Major TOM: Expandable Datasets for Earth Observation | [
"Alistair Francis",
"Mikolaj Czerkawski"
] | Deep learning models are increasingly data-hungry, requiring significant
resources to collect and compile the datasets needed to train them, with Earth
Observation (EO) models being no exception. However, the landscape of datasets
in EO is relatively atomised, with interoperability made difficult by diverse
formats and data structures. If ever larger datasets are to be built, and
duplication of effort minimised, then a shared framework that allows users to
combine and access multiple datasets is needed. Here, Major TOM (Terrestrial
Observation Metaset) is proposed as this extensible framework. Primarily, it
consists of a geographical indexing system based on a set of grid points and a
metadata structure that allows multiple datasets with different sources to be
merged. Besides the specification of Major TOM as a framework, this work also
presents a large, open-access dataset, MajorTOM-Core, which covers the vast
majority of the Earth's land surface. This dataset provides the community with
both an immediately useful resource, as well as acting as a template for future
additions to the Major TOM ecosystem. Access: https://huggingface.co/Major-TOM | [
"cs.CV",
"cs.DB"
] | false |
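The Major TOM record above centres on a geographical indexing system built from a fixed set of grid points, so that samples from different datasets can be merged by location. As a rough illustration (not the actual Major TOM grid specification), the sketch below snaps a latitude/longitude pair to a regular grid cell and derives a string ID for it.

```python
def grid_cell_id(lat, lon, cell_deg=0.1):
    """Snap a (lat, lon) pair to a regular grid cell and return a reproducible cell ID."""
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        raise ValueError("coordinates out of range")
    row = int((lat + 90.0) // cell_deg)
    col = int((lon + 180.0) // cell_deg)
    return f"R{row:04d}C{col:04d}"

# Samples from different datasets that fall in the same cell can be joined on this key.
print(grid_cell_id(48.8566, 2.3522))   # central Paris
print(grid_cell_id(48.8601, 2.3376))   # the Louvre, same 0.1-degree cell -> same ID
```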
2402.12098 | 2024-02-19T12:27:39Z | Towards Explainable LiDAR Point Cloud Semantic Segmentation via Gradient
Based Target Localization | [
"Abhishek Kuriyal",
"Vaibhav Kumar"
] | Semantic Segmentation (SS) of LiDAR point clouds is essential for many
applications, such as urban planning and autonomous driving. While much
progress has been made in interpreting SS predictions for images, interpreting
point cloud SS predictions remains a challenge. This paper introduces pGS-CAM,
a novel gradient-based method for generating saliency maps in neural network
activation layers. Inspired by Grad-CAM, which uses gradients to highlight
local importance, pGS-CAM is robust and effective on a variety of datasets
(SemanticKITTI, Paris-Lille3D, DALES) and 3D deep learning architectures
(KPConv, RandLANet). Our experiments show that pGS-CAM effectively accentuates
the feature learning in intermediate activations of SS architectures by
highlighting the contribution of each point. This allows us to better
understand how SS models make their predictions and identify potential areas
for improvement. Relevant codes are available at
https://github.com/geoai4cities/pGS-CAM. | [
"cs.CV",
"cs.AI"
] | false |
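The pGS-CAM record above describes a Grad-CAM-style use of gradients to score the contribution of each point in an intermediate activation layer. The NumPy sketch below applies the generic Grad-CAM weighting to per-point activations; it is a schematic of the idea under assumed array shapes, not the authors' exact formulation.

```python
import numpy as np

def pointwise_gradcam(activations, gradients):
    """
    activations: (N, C) per-point features from an intermediate layer.
    gradients:   (N, C) gradients of the class score w.r.t. those features.
    Returns a saliency value in [0, 1] for each of the N points.
    """
    # Channel weights: average the gradient over all points (global-average pooling).
    alpha = gradients.mean(axis=0)                       # (C,)
    saliency = np.maximum(activations @ alpha, 0.0)      # ReLU of the weighted sum, (N,)
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else saliency

# Toy example with random per-point activations and gradients.
rng = np.random.default_rng(0)
acts, grads = rng.normal(size=(1000, 64)), rng.normal(size=(1000, 64))
print(pointwise_gradcam(acts, grads).shape)  # (1000,)
```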
2402.12114 | 2024-02-19T13:08:31Z | A Spatiotemporal Illumination Model for 3D Image Fusion in Optical
Coherence Tomography | [
"Stefan Ploner",
"Jungeun Won",
"Julia Schottenhamml",
"Jessica Girgis",
"Kenneth Lam",
"Nadia Waheed",
"James Fujimoto",
"Andreas Maier"
] | Optical coherence tomography (OCT) is a non-invasive, micrometer-scale
imaging modality that has become a clinical standard in ophthalmology. By
raster-scanning the retina, sequential cross-sectional image slices are
acquired to generate volumetric data. In-vivo imaging suffers from
discontinuities between slices that show up as motion and illumination
artifacts. We present a new illumination model that exploits continuity in
orthogonally raster-scanned volume data. Our novel spatiotemporal
parametrization adheres to illumination continuity both temporally, along the
imaged slices, as well as spatially, in the transverse directions. Yet, our
formulation makes no assumptions between slices, which may exhibit
discontinuities. This is the first optimization of a 3D inverse model in an
image reconstruction context in OCT. Evaluation in 68 volumes from eyes with
pathology showed reduction of illumination artifacts in 88\% of the data, and
only 6\% showed moderate residual illumination artifacts. The method enables
the use of forward-warped motion corrected data, which is more accurate, and
enables supersampling and advanced 3D image reconstruction in OCT. | [
"eess.IV",
"cs.CV"
] | false |
2402.12303 | 2024-02-19T17:27:04Z | UncertaintyTrack: Exploiting Detection and Localization Uncertainty in
Multi-Object Tracking | [
"Chang Won Lee",
"Steven L. Waslander"
] | Multi-object tracking (MOT) methods have seen a significant boost in
performance recently, due to strong interest from the research community and
steadily improving object detection methods. The majority of tracking methods
follow the tracking-by-detection (TBD) paradigm, blindly trusting the incoming
detections with no sense of their associated localization uncertainty. This
lack of uncertainty awareness poses a problem in safety-critical tasks such as
autonomous driving where passengers could be put at risk due to erroneous
detections that have propagated to downstream tasks, including MOT. While there
are existing works in probabilistic object detection that predict the
localization uncertainty around the boxes, no work in 2D MOT for autonomous
driving has studied whether these estimates are meaningful enough to be
leveraged effectively in object tracking. We introduce UncertaintyTrack, a
collection of extensions that can be applied to multiple TBD trackers to
account for localization uncertainty estimates from probabilistic object
detectors. Experiments on the Berkeley Deep Drive MOT dataset show that the
combination of our method and informative uncertainty estimates reduces the
number of ID switches by around 19% and improves mMOTA by 2-3%. The source
code is available at https://github.com/TRAILab/UncertaintyTrack | [
"cs.CV",
"cs.RO"
] | false |
2402.12320 | 2024-02-19T17:49:23Z | Landmark Stereo Dataset for Landmark Recognition and Moving Node
Localization in a Non-GPS Battlefield Environment | [
"Ganesh Sapkota",
"Sanjay Madria"
] | In this paper, we have proposed a new strategy of using the landmark anchor
node instead of a radio-based anchor node to obtain the virtual coordinates
(landmarkID, DISTANCE) of moving troops or defense forces that will help in
tracking and maneuvering the troops along a safe path within a GPS-denied
battlefield environment. The proposed strategy implements landmark recognition
using the Yolov5 model and landmark distance estimation using an efficient
Stereo Matching Algorithm. We consider a moving node carrying a low-power
mobile device equipped with a calibrated stereo vision camera that captures
stereo images of a scene containing landmarks within the battlefield region;
the landmark locations are stored in an offline server residing within the
device itself. We created a custom landmark image dataset called MSTLandmarkv1 with 34
landmark classes and another landmark stereo dataset of those 34 landmark
instances called MSTLandmarkStereov1. We trained the YOLOv5 model with
MSTLandmarkv1 dataset and achieved 0.95 mAP @ 0.5 IoU and 0.767 mAP @ [0.5:
0.95] IoU. We calculated the distance from a node to the landmark utilizing the
bounding box coordinates and the depth map generated by the improved SGM
algorithm using MSTLandmarkStereov1. The tuple of landmark IDs obtained from
the detection result and the distances calculated by the SGM algorithm are
stored as the virtual coordinates of a node. In future work, we will use these
virtual coordinates to obtain the location of a node using an efficient
trilateration algorithm and optimize the node position using the appropriate
optimization method. | [
"cs.CV",
"cs.LG"
] | false |
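The record above builds virtual coordinates (landmarkID, DISTANCE) from YOLO detections and a stereo depth map. The sketch below shows the standard pinhole-stereo relation Z = f*B/d and how a distance might be read out of a depth map inside a detected bounding box; the focal length, baseline, and box format are placeholder values, not the paper's calibration.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=700.0, baseline_m=0.12):
    """Standard stereo relation: depth = focal_length * baseline / disparity."""
    depth = np.full_like(disparity, np.inf, dtype=float)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def landmark_distance(depth_map, box):
    """Median depth inside a (x1, y1, x2, y2) detection box, robust to outlier pixels."""
    x1, y1, x2, y2 = box
    patch = depth_map[y1:y2, x1:x2]
    return float(np.median(patch[np.isfinite(patch)]))

# Toy example: a synthetic disparity map and one detection.
disp = np.full((480, 640), 20.0)          # constant 20-pixel disparity
depth = disparity_to_depth(disp)          # -> 700 * 0.12 / 20 = 4.2 m everywhere
virtual_coordinate = ("landmark_07", landmark_distance(depth, (100, 100, 200, 200)))
print(virtual_coordinate)                 # ('landmark_07', 4.2)
```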
2402.12525 | 2024-02-19T20:36:32Z | LangXAI: Integrating Large Vision Models for Generating Textual
Explanations to Enhance Explainability in Visual Perception Tasks | [
"Truong Thanh Hung Nguyen",
"Tobias Clement",
"Phuc Truong Loc Nguyen",
"Nils Kemmerzell",
"Van Binh Truong",
"Vo Thanh Khang Nguyen",
"Mohamed Abdelaal",
"Hung Cao"
] | LangXAI is a framework that integrates Explainable Artificial Intelligence
(XAI) with advanced vision models to generate textual explanations for visual
recognition tasks. Despite XAI advancements, an understanding gap persists for
end-users with limited domain knowledge in artificial intelligence and computer
vision. LangXAI addresses this by furnishing text-based explanations for
classification, object detection, and semantic segmentation model outputs to
end-users. Preliminary results demonstrate LangXAI's enhanced plausibility,
with high BERTScore across tasks, fostering a more transparent and reliable AI
framework on vision tasks for end-users. | [
"cs.CV",
"cs.AI"
] | false |
2402.12550 | 2024-02-19T21:20:22Z | Multilinear Mixture of Experts: Scalable Expert Specialization through
Factorization | [
"James Oldfield",
"Markos Georgopoulos",
"Grigorios G. Chrysos",
"Christos Tzelepis",
"Yannis Panagakis",
"Mihalis A. Nicolaou",
"Jiankang Deng",
"Ioannis Patras"
] | The Mixture of Experts (MoE) paradigm provides a powerful way to decompose
inscrutable dense layers into smaller, modular computations often more amenable
to human interpretation, debugging, and editability. A major problem however
lies in the computational cost of scaling the number of experts to achieve
sufficiently fine-grained specialization. In this paper, we propose the
Multilinear Mixture of Experts (MMoE) layer to address this, focusing on vision
models. MMoE layers perform an implicit computation on prohibitively large
weight tensors entirely in factorized form. Consequently, MMoEs both (1) avoid
the issues incurred through the discrete expert routing in the popular 'sparse'
MoE models, yet (2) do not incur the restrictively high inference-time costs of
'soft' MoE alternatives. We present both qualitative and quantitative evidence
(through visualization and counterfactual interventions respectively) that
scaling MMoE layers when fine-tuning foundation models for vision tasks leads
to more specialized experts at the class-level whilst remaining competitive
with the performance of parameter-matched linear layer counterparts. Finally,
we show that learned expert specialism further facilitates manual correction of
demographic bias in CelebA attribute classification. Our MMoE model code is
available at https://github.com/james-oldfield/MMoE. | [
"cs.CV",
"cs.LG"
] | false |
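The MMoE record above hinges on computing a mixture of many expert linear maps without ever materialising the full (experts x d_in x d_out) weight tensor. The NumPy sketch below illustrates that principle with a CP (rank-R) factorisation of the expert tensor and dense soft routing; the shapes, the rank, and the routing function are illustrative assumptions rather than the paper's layer definition.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def factorized_moe(x, A, B, C, W_gate):
    """
    Mixture of E expert linear maps whose weight tensor W[e, i, o] = sum_r A[e,r] B[i,r] C[o,r]
    is never materialised. x: (batch, d_in); A: (E, R); B: (d_in, R); C: (d_out, R); W_gate: (d_in, E).
    """
    gate = softmax(x @ W_gate)               # (batch, E) soft expert weights
    g_r = gate @ A                           # (batch, R): expert mixture folded into the rank dim
    x_r = x @ B                              # (batch, R): input folded into the rank dim
    return (g_r * x_r) @ C.T                 # (batch, d_out)

# Toy check against the explicit (unfactorised) computation.
rng = np.random.default_rng(0)
E, d_in, d_out, R, batch = 8, 16, 10, 6, 4
A, B, C = rng.normal(size=(E, R)), rng.normal(size=(d_in, R)), rng.normal(size=(d_out, R))
W_gate, x = rng.normal(size=(d_in, E)), rng.normal(size=(batch, d_in))
W_full = np.einsum("er,ir,or->eio", A, B, C)  # (E, d_in, d_out), built only for verification
explicit = np.einsum("be,bi,eio->bo", softmax(x @ W_gate), x, W_full)
print(np.allclose(factorized_moe(x, A, B, C, W_gate), explicit))  # True
```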
2402.12551 | 2024-02-19T21:20:56Z | Landmark-based Localization using Stereo Vision and Deep Learning in
GPS-Denied Battlefield Environment | [
"Ganesh Sapkota",
"Sanjay Madria"
] | Localization in a battlefield environment is increasingly challenging as GPS
connectivity is often denied or unreliable, and physical deployment of anchor
nodes across wireless networks for localization can be difficult in hostile
battlefield terrain. Existing range-free localization methods rely on
radio-based anchors and their average hop distance which suffers from accuracy
and stability in dynamic and sparse wireless network topology. Vision-based
methods like SLAM and Visual Odometry use expensive sensor fusion techniques
for map generation and pose estimation. This paper proposes a novel framework
for localization in non-GPS battlefield environments using only the passive
camera sensors and considering naturally existing or artificial landmarks as
anchors. The proposed method utilizes a custom-calibrated stereo vision camera
for distance estimation and the YOLOv8s model, which is trained and fine-tuned
with our real-world dataset for landmark recognition. The depth images are
generated using an efficient stereo-matching algorithm, and distances to
landmarks are determined by extracting the landmark depth feature utilizing a
bounding box predicted by the landmark recognition model. The position of the
unknown node is then obtained using the efficient least square algorithm and
then optimized using the L-BFGS-B (limited-memory quasi-Newton code for
bound-constrained optimization) method. Experimental results demonstrate that
our proposed framework performs better than existing anchor-based DV-Hop
algorithms and competes with the most efficient vision-based algorithms in
terms of localization error (RMSE). | [
"cs.CV",
"cs.AI"
] | false |
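The record above recovers the unknown node position from landmark distances with a least-squares solve followed by L-BFGS-B refinement. The SciPy sketch below sets up that pipeline for 2D coordinates; the landmark layout, noise values, and bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def trilaterate(anchors, distances, bounds=None):
    """Initial linear least-squares estimate, then L-BFGS-B refinement of the position."""
    anchors, distances = np.asarray(anchors, float), np.asarray(distances, float)
    # Linearise by subtracting the first anchor's range equation from the others.
    x0, d0 = anchors[0], distances[0]
    A = 2.0 * (anchors[1:] - x0)
    b = d0**2 - distances[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2)
    init, *_ = np.linalg.lstsq(A, b, rcond=None)

    def cost(p):  # sum of squared range residuals
        return np.sum((np.linalg.norm(anchors - p, axis=1) - distances) ** 2)

    return minimize(cost, init, method="L-BFGS-B", bounds=bounds).x

# Toy example: three landmark anchors and slightly noisy range measurements.
anchors = [(0.0, 0.0), (50.0, 0.0), (0.0, 40.0)]
true_pos = np.array([22.0, 17.0])
dists = np.linalg.norm(np.array(anchors) - true_pos, axis=1) + np.array([0.2, -0.1, 0.15])
print(trilaterate(anchors, dists))  # close to [22, 17]
```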
2404.07211 | 2024-02-19T08:03:07Z | A real-time Artificial Intelligence system for learning Sign Language | [
"Elisa Cabana"
] | A primary challenge for the deaf and hearing-impaired community stems from
the communication gap with the hearing society, which can greatly impact their
daily lives and result in social exclusion. To foster inclusivity in society,
our endeavor focuses on developing a cost-effective, resource-efficient, and
open technology based on Artificial Intelligence, designed to assist people in
learning and using Sign Language for communication. The analysis presented in
this research paper intends to enrich the recent academic scientific literature
on Sign Language solutions based on Artificial Intelligence, with a particular
focus on American Sign Language (ASL). This research has yielded promising
preliminary results and serves as a basis for further development. | [
"cs.CV",
"cs.AI"
] | false |
2402.11775 | 2024-02-19T02:07:15Z | FOD-Swin-Net: angular super resolution of fiber orientation distribution
using a transformer-based deep model | [
"Mateus Oliveira da Silva",
"Caio Pinheiro Santana",
"Diedre Santos do Carmo",
"Letícia Rittner"
] | Identifying and characterizing brain fiber bundles can help to understand
many diseases and conditions. An important step in this process is the
estimation of fiber orientations using Diffusion-Weighted Magnetic Resonance
Imaging (DW-MRI). However, obtaining robust orientation estimates demands
high-resolution data, leading to lengthy acquisitions that are not always
clinically available. In this work, we explore the use of automated angular
super resolution from faster acquisitions to overcome this challenge. Using the
publicly available Human Connectome Project (HCP) DW-MRI data, we trained a
transformer-based deep learning architecture to achieve angular super
resolution in fiber orientation distribution (FOD). Our patch-based
methodology, FOD-Swin-Net, is able to bring a single-shell reconstruction
driven from 32 directions to be comparable to a multi-shell 288 direction FOD
reconstruction, greatly reducing the number of required directions on initial
acquisition. Evaluations of the reconstructed FOD with Angular Correlation
Coefficient and qualitative visualizations reveal superior performance than the
state-of-the-art in HCP testing data. Open source code for reproducibility is
available at https://github.com/MICLab-Unicamp/FOD-Swin-Net. | [
"eess.IV",
"cs.CV",
"cs.LG",
"q-bio.NC"
] | false |
2402.11789 | 2024-02-19T02:32:45Z | Statistical Test for Generated Hypotheses by Diffusion Models | [
"Teruyuki Katsuoka",
"Tomohiro Shiraishi",
"Daiki Miwa",
"Vo Nguyen Le Duy",
"Ichiro Takeuchi"
] | The enhanced performance of AI has accelerated its integration into
scientific research. In particular, the use of generative AI to create
scientific hypotheses is promising and is increasingly being applied across
various fields. However, when employing AI-generated hypotheses for critical
decisions, such as medical diagnoses, verifying their reliability is crucial.
In this study, we consider a medical diagnostic task using images generated by
diffusion models, and propose a statistical test to quantify its reliability.
The basic idea behind the proposed statistical test is to employ a selective
inference framework, where we consider a statistical test conditional on the
fact that the generated images are produced by a trained diffusion model. Using
the proposed method, the statistical reliability of medical image diagnostic
results can be quantified in the form of a p-value, allowing for
decision-making with a controlled error rate. We show the theoretical validity
of the proposed statistical test and its effectiveness through numerical
experiments on synthetic and brain image datasets. | [
"stat.ML",
"cs.CV",
"cs.LG"
] | false |
2402.11866 | 2024-02-19T06:14:46Z | Two Online Map Matching Algorithms Based on Analytic Hierarchy Process
and Fuzzy Logic | [
"Jeremy J. Lin",
"Tomoro Mochida",
"Riley C. W. O'Neill",
"Atsuro Yoshida",
"Masashi Yamazaki",
"Akinobu Sasada"
] | The aim of this paper is to develop new map matching algorithms and to
improve upon previous work. We address two key approaches: Analytic Hierarchy
Process (AHP) map matching and fuzzy logic map matching. AHP is a
decision-making method that combines mathematical analysis with human judgment,
and fuzzy logic is an approach to computing based on the degree of truth and
aims at modeling the imprecise modes of reasoning from 0 to 1 rather than the
usual boolean logic. Of these algorithms, our application of AHP to map
matching is newly developed in this paper, whereas our application of fuzzy
logic to map matching largely follows existing research with some small
changes. We chose these methods because both are designed to handle imprecise
information and are simple to implement. | [
"cs.CG",
"cs.AI",
"cs.CV"
] | false |
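The record above applies the Analytic Hierarchy Process (AHP) to map matching. The core AHP step, computing criterion weights from a pairwise comparison matrix via its principal eigenvector and checking consistency, is sketched below with NumPy; the example comparison matrix is invented, not taken from the paper.

```python
import numpy as np

# Random consistency index for matrix sizes 1..9 (Saaty's standard values).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise comparison matrix, plus the consistency ratio."""
    M = np.asarray(pairwise, float)
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)                      # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                  # normalised priority weights
    n = M.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)             # consistency index
    return w, ci / RI[n]                             # weights, consistency ratio (< 0.1 is acceptable)

# Example: three matching criteria compared pairwise (values are illustrative).
pairwise = [[1, 3, 5],
            [1/3, 1, 2],
            [1/5, 1/2, 1]]
weights, cr = ahp_weights(pairwise)
print(weights.round(3), round(cr, 3))
```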
2402.11989 | 2024-02-19T09:32:48Z | Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models | [
"Zihao Luo",
"Xilie Xu",
"Feng Liu",
"Yun Sing Koh",
"Di Wang",
"Jingfeng Zhang"
] | Low-rank adaptation (LoRA) is an efficient strategy for adapting latent
diffusion models (LDMs) on a training dataset to generate specific objects by
minimizing the adaptation loss. However, adapted LDMs via LoRA are vulnerable
to membership inference (MI) attacks that can judge whether a particular data
point belongs to private training datasets, thus facing severe risks of privacy
leakage. To defend against MI attacks, we make the first effort to propose a
straightforward solution: privacy-preserving LoRA (PrivateLoRA). PrivateLoRA is
formulated as a min-max optimization problem where a proxy attack model is
trained by maximizing its MI gain while the LDM is adapted by minimizing the
sum of the adaptation loss and the proxy attack model's MI gain. However, we
empirically disclose that PrivateLoRA has the issue of unstable optimization
due to the large fluctuation of the gradient scale which impedes adaptation. To
mitigate this issue, we propose Stable PrivateLoRA that adapts the LDM by
minimizing the ratio of the adaptation loss to the MI gain, which implicitly
rescales the gradient and thus stabilizes the optimization. Our comprehensive
empirical results corroborate that adapted LDMs via Stable PrivateLoRA can
effectively defend against MI attacks while generating high-quality images. Our
code is available at https://github.com/WilliamLUO0/StablePrivateLoRA. | [
"cs.LG",
"cs.CR",
"cs.CV"
] | false |
2402.12121 | 2024-02-19T13:16:10Z | Evaluating Image Review Ability of Vision Language Models | [
"Shigeki Saito",
"Kazuki Hayashi",
"Yusuke Ide",
"Yusuke Sakai",
"Kazuma Onishi",
"Toma Suzuki",
"Seiji Gobara",
"Hidetaka Kamigaito",
"Katsuhiko Hayashi",
"Taro Watanabe"
] | Large-scale vision language models (LVLMs) are language models capable of
processing image and text inputs within a single model. This paper
explores the use of LVLMs to generate review texts for images. The ability of
LVLMs to review images is not fully understood, highlighting the need for a
methodical evaluation of their review abilities. Unlike image captions, review
texts can be written from various perspectives such as image composition and
exposure. This diversity of review perspectives makes it difficult to uniquely
determine a single correct review for an image. To address this challenge, we
introduce an evaluation method based on rank correlation analysis, in which
review texts are ranked by both humans and LVLMs, and the correlation between
these rankings is then measured. We further validate this approach by creating a
benchmark dataset aimed at assessing the image review ability of recent LVLMs.
Our experiments with the dataset reveal that LVLMs, particularly those with
proven superiority in other evaluative contexts, excel at distinguishing
between high-quality and substandard image reviews. | [
"cs.CL",
"cs.AI",
"cs.CV",
"cs.MM"
] | false |
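The evaluation method in the record above ranks candidate review texts by humans and by an LVLM and then measures agreement between the two rankings. A minimal version of that scoring with SciPy's Spearman correlation is shown below; the example scores are fabricated purely to demonstrate the computation.

```python
from scipy.stats import spearmanr

# Hypothetical quality scores for five candidate reviews of the same image.
human_scores = [4.5, 3.0, 2.0, 4.0, 1.5]       # averaged human ratings
model_scores = [0.92, 0.55, 0.40, 0.71, 0.35]  # scores assigned by an LVLM

rho, p_value = spearmanr(human_scores, model_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
# A rho close to 1 means the LVLM orders the reviews the same way humans do.
```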
2402.12179 | 2024-02-19T14:37:17Z | Examining Monitoring System: Detecting Abnormal Behavior In Online
Examinations | [
"Dinh An Ngo",
"Thanh Dat Nguyen",
"Thi Le Chi Dang",
"Huy Hoan Le",
"Ton Bao Ho",
"Vo Thanh Khang Nguyen",
"Truong Thanh Hung Nguyen"
] | Cheating in online exams has become a prevalent issue over the past decade,
especially during the COVID-19 pandemic. To address this issue of academic
dishonesty, our "Exam Monitoring System: Detecting Abnormal Behavior in Online
Examinations" is designed to assist proctors in identifying unusual student
behavior. Our system demonstrates high accuracy and speed in detecting cheating
in real-time scenarios, providing valuable information, and aiding proctors in
decision-making. This article outlines our methodology and the effectiveness of
our system in mitigating the widespread problem of cheating in online exams. | [
"cs.CV",
"cs.AI",
"cs.CY"
] | false |
2402.12181 | 2024-02-19T14:42:10Z | Revisiting Data Augmentation in Deep Reinforcement Learning | [
"Jianshu Hu",
"Yunpeng Jiang",
"Paul Weng"
] | Various data augmentation techniques have been recently proposed in
image-based deep reinforcement learning (DRL). Although they empirically
demonstrate the effectiveness of data augmentation for improving sample
efficiency or generalization, which technique should be preferred is not always
clear. To tackle this question, we analyze existing methods to better
understand them and to uncover how they are connected. Notably, by expressing
the variance of the Q-targets and that of the empirical actor/critic losses of
these methods, we can analyze the effects of their different components and
compare them. We furthermore formulate an explanation about how these methods
may be affected by choosing different data augmentation transformations in
calculating the target Q-values. This analysis suggests recommendations on how
to exploit data augmentation in a more principled way. In addition, we include
a regularization term called tangent prop, previously proposed in computer
vision, but whose adaptation to DRL is novel to the best of our knowledge. We
evaluate our proposition and validate our analysis in several domains. Compared
to different relevant baselines, we demonstrate that it achieves
state-of-the-art performance in most environments and shows higher sample
efficiency and better generalization ability in some complex environments. | [
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
2402.12187 | 2024-02-19T14:51:20Z | Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep
Learning via Adversarial Training | [
"Leo Hyun Park",
"Jaeuk Kim",
"Myung Gyo Oh",
"Jaewoo Park",
"Taekyoung Kwon"
] | Deep learning models continue to advance in accuracy, yet they remain
vulnerable to adversarial attacks, which often lead to the misclassification of
adversarial examples. Adversarial training is used to mitigate this problem by
increasing robustness against these attacks. However, this approach typically
reduces a model's standard accuracy on clean, non-adversarial samples. The
necessity for deep learning models to balance both robustness and accuracy for
security is obvious, but achieving this balance remains challenging, and the
underlying reasons are yet to be clarified. This paper proposes a novel
adversarial training method called Adversarial Feature Alignment (AFA), to
address these problems. Our research unveils an intriguing insight:
misalignment within the feature space often leads to misclassification,
regardless of whether the samples are benign or adversarial. AFA mitigates this
risk by employing a novel optimization algorithm based on contrastive learning
to alleviate potential feature misalignment. Through our evaluations, we
demonstrate the superior performance of AFA. The baseline AFA delivers higher
robust accuracy than previous adversarial contrastive learning methods while
minimizing the drop in clean accuracy to 1.86% and 8.91% on CIFAR10 and
CIFAR100, respectively, in comparison to cross-entropy. We also show that joint
optimization of AFA and TRADES, accompanied by data augmentation using a recent
diffusion model, achieves state-of-the-art accuracy and robustness. | [
"cs.CV",
"cs.CR",
"cs.LG",
"I.4.0; K.6.5; D.2.7"
] | false |
2402.12198 | 2024-02-19T15:03:04Z | Zero shot VLMs for hate meme detection: Are we there yet? | [
"Naquee Rizwan",
"Paramananda Bhaskar",
"Mithun Das",
"Swadhin Satyaprakash Majhi",
"Punyajoy Saha",
"Animesh Mukherjee"
] | Multimedia content on social media is rapidly evolving, with memes gaining
prominence as a distinctive form. Unfortunately, some malicious users exploit
memes to target individuals or vulnerable communities, making it imperative to
identify and address such instances of hateful memes. Extensive research has
been conducted to address this issue by developing hate meme detection models.
However, a notable limitation of traditional machine/deep learning models is
the requirement for labeled datasets for accurate classification. Recently, the
research community has witnessed the emergence of several visual language
models that have exhibited outstanding performance across various tasks. In
this study, we aim to investigate the efficacy of these visual language models
in handling intricate tasks such as hate meme detection. We use various prompt
settings to focus on zero-shot classification of hateful/harmful memes. Through
our analysis, we observe that large VLMs still struggle with zero-shot
hate meme detection. | [
"cs.CL",
"cs.CV",
"cs.LG"
] | false |
2402.12292 | 2024-02-19T17:12:16Z | Regularization by denoising: Bayesian model and Langevin-within-split
Gibbs sampling | [
"Elhadji C. Faye",
"Mame Diarra Fall",
"Nicolas Dobigeon"
] | This paper introduces a Bayesian framework for image inversion by deriving a
probabilistic counterpart to the regularization-by-denoising (RED) paradigm. It
additionally implements a Monte Carlo algorithm specifically tailored for
sampling from the resulting posterior distribution, based on an asymptotically
exact data augmentation (AXDA). The proposed algorithm is an approximate
instance of split Gibbs sampling (SGS) which embeds one Langevin Monte Carlo
step. The proposed method is applied to common imaging tasks such as
deblurring, inpainting and super-resolution, demonstrating its efficacy through
extensive numerical experiments. These contributions advance Bayesian inference
in imaging by leveraging data-driven regularization strategies within a
probabilistic framework. | [
"stat.ML",
"cs.CV",
"cs.LG"
] | false |
2402.12336 | 2024-02-19T18:09:48Z | Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings
for Robust Large Vision-Language Models | [
"Christian Schlarmann",
"Naman Deep Singh",
"Francesco Croce",
"Matthias Hein"
] | Multi-modal foundation models like OpenFlamingo, LLaVA, and GPT-4 are
increasingly used for various real-world tasks. Prior work has shown that these
models are highly vulnerable to adversarial attacks on the vision modality.
These attacks can be leveraged to spread fake information or defraud users, and
thus pose a significant risk, which makes the robustness of large multi-modal
foundation models a pressing problem. The CLIP model, or one of its variants,
is used as a frozen vision encoder in many vision-language models (VLMs), e.g.
LLaVA and OpenFlamingo. We propose an unsupervised adversarial fine-tuning
scheme to obtain a robust CLIP vision encoder, which yields robustness on all
vision down-stream tasks (VLMs, zero-shot classification) that rely on CLIP. In
particular, we show that stealth-attacks on users of VLMs by a malicious third
party providing manipulated images are no longer possible once one replaces the
original CLIP model with our robust one. No retraining or fine-tuning of the
VLM is required. The code and robust models are available at
https://github.com/chs20/RobustVLM | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] | false |
2402.12451 | 2024-02-19T19:01:01Z | The (R)Evolution of Multimodal Large Language Models: A Survey | [
"Davide Caffagni",
"Federico Cocchi",
"Luca Barsellotti",
"Nicholas Moratelli",
"Sara Sarto",
"Lorenzo Baraldi",
"Lorenzo Baraldi",
"Marcella Cornia",
"Rita Cucchiara"
] | Connecting text and visual modalities plays an essential role in generative
intelligence. For this reason, inspired by the success of large language
models, significant research efforts are being devoted to the development of
Multimodal Large Language Models (MLLMs). These models can seamlessly integrate
visual and textual modalities, both as input and output, while providing a
dialogue-based interface and instruction-following capabilities. In this paper,
we provide a comprehensive review of recent visual-based MLLMs, analyzing their
architectural choices, multimodal alignment strategies, and training
techniques. We also conduct a detailed analysis of these models across a wide
range of tasks, including visual grounding, image generation and editing,
visual understanding, and domain-specific applications. Additionally, we
compile and describe training datasets and evaluation benchmarks, conducting
comparisons among existing models in terms of performance and computational
requirements. Overall, this survey offers a comprehensive overview of the
current state of the art, laying the groundwork for future MLLMs. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.MM"
] | false |
2402.12490 | 2024-02-19T19:54:03Z | Towards Cross-Domain Continual Learning | [
"Marcus de Carvalho",
"Mahardhika Pratama",
"Jie Zhang",
"Chua Haoyan",
"Edward Yapp"
] | Continual learning is a process that involves training learning agents to
sequentially master a stream of tasks or classes without revisiting past data.
The challenge lies in leveraging previously acquired knowledge to learn new
tasks efficiently, while avoiding catastrophic forgetting. Existing methods
primarily focus on single domains, restricting their applicability to specific
problems.
In this work, we introduce a novel approach called Cross-Domain Continual
Learning (CDCL) that addresses the limitation of being restricted to single
supervised domains. Our method combines inter- and intra-task cross-attention
mechanisms within a compact convolutional network. This integration enables the
model to maintain alignment with features from previous tasks, thereby delaying
the data drift that may occur between tasks, while performing unsupervised
cross-domain adaptation (UDA) between related domains. By leveraging an
intra-task-specific pseudo-labeling method, we ensure accurate input pairs for
both labeled and unlabeled samples, enhancing the learning process. To validate
our approach, we conduct extensive experiments on public UDA datasets,
showcasing its positive performance on cross-domain continual learning
challenges. Additionally, our work introduces incremental ideas that contribute
to the advancement of this field.
We make our code and models available to encourage further exploration and
reproduction of our results: https://github.com/Ivsucram/CDCL | [
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
2402.12498 | 2024-02-19T20:05:41Z | Feudal Networks for Visual Navigation | [
"Faith Johnson",
"Bryan Bo Cao",
"Kristin Dana",
"Shubham Jain",
"Ashwin Ashok"
] | Visual navigation follows the intuition that humans can navigate without
detailed maps. A common approach is interactive exploration while building a
topological graph with images at nodes that can be used for planning. Recent
variations learn from passive videos and can navigate using complex social and
semantic cues. However, a significant number of training videos are needed,
large graphs are utilized, and scenes are not truly unseen since odometry is
used. We introduce a new approach to visual navigation using feudal
learning, which employs a hierarchical structure consisting of a worker agent,
a mid-level manager, and a high-level manager. Key to the feudal learning
paradigm, agents at each level see a different aspect of the task and operate
at different spatial and temporal scales. Two unique modules are developed in
this framework. For the high-level manager, we learn a memory proxy map in a
self-supervised manner to record prior observations in a learned latent space
and avoid the use of graphs and odometry. For the mid-level manager, we develop
a waypoint network that outputs intermediate subgoals imitating human waypoint
selection during local navigation. This waypoint network is pre-trained using a
new, small set of teleoperation videos that we make publicly available, with
training environments different from testing environments. The resulting feudal
navigation network achieves near SOTA performance, while providing a novel
no-RL, no-graph, no-odometry, no-metric map approach to the image goal
navigation task. | [
"cs.CV",
"cs.LG",
"cs.RO"
] | false |
2402.12500 | 2024-02-19T20:08:13Z | Integrating kNN with Foundation Models for Adaptable and Privacy-Aware
Image Classification | [
"Sebastian Doerrich",
"Tobias Archut",
"Francesco Di Salvo",
"Christian Ledig"
] | Traditional deep learning models implicitly encode knowledge, limiting their
transparency and ability to adapt to data changes. Yet, this adaptability is
vital for addressing user data privacy concerns. We address this limitation by
storing embeddings of the underlying training data independently of the model
weights, enabling dynamic data modifications without retraining. Specifically,
our approach integrates the $k$-Nearest Neighbor ($k$-NN) classifier with a
vision-based foundation model, pre-trained self-supervised on natural images,
enhancing interpretability and adaptability. We share open-source
implementations of a previously unpublished baseline method as well as our
performance-improving contributions. Quantitative experiments confirm improved
classification across established benchmark datasets and the method's
applicability to distinct medical image classification tasks. Additionally, we
assess the method's robustness in continual learning and data removal
scenarios. The approach exhibits great promise for bridging the gap between
foundation models' performance and challenges tied to data privacy. The source
code is available at
https://github.com/TobArc/privacy-aware-image-classification-with-kNN. | [
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
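The record above couples a frozen self-supervised vision encoder with a k-NN classifier so that training data can be added or removed without retraining. The scikit-learn sketch below captures that pattern with a stand-in `encode` function; the encoder call and the 384-dimensional feature size are placeholders for whichever foundation model is used.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def encode(images):
    """Placeholder for a frozen foundation-model encoder (e.g. a self-supervised ViT)."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(images), 384))        # assumed 384-d embeddings

# Store embeddings separately from any model weights.
train_images, train_labels = list(range(100)), np.repeat([0, 1], 50)
bank_x, bank_y = encode(train_images), np.array(train_labels)

def classify(images, k=5):
    knn = KNeighborsClassifier(n_neighbors=k).fit(bank_x, bank_y)
    return knn.predict(encode(images))

# Data removal (e.g. a privacy request) is just deleting rows -- no retraining needed.
keep = np.arange(len(bank_y)) != 17                    # drop sample 17 from the memory bank
bank_x, bank_y = bank_x[keep], bank_y[keep]
print(classify(list(range(5))).shape)                  # (5,)
```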
2402.12072 | 2024-02-19T11:48:11Z | Robustness and Exploration of Variational and Machine Learning
Approaches to Inverse Problems: An Overview | [
"Alexander Auras",
"Kanchana Vaishnavi Gandikota",
"Hannah Droege",
"Michael Moeller"
] | This paper attempts to provide an overview of current approaches for solving
inverse problems in imaging using variational methods and machine learning. A
special focus lies on point estimators and their robustness against adversarial
perturbations. In this context results of numerical experiments for a
one-dimensional toy problem are provided, showing the robustness of different
approaches and empirically verifying theoretical guarantees. Another focus of
this review is the exploration of the subspace of data consistent solutions
through explicit guidance to satisfy specific semantic or textural properties. | [
"eess.IV",
"cs.CV",
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2402.11744 | 2024-02-19T00:07:28Z | Machine-generated Text Localization | [
"Zhongping Zhang",
"Wenda Qin",
"Bryan A. Plummer"
] | Machine-Generated Text (MGT) detection aims to identify a piece of text as
machine or human written. Prior work has primarily formulated MGT as a binary
classification task over an entire document, with limited work exploring cases
where only part of a document is machine generated. This paper provides the
first in-depth study of MGT that localizes the portions of a document that were
machine generated. Thus, if a bad actor were to change a key portion of a news
article to spread misinformation, whole document MGT detection may fail since
the vast majority is human written, but our approach can succeed due to its
granular approach. A key challenge in our MGT localization task is that short
spans of text, e.g., a single sentence, provide little information indicating
if it is machine generated due to its short length. To address this, we
leverage contextual information, where we predict whether multiple sentences
are machine or human written at once. This enables our approach to identify
changes in style or content to boost performance. A gain of 4-13% mean Average
Precision (mAP) over prior work demonstrates the effectiveness of our approach on
five diverse datasets: GoodNews, VisualNews, WikiText, Essay, and WP. We
release our implementation at
https://github.com/Zhongping-Zhang/MGT_Localization. | [
"cs.CL"
] | false |
2402.11750 | 2024-02-19T00:39:31Z | In-Context Learning Demonstration Selection via Influence Analysis | [
"Vinay M. S.",
"Minh-Hao Van",
"Xintao Wu"
] | Large Language Models (LLMs) have demonstrated their In-Context Learning
(ICL) capabilities, which provide an opportunity to perform few-shot learning
without any gradient updates. Despite its multiple benefits, ICL generalization
performance is sensitive to the selected demonstrations. Selecting effective
demonstrations for ICL is still an open research challenge. To address this
challenge, we propose a demonstration selection method called InfICL which
analyzes influences of training samples through influence functions.
Identifying highly influential training samples can potentially aid in
uplifting the ICL generalization performance. To limit the running cost of
InfICL, we only employ the LLM to generate sample embeddings, and don't perform
any costly fine-tuning. We perform an empirical study on multiple real-world
datasets and show merits of our InfICL against state-of-the-art baselines. | [
"cs.CL"
] | false |
2402.11811 | 2024-02-19T03:56:44Z | FIPO: Free-form Instruction-oriented Prompt Optimization with Preference
Dataset and Modular Fine-tuning Schema | [
"Junru Lu",
"Siyu An",
"Min Zhang",
"Yulan He",
"Di Yin",
"Xing Sun"
] | In the quest to make the deep intelligence of Large Language Models
(LLMs) accessible in end-user-bot interactions, the art of prompt crafting
emerges as a critical yet complex task for the average user. In contrast to
previous model-oriented yet instruction-agnostic Automatic Prompt
Optimization methodologies, yielding polished results for predefined target
models while suffering rapid degradation with out-of-box models, we present
Free-form Instruction-oriented Prompt Optimization (FIPO). This approach is
supported by our large-scale prompt preference dataset and employs a modular
fine-tuning schema. The FIPO schema reimagines the optimization process into
manageable modules, anchored by a meta prompt that dynamically adapts content.
This allows for the flexible integration of the raw task instruction, the
optional instruction response, and the optional ground truth to produce finely
optimized task prompts. The FIPO preference dataset is meticulously constructed
using the optimal and suboptimal LLMs, undergoing rigorous cross-verification
by human experts and analytical models. Applying the insights from the data
with Tulu2 models and fine-tuning strategies, we validate the efficacy of FIPO
schema across five public benchmarks. Codes, data and scripts are here:
https://github.com/LuJunru/FIPO_Project. | [
"cs.CL"
] | false |
2402.11819 | 2024-02-19T04:19:36Z | Head-wise Shareable Attention for Large Language Models | [
"Zouying Cao",
"Yifei Yang",
"Hai Zhao"
] | Large Language Models (LLMs) suffer from huge number of parameters, which
restricts their deployment on edge devices. Weight sharing is one promising
solution that encourages weight reuse, effectively reducing memory usage with
less performance drop. However, current weight sharing techniques primarily
focus on small-scale models like BERT and employ coarse-grained sharing rules,
e.g., layer-wise. This becomes limiting given the prevalence of LLMs and
sharing an entire layer or block obviously diminishes the flexibility of weight
sharing. In this paper, we present a perspective on head-wise shareable
attention for large language models. We further propose two
memory-efficient methods that share parameters across attention heads, with a
specific focus on LLMs. Both of them use the same dynamic strategy to select
the shared weight matrices. The first method directly reuses the pre-trained
weights without retraining, denoted as DirectShare. The second
method first post-trains with constraint on weight matrix similarity and then
shares, denoted as PostShare. Experimental results reveal our
head-wise shared models still maintain satisfactory capabilities, demonstrating
the feasibility of fine-grained weight sharing applied to LLMs. | [
"cs.CL"
] | false |
2402.11875 | 2024-02-19T06:32:39Z | M2K-VDG: Model-Adaptive Multimodal Knowledge Anchor Enhanced
Video-grounded Dialogue Generation | [
"Hongcheng Liu",
"Pingjie Wang",
"Yu Wang",
"Yanfeng Wang"
] | Video-grounded dialogue generation (VDG) requires the system to generate a
fluent and accurate answer based on multimodal knowledge. However, the
difficulty in multimodal knowledge utilization brings serious hallucinations to
VDG models in practice. Although previous works mitigate the hallucination in a
variety of ways, they hardly take notice of the importance of the multimodal
knowledge anchor answer tokens. In this paper, we reveal via perplexity that
different VDG models experience varying hallucinations and exhibit diverse
anchor tokens. Based on this observation, we propose M2K-VDG, a model-adaptive
multimodal knowledge anchor enhancement framework for hallucination reduction.
Furthermore, we introduce the counterfactual effect for more accurate anchor
token detection. The experimental results on three popular benchmarks exhibit
the superiority of our approach over state-of-the-art methods, demonstrating
its effectiveness in reducing hallucinations. | [
"cs.CL"
] | false |
2402.11889 | 2024-02-19T06:58:42Z | ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large
Language Models with Reverse Prompt Contrastive Decoding | [
"Qihuang Zhong",
"Liang Ding",
"Juhua Liu",
"Bo Du",
"Dacheng Tao"
] | With the development of instruction-tuned large language models (LLMs),
improving the safety of LLMs has become more critical. However, the current
approaches for aligning LLM outputs with expected safety usually require
substantial training efforts, e.g., high-quality safety data and expensive
computational resources, which are costly and inefficient. To this end, we
present reverse prompt contrastive decoding (ROSE), a simple-yet-effective
method to directly boost the safety of existing instruction-tuned LLMs without
any additional training. The principle of ROSE is to improve the probability of
desired safe output via suppressing the undesired output induced by the
carefully-designed reverse prompts. Experiments on 6 safety and 2
general-purpose tasks show that, our ROSE not only brings consistent and
significant safety improvements (up to +13.8% safety score) upon 5 types of
instruction-tuned LLMs, but also benefits the general-purpose ability of LLMs.
In-depth analyses explore the underlying mechanism of ROSE, and reveal when and
where to use it. | [
"cs.CL"
] | false |
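ROSE, as described in the record above, suppresses unsafe continuations by contrasting the token distribution under the normal prompt with the distribution under a carefully designed reverse (unsafe-inducing) prompt. The NumPy sketch below shows that contrastive adjustment on a toy vocabulary; the logits, the reverse prompt, and the coefficient alpha are illustrative, and a real implementation would obtain logits from the LLM at every decoding step.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rose_next_token_dist(logits_prompt, logits_reverse_prompt, alpha=0.5):
    """Contrastive decoding: boost tokens the normal prompt favours over the reverse prompt."""
    adjusted = logits_prompt - alpha * logits_reverse_prompt
    return softmax(adjusted)

# Toy 5-token vocabulary: token 3 is strongly induced by the reverse (unsafe) prompt.
logits_prompt  = np.array([1.0, 0.2, 0.5, 1.1, -0.3])
logits_reverse = np.array([0.1, 0.0, 0.2, 2.5, -0.1])
print(softmax(logits_prompt).round(3))                               # before: token 3 is the argmax
print(rose_next_token_dist(logits_prompt, logits_reverse).round(3))  # after: token 3 is suppressed
```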
2402.11890 | 2024-02-19T07:01:10Z | Revisiting Knowledge Distillation for Autoregressive Language Models | [
"Qihuang Zhong",
"Liang Ding",
"Li Shen",
"Juhua Liu",
"Bo Du",
"Dacheng Tao"
] | Knowledge distillation (KD) is a common approach to compress a teacher model
to reduce its inference cost and memory footprint, by training a smaller
student model. However, in the context of autoregressive language models (LMs),
we empirically find that larger teacher LMs can result in a dramatically
poorer student. In response to this problem, we conduct a series of analyses
and reveal that different tokens have different teaching modes, neglecting
which will lead to performance degradation. Motivated by this, we propose a
simple yet effective adaptive teaching approach (ATKD) to improve the KD. The
core of ATKD is to reduce rote learning and make teaching more diverse and
flexible. Extensive experiments on 8 LM tasks show that, with the help of ATKD,
various baseline KD methods can achieve consistent and significant performance
gains (up to +3.04% average score) across all model types and sizes. More
encouragingly, ATKD can improve the student model generalization effectively. | [
"cs.CL"
] | false |
2402.11896 | 2024-02-19T07:22:29Z | SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning | [
"Zhihao Wen",
"Jie Zhang",
"Yuan Fang"
] | Fine-tuning all parameters of large language models (LLMs) necessitates
substantial computational power and extended time. Latest advancements in
parameter-efficient fine-tuning (PEFT) techniques, such as Adapter tuning and
LoRA, allow for adjustments to only a minor fraction of the parameters of these
LLMs. Concurrently, it has been noted that the issue of over-smoothing
diminishes the effectiveness of these Transformer-based LLMs, resulting in
suboptimal performances in downstream tasks. In this paper, we present SIBO,
which is a SImple BOoster to enhance PEFT, by injecting an initial residual.
SIBO is straightforward and readily extensible to a range of state-of-the-art
PEFT techniques to alleviate over-smoothing and enhance performance. Extensive
experiments on 22 benchmark datasets demonstrate that SIBO significantly
enhances the performance of various strong baselines, achieving up to 15.7% and
23.5% improvement over existing PEFT methods on the arithmetic and commonsense
reasoning tasks, respectively. | [
"cs.CL"
] | false |
2402.11900 | 2024-02-19T07:34:10Z | Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large
Language Models | [
"Tianjie Ju",
"Yijin Chen",
"Xinwei Yuan",
"Zhuosheng Zhang",
"Wei Du",
"Yubin Zheng",
"Gongshen Liu"
] | Recent work has showcased the powerful capability of large language models
(LLMs) in recalling knowledge and reasoning. However, the reliability of LLMs
in combining these two capabilities into reasoning through multi-hop facts has
not been widely explored. This paper systematically investigates the
possibilities for LLMs to utilize shortcuts based on direct connections between
the initial and terminal entities of multi-hop knowledge. We first explore the
existence of factual shortcuts through Knowledge Neurons, revealing that: (i)
the strength of factual shortcuts is highly correlated with the frequency of
co-occurrence of initial and terminal entities in the pre-training corpora;
(ii) few-shot prompting leverages more shortcuts in answering multi-hop
questions compared to chain-of-thought prompting. Then, we analyze the risks
posed by factual shortcuts from the perspective of multi-hop knowledge editing.
Analysis shows that approximately 20% of the failures are attributed to
shortcuts, and the initial and terminal entities in these failure instances
usually have higher co-occurrences in the pre-training corpus. Finally, we
propose erasing shortcut neurons to mitigate the associated risks and find that
this approach significantly reduces failures in multiple-hop knowledge editing
caused by shortcuts. | [
"cs.CL"
] | false |
2402.11905 | 2024-02-19T07:45:17Z | Learning to Edit: Aligning LLMs with Knowledge Editing | [
"Yuxin Jiang",
"Yufei Wang",
"Chuhan Wu",
"Wanjun Zhong",
"Xingshan Zeng",
"Jiahui Gao",
"Liangyou Li",
"Xin Jiang",
"Lifeng Shang",
"Ruiming Tang",
"Qun Liu",
"Wei Wang"
] | Knowledge editing techniques, aiming to efficiently modify a minor proportion
of knowledge in large language models (LLMs) without negatively impacting
performance across other inputs, have garnered widespread attention. However,
existing methods predominantly rely on memorizing the updated knowledge,
impeding LLMs from effectively combining the new knowledge with their inherent
knowledge when answering questions. To this end, we propose a Learning to Edit
(LTE) framework, focusing on teaching LLMs to apply updated knowledge into
input questions, inspired by the philosophy of "Teach a man to fish." LTE
features a two-phase process: (i) the Alignment Phase, which fine-tunes LLMs on
a meticulously curated parallel dataset to make reliable, in-scope edits while
preserving out-of-scope information and linguistic proficiency; and (ii) the
Inference Phase, which employs a retrieval-based mechanism for real-time and
mass knowledge editing. By comparing our approach with seven advanced baselines
across four popular knowledge editing benchmarks and two LLM architectures, we
demonstrate LTE's superiority in knowledge editing performance, robustness in
both batch and sequential editing, minimal interference on general tasks, and
rapid editing speeds. The data and code are available at
https://github.com/YJiangcm/LTE. | [
"cs.CL"
] | false |
2402.11907 | 2024-02-19T07:46:40Z | Direct Large Language Model Alignment Through Self-Rewarding Contrastive
Prompt Distillation | [
"Aiwei Liu",
"Haoping Bai",
"Zhiyun Lu",
"Xiang Kong",
"Simon Wang",
"Jiulong Shan",
"Meng Cao",
"Lijie Wen"
] | Aligning large language models (LLMs) with human expectations without
human-annotated preference data is an important problem. In this paper, we
propose a method to evaluate the response preference by using the output
probabilities of response pairs under contrastive prompt pairs, which could
achieve better performance on LLaMA2-7B and LLaMA2-13B compared to RLAIF. Based
on this, we propose an automatic alignment method, Direct Large Model Alignment
(DLMA). First, we use contrastive prompt pairs to automatically generate
preference data. Then, we continue to evaluate the generated preference data
using contrastive prompt pairs and calculate a self-rewarding score. Finally,
we use the DPO algorithm to effectively align LLMs by combining this
self-rewarding score. In the experimental stage, our DLMA method could surpass
the RLHF method without relying on human-annotated preference data. | [
"cs.CL",
"68T50",
"I.2.7"
] | false |
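DLMA, per the record above, scores a response pair by comparing output probabilities under contrastive prompt variants and uses the gap as a self-rewarding score. The sketch below expresses that score with a caller-supplied `logprob_fn(prompt, response)` standing in for an actual LLM scoring call; the prompt wording, the toy scorer, and the decision rule are illustrative assumptions.

```python
def self_rewarding_score(logprob_fn, question, response,
                         positive_prefix="Answer helpfully and harmlessly: ",
                         negative_prefix="Answer in a harmful, unhelpful way: "):
    """Higher = `response` is more likely under the positive prompt than under the negative one.
    `logprob_fn(prompt, response)` is a caller-supplied LLM scoring function (hypothetical here)."""
    return (logprob_fn(positive_prefix + question, response)
            - logprob_fn(negative_prefix + question, response))

def build_preference_pair(logprob_fn, question, response_a, response_b):
    """Return (chosen, rejected) by the contrastive self-rewarding score, ready for DPO-style training."""
    a = self_rewarding_score(logprob_fn, question, response_a)
    b = self_rewarding_score(logprob_fn, question, response_b)
    return (response_a, response_b) if a >= b else (response_b, response_a)

# Toy stand-in scorer so the sketch runs end-to-end; a real one would query the LLM.
toy_logprob = lambda prompt, response: -abs(len(prompt) % 7 - len(response) % 7)
print(build_preference_pair(toy_logprob, "How do I stay safe online?",
                            "Use strong, unique passwords.", "Share your passwords freely."))
```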
2402.11943 | 2024-02-19T08:32:27Z | LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with
External Knowledge Augmentation | [
"Keyang Xuan",
"Li Yi",
"Fan Yang",
"Ruochen Wu",
"Yi R. Fung",
"Heng Ji"
] | The rise of multimodal misinformation on social platforms poses significant
challenges for individuals and societies. Its increased credibility and broader
impact compared to textual misinformation make detection complex, requiring
robust reasoning across diverse media types and profound knowledge for accurate
verification. The emergence of Large Vision Language Model (LVLM) offers a
potential solution to this problem. Leveraging their proficiency in processing
visual and textual information, LVLM demonstrates promising capabilities in
recognizing complex information and exhibiting strong reasoning skills. In this
paper, we first investigate the potential of LVLM on multimodal misinformation
detection. We find that although LVLMs outperform LLMs, their reasoning offers
limited power when supporting evidence is lacking.
Based on these observations, we propose LEMMA: LVLM-Enhanced Multimodal
Misinformation Detection with External Knowledge Augmentation. LEMMA leverages
LVLM intuition and reasoning capabilities while augmenting them with external
knowledge to enhance the accuracy of misinformation detection. Our method
improves the accuracy over the top baseline LVLM by 7% and 13% on Twitter and
Fakeddit datasets respectively. | [
"cs.CL"
] | false |
2402.11958 | 2024-02-19T09:00:10Z | Automatic Evaluation for Mental Health Counseling using LLMs | [
"Anqi Li",
"Yu Lu",
"Nirui Song",
"Shuai Zhang",
"Lizhi Ma",
"Zhenzhong Lan"
] | High-quality psychological counseling is crucial for mental health worldwide,
and timely evaluation is vital for ensuring its effectiveness. However,
obtaining professional evaluation for each counseling session is expensive and
challenging. Existing methods that rely on self or third-party manual reports
to assess the quality of counseling suffer from subjective biases and are
time-consuming.
To address the above challenges, this paper proposes an innovative and efficient
automatic approach using large language models (LLMs) to evaluate the working
alliance in counseling conversations. We collected a comprehensive counseling
dataset and conducted multiple third-party evaluations based on therapeutic
relationship theory. Our LLM-based evaluation, combined with our guidelines,
shows high agreement with human evaluations and provides valuable insights into
counseling scripts. This highlights the potential of LLMs as supervisory tools
for psychotherapists. By integrating LLMs into the evaluation process, our
approach offers a cost-effective and dependable means of assessing counseling
quality, enhancing overall effectiveness. | [
"cs.CL"
] | false |
2402.11968 | 2024-02-19T09:15:28Z | What Do Dialect Speakers Want? A Survey of Attitudes Towards Language
Technology for German Dialects | [
"Verena Blaschke",
"Christoph Purschke",
"Hinrich Schütze",
"Barbara Plank"
] | Natural language processing (NLP) has largely focused on modelling
standardized languages. More recently, attention has increasingly shifted to
local, non-standardized languages and dialects. However, the relevant speaker
populations' needs and wishes with respect to NLP tools are largely unknown. In
this paper, we focus on dialects and regional languages related to German -- a
group of varieties that is heterogeneous in terms of prestige and
standardization. We survey speakers of these varieties (N=327) and present
their opinions on hypothetical language technologies for their dialects.
Although attitudes vary among subgroups of our respondents, we find that
respondents are especially in favour of potential NLP tools that work with
dialectal input (especially audio input) such as virtual assistants, and less
so for applications that produce dialectal output such as machine translation
or spellcheckers. | [
"cs.CL"
] | false |
2402.11975 | 2024-02-19T09:19:50Z | Compress to Impress: Unleashing the Potential of Compressive Memory in
Real-World Long-Term Conversations | [
"Nuo Chen",
"Hongguang Li",
"Juhua Huang",
"Baoyuan Wang",
"Jia Li"
] | Existing retrieval-based methods have made significant strides in maintaining
long-term conversations. However, these approaches face challenges in memory
database management and accurate memory retrieval, hindering their efficacy in
dynamic, real-world interactions. This study introduces a novel framework,
COmpressive Memory-Enhanced Dialogue sYstems (COMEDY), which eschews
traditional retrieval modules and memory databases. Instead, COMEDY adopts a
"One-for-All" approach, utilizing a single language model to manage memory
generation, compression, and response generation. Central to this framework is
the concept of compressive memory, which integrates session-specific
summaries, user-bot dynamics, and past events into a concise memory format. To
support COMEDY, we curated a large-scale Chinese instruction-tuning dataset,
Dolphin, derived from real user-chatbot interactions. Comparative evaluations
demonstrate COMEDY's superiority over traditional retrieval-based methods in
producing more nuanced and human-like conversational experiences. Our codes are
available at https://github.com/nuochenpku/COMEDY. | [
"cs.CL"
] | false |
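The "one-for-all" idea above can be pictured as a loop in which the same model both answers and refreshes a single compressive memory string. The sketch below is illustrative only: COMEDY is a fine-tuned model, whereas here a prompted API model (`gpt-4o-mini`) and hand-written prompts stand in for it.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
MODEL = "gpt-4o-mini"  # stand-in for a single dialogue model

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

memory = ""  # one compressive memory instead of a retrieval database

def respond(user_msg: str) -> str:
    global memory
    # The model answers conditioned on the compressed memory.
    reply = chat(
        f"Compressed memory of past sessions:\n{memory or '(empty)'}\n\n"
        f"User: {user_msg}\nReply as a helpful companion."
    )
    # The same model then folds the new exchange back into the memory.
    memory = chat(
        "Update this compressed memory (session summaries, user traits, past "
        "events) given the new exchange. Keep it under 150 words.\n"
        f"Memory:\n{memory}\nExchange:\nUser: {user_msg}\nBot: {reply}"
    )
    return reply

print(respond("I finally ran my first 10k this weekend!"))
```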
2402.12025 | 2024-02-19T10:34:13Z | Speech Translation with Speech Foundation Models and Large Language
Models: What is There and What is Missing? | [
"Marco Gaido",
"Sara Papi",
"Matteo Negri",
"Luisa Bentivogli"
] | The field of natural language processing (NLP) has recently witnessed a
transformative shift with the emergence of foundation models, particularly
Large Language Models (LLMs) that have revolutionized text-based NLP. This
paradigm has extended to other modalities, including speech, where researchers
are actively exploring the combination of Speech Foundation Models (SFMs) and
LLMs into single, unified models capable of addressing multimodal tasks. Among
such tasks, this paper focuses on speech-to-text translation (ST). By examining
the published papers on the topic, we propose a unified view of the
architectural solutions and training strategies presented so far, highlighting
similarities and differences among them. Based on this examination, we not only
organize the lessons learned but also show how diverse settings and evaluation
approaches hinder the identification of the best-performing solution for each
architectural building block and training choice. Lastly, we outline
recommendations for future works on the topic aimed at better understanding the
strengths and weaknesses of the SFM+LLM solutions for ST. | [
"cs.CL"
] | false |
2402.12048 | 2024-02-19T11:02:05Z | Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large
Language Models | [
"Didi Zhu",
"Zhongyi Sun",
"Zexi Li",
"Tao Shen",
"Ke Yan",
"Shouhong Ding",
"Kun Kuang",
"Chao Wu"
] | Catastrophic forgetting emerges as a critical challenge when fine-tuning
multi-modal large language models (MLLMs), where improving performance on
unseen tasks often leads to a significant performance drop on the original
tasks. This paper presents a comprehensive analysis of catastrophic forgetting
in MLLMs and introduces a post-training adjustment method called Model Tailor.
Our method primarily preserves the pre-trained parameters while replacing a
small number ($\leq$ 10\%) of fine-tuned parameters, maintaining $\sim$ 99\%
effectiveness on original tasks versus pre-training, and achieving $\sim$ 97\%
on new tasks compared to standard fine-tuning. Specifically, we derive a sparse
mask to identify the "model patch", based on a fusion strategy that integrates
salience and sensitivity analysis. Subsequently, a compensation mechanism is
introduced to "decorate the patch", enhancing the model's performance on both
target and original tasks. Additionally, our method is adaptable to multi-task
scenarios. Through extensive experiments on InstructBLIP and LLaVA-1.5 in both
image captioning and visual question answering tasks, our approach demonstrates
significant task adaptability while preserving inherent pre-trained
capabilities. | [
"cs.CL"
] | false |
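The core "model patch" operation above amounts to keeping only a small, carefully chosen fraction of fine-tuned parameters and reverting the rest. The sketch below uses the magnitude of the fine-tuning shift as a simple stand-in for the paper's fused salience/sensitivity score and omits the compensation step; budget and names are illustrative, not the authors' implementation.

```python
import torch

def graft_model_patch(pretrained: dict, finetuned: dict, budget: float = 0.10) -> dict:
    """Keep the most salient `budget` fraction of fine-tuned parameters;
    revert everything else to its pre-trained value (Model-Tailor-style sketch)."""
    # Score every scalar parameter by how far fine-tuning moved it.
    deltas = {k: (finetuned[k] - pretrained[k]).abs() for k in pretrained}
    all_scores = torch.cat([d.flatten() for d in deltas.values()])
    k = max(1, int(budget * all_scores.numel()))
    threshold = torch.topk(all_scores, k).values.min()

    patched = {}
    for name, pre in pretrained.items():
        mask = deltas[name] >= threshold          # sparse "model patch"
        patched[name] = torch.where(mask, finetuned[name], pre)
    return patched

# Toy usage with random tensors standing in for two checkpoints.
pre = {"w": torch.randn(4, 4), "b": torch.randn(4)}
fin = {k: v + 0.1 * torch.randn_like(v) for k, v in pre.items()}
merged = graft_model_patch(pre, fin, budget=0.10)
```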
2402.12055 | 2024-02-19T11:19:02Z | Are LLM-based Evaluators Confusing NLG Quality Criteria? | [
"Xinyu Hu",
"Mingqi Gao",
"Sen Hu",
"Yang Zhang",
"Yicheng Chen",
"Teng Xu",
"Xiaojun Wan"
] | Some prior work has shown that LLMs perform well in NLG evaluation for
different tasks. However, we discover that LLMs seem to confuse different
evaluation criteria, which reduces their reliability. For further verification,
we first avoid the issues of inconsistent conceptualization and vague expression
in existing NLG quality criteria themselves by summarizing a clear hierarchical
classification system covering 11 common aspects and the corresponding criteria
used in previous studies. Inspired by behavioral
testing, we elaborately design 18 types of aspect-targeted perturbation attacks
for fine-grained analysis of the evaluation behaviors of different LLMs. We
also conduct human annotations beyond the guidance of the classification system
to validate the impact of the perturbations. Our experimental results reveal
confusion issues inherent in LLMs, as well as other noteworthy phenomena, and
necessitate further research and improvements for LLM-based evaluation. | [
"cs.CL"
] | false |
2402.12080 | 2024-02-19T12:04:25Z | Can LLMs Compute with Reasons? | [
"Harshit Sandilya",
"Peehu Raj",
"Jainit Sushil Bafna",
"Srija Mukhopadhyay",
"Shivansh Sharma",
"Ellwil Sharma",
"Arastu Sharma",
"Neeta Trivedi",
"Manish Shrivastava",
"Rajesh Kumar"
] | Large language models (LLMs) often struggle with complex mathematical tasks,
prone to "hallucinating" incorrect answers due to their reliance on statistical
patterns. This limitation is further amplified in average Small Language Models (SLMs) with
limited context and training data. To address this challenge, we propose an
"Inductive Learning" approach utilizing a distributed network of SLMs. This
network leverages error-based learning and hint incorporation to refine the
reasoning capabilities of SLMs. Our goal is to provide a framework that
empowers SLMs to approach the level of logic-based applications achieved by
high-parameter models, potentially benefiting any language model. Ultimately,
this novel concept paves the way for bridging the logical gap between humans
and LLMs across various fields. | [
"cs.CL",
"68T50",
"I.2.7"
] | false |
2402.12174 | 2024-02-19T14:28:31Z | BIDER: Bridging Knowledge Inconsistency for Efficient
Retrieval-Augmented LLMs via Key Supporting Evidence | [
"Jiajie Jin",
"Yutao Zhu",
"Yujia Zhou",
"Zhicheng Dou"
] | Retrieval-augmented large language models (LLMs) have demonstrated efficacy
in knowledge-intensive tasks such as open-domain QA, addressing inherent
challenges in knowledge update and factual inadequacy. However, inconsistencies
between the retrieved knowledge and the knowledge the LLM actually needs lead to
a decline in the LLM's answer quality. This paper introduces BIDER, an approach
that refines retrieval documents into Key Supporting Evidence (KSE) through
knowledge synthesis, supervised fine-tuning (SFT), and preference alignment. We
train BIDER by learning from crafted KSE, while aligning its output with the
LLM's information acquisition preferences through reinforcement learning.
Evaluations across five datasets show BIDER boosts LLMs' answer quality by 7%
while reducing input content length in retrieval documents by 80%,
outperforming existing methods. The proposed KSE simulation effectively equips
LLMs with essential information for accurate question answering. | [
"cs.CL"
] | false |
2402.12193 | 2024-02-19T14:56:18Z | A Chinese Dataset for Evaluating the Safeguards in Large Language Models | [
"Yuxia Wang",
"Zenan Zhai",
"Haonan Li",
"Xudong Han",
"Lizhi Lin",
"Zhenxuan Zhang",
"Jingru Zhao",
"Preslav Nakov",
"Timothy Baldwin"
] | Many studies have demonstrated that large language models (LLMs) can produce
harmful responses, exposing users to unexpected risks when LLMs are deployed.
Previous studies have proposed comprehensive taxonomies of the risks posed by
LLMs, as well as corresponding prompts that can be used to examine the safety
mechanisms of LLMs. However, the focus has been almost exclusively on English,
and little has been explored for other languages. Here we aim to bridge this
gap. We first introduce a dataset for the safety evaluation of Chinese LLMs,
and then extend it to two other scenarios that can be used to better identify
false negative and false positive examples in terms of risky prompt rejections.
We further present a set of fine-grained safety assessment criteria for each
risk type, facilitating both manual annotation and automatic evaluation in
terms of LLM response harmfulness. Our experiments on five LLMs show that
region-specific risks are the prevalent type of risk, presenting the major
issue with all Chinese LLMs we experimented with. Warning: this paper contains
example data that may be offensive, harmful, or biased. | [
"cs.CL"
] | false |
2402.12195 | 2024-02-19T14:59:07Z | Browse and Concentrate: Comprehending Multimodal Content via prior-LLM
Context Fusion | [
"Ziyue Wang",
"Chi Chen",
"Yiqi Zhu",
"Fuwen Luo",
"Peng Li",
"Ming Yan",
"Ji Zhang",
"Fei Huang",
"Maosong Sun",
"Yang Liu"
] | With the bloom of Large Language Models (LLMs), Multimodal Large Language
Models (MLLMs) that incorporate LLMs with pre-trained vision models have
recently demonstrated impressive performance across diverse vision-language
tasks. However, they fall short of comprehending context involving multiple
images. A primary reason for this shortcoming is that the visual features for
each image are encoded individually by frozen encoders before being fed into the
LLM backbone, lacking awareness of the other images and the multimodal
instructions. We term this issue prior-LLM modality isolation and propose a
two-phase paradigm, browse-and-concentrate, to enable in-depth multimodal
context fusion prior to feeding the features into LLMs. This paradigm initially
"browses" through the inputs for essential insights, and then revisits the
inputs to "concentrate" on crucial details, guided by these insights, to
achieve a more comprehensive understanding of the multimodal inputs.
Additionally, we develop training strategies specifically to enhance the
understanding of multi-image inputs. Our method markedly boosts the performance
on 7 multi-image scenarios, improving average accuracy by
2.13% and 7.60% over strong MLLM baselines with 3B and 11B LLMs,
respectively. | [
"cs.CL"
] | false |
2402.12204 | 2024-02-19T15:07:32Z | Enhancing Multilingual Capabilities of Large Language Models through
Self-Distillation from Resource-Rich Languages | [
"Yuanchi Zhang",
"Yile Wang",
"Zijun Liu",
"Shuo Wang",
"Xiaolong Wang",
"Peng Li",
"Maosong Sun",
"Yang Liu"
] | While large language models (LLMs) have been pre-trained on multilingual
corpora, their performance still lags behind in most languages compared to a
few resource-rich languages. One common approach to mitigate this issue is to
translate training data from resource-rich languages into other languages and
then continue training. However, using the data obtained solely relying on
translation while ignoring the original capabilities of LLMs across languages
is not always effective, which we show will limit the performance of
cross-lingual knowledge transfer. In this work, we propose SDRRL, a method
based on Self-Distillation from Resource-Rich Languages that effectively
improves multilingual performance by leveraging the internal capabilities of
LLMs on resource-rich languages. We evaluate SDRRL on different LLMs (LLaMA-2 and
SeaLLM) and source languages across various comprehension and generation tasks;
experimental results demonstrate that SDRRL can significantly enhance
multilingual capabilities while minimizing the impact on original performance
in resource-rich languages. | [
"cs.CL"
] | false |
2402.12212 | 2024-02-19T15:14:15Z | Polarization of Autonomous Generative AI Agents Under Echo Chambers | [
"Masaya Ohagi"
] | Online social networks often create echo chambers where people only hear
opinions reinforcing their beliefs. An echo chamber often generates
polarization, leading to conflicts caused by people with radical opinions, such
as the January 6, 2021, attack on the US Capitol. The echo chamber has been
viewed as a human-specific problem, but this implicit assumption is becoming
less reasonable as large language models, such as ChatGPT, acquire social
abilities. In response to this situation, we investigated the potential for
polarization to occur among a group of autonomous AI agents based on generative
language models in an echo chamber environment. We had AI agents discuss
specific topics and analyzed how the group's opinions changed as the discussion
progressed. As a result, we found that the group of agents based on ChatGPT
tended to become polarized in echo chamber environments. The analysis of
opinion transitions shows that this result is caused by ChatGPT's high prompt
understanding ability to update its opinion by considering its own and
surrounding agents' opinions. We conducted additional experiments to
investigate under what specific conditions AI agents tended to polarize. As a
result, we identified factors that strongly influence polarization, such as the
agent's persona. These factors should be monitored to prevent the polarization
of AI agents. | [
"cs.CL"
] | false |
2402.12233 | 2024-02-19T15:42:54Z | Empirical Study on Updating Key-Value Memories in Transformer
Feed-forward Layers | [
"Zihan Qiu",
"Zeyu Huang",
"Youcheng Huang",
"Jie Fu"
] | The feed-forward networks (FFNs) in transformers are recognized as a group of
key-value neural memories that store abstract high-level knowledge. In this
work, we conduct an empirical ablation study on updating keys (the first layer in
the FFN block) or values (the second layer in the FFN block). We compare those
two methods in various knowledge editing and fine-tuning tasks of large
language models to draw insights to understand FFNs further. Code is available
at $\href{https://github.com/qiuzh20/Tuning-keys-v.s.-values}{this\,repo}$. | [
"cs.CL"
] | false |
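The ablation above contrasts tuning the first ("key") versus second ("value") linear layer of every transformer FFN. A minimal sketch of how one might freeze everything except one of the two is shown below, assuming the Hugging Face GPT-2 module layout, where `mlp.c_fc` is the first FFN layer and `mlp.c_proj` the second; other architectures use different parameter names, and this is not the authors' training setup.

```python
from transformers import GPT2LMHeadModel

def tune_only_ffn_part(model, part: str = "keys"):
    """Freeze all parameters except the chosen half of every FFN block."""
    # GPT-2 naming: mlp.c_fc = first FFN layer ("keys"), mlp.c_proj = second ("values").
    target = "mlp.c_fc" if part == "keys" else "mlp.c_proj"
    for name, param in model.named_parameters():
        param.requires_grad = target in name
    return model

model = GPT2LMHeadModel.from_pretrained("gpt2")
model = tune_only_ffn_part(model, part="values")
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} tensors left trainable, e.g. {trainable[0]}")
```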
2402.12234 | 2024-02-19T15:43:35Z | Task-Oriented Dialogue with In-Context Learning | [
"Tom Bocklisch",
"Thomas Werkmeister",
"Daksh Varshneya",
"Alan Nichol"
] | We describe a system for building task-oriented dialogue systems combining
the in-context learning abilities of large language models (LLMs) with the
deterministic execution of business logic. LLMs are used to translate between
the surface form of the conversation and a domain-specific language (DSL) which
is used to progress the business logic. We compare our approach to the
intent-based NLU approach predominantly used in industry today. Our experiments
show that developing chatbots with our system requires significantly less
effort than established approaches, that these chatbots can successfully
navigate complex dialogues which are extremely challenging for NLU-based
systems, and that our system has desirable properties for scaling task-oriented
dialogue systems to a large number of tasks. We make our implementation
available for use and further study. | [
"cs.CL"
] | false |
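The split described above (LLM for translation, deterministic code for business logic) can be illustrated with a tiny example: the model only emits commands in a domain-specific language, and a handler executes them. The DSL, prompts, and handlers here are invented for illustration and are not the authors' system.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

DSL_SPEC = """Commands (one per line):
start_flow transfer_money
set_slot amount <number>
set_slot recipient <name>
"""

def utterance_to_dsl(utterance: str) -> list[str]:
    """The LLM maps surface text to DSL commands; it never executes logic itself."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"{DSL_SPEC}\nTranslate into commands only:\n{utterance}"}],
    )
    return [l.strip() for l in resp.choices[0].message.content.splitlines() if l.strip()]

def execute(commands: list[str], state: dict) -> dict:
    """Deterministic business logic: the only place where flows and slots change."""
    for cmd in commands:
        parts = cmd.split()
        if parts[0] == "start_flow" and len(parts) > 1:
            state["flow"] = parts[1]
        elif parts[0] == "set_slot" and len(parts) > 2:
            state.setdefault("slots", {})[parts[1]] = " ".join(parts[2:])
    return state

state = execute(utterance_to_dsl("Send 40 euros to Alice please"), {})
print(state)
```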
2402.12249 | 2024-02-19T16:05:28Z | Analysis of Levenshtein Transformer's Decoder and Its Variants | [
"Ruiyang Zhou"
] | Levenshtein transformer (LevT) is a non-autoregressive machine translation
model with high decoding efficiency and comparable translation quality in terms
of BLEU score, due to its parallel decoding and iterative refinement procedure.
Are there any deficiencies in its translations, and what improvements could be
made? In this report, we focus on LevT's decoder and analyse the length of its
decoding results, its subword generation, and the deletion module's capability. We hope
to identify weaknesses of the decoder for future improvements.
We also compare translations of the original LevT, knowledge-distilled LevT,
LevT with translation memory, and the KD-LevT with translation memory to see
how KD and translation memory can help. | [
"cs.CL"
] | false |
2402.12255 | 2024-02-19T16:14:04Z | Shallow Synthesis of Knowledge in GPT-Generated Texts: A Case Study in
Automatic Related Work Composition | [
"Anna Martin-Boyle",
"Aahan Tyagi",
"Marti A. Hearst",
"Dongyeop Kang"
] | Numerous AI-assisted scholarly applications have been developed to aid
different stages of the research process. We present an analysis of AI-assisted
scholarly writing generated with ScholaCite, a tool we built that is designed
for organizing literature and composing Related Work sections for academic
papers. Our evaluation method focuses on the analysis of citation graphs to
assess the structural complexity and inter-connectedness of citations in texts
and involves a three-way comparison between (1) original human-written texts,
(2) purely GPT-generated texts, and (3) human-AI collaborative texts. We find
that GPT-4 can generate reasonable coarse-grained citation groupings to support
human users in brainstorming, but fails to perform detailed synthesis of
related works without human intervention. We suggest that future writing
assistant tools should not be used to draft text independently of the human
author. | [
"cs.CL"
] | false |
2402.12267 | 2024-02-19T16:29:40Z | High-quality Data-to-Text Generation for Severely Under-Resourced
Languages with Out-of-the-box Large Language Models | [
"Michela Lorandi",
"Anya Belz"
] | The performance of NLP methods for severely under-resourced languages cannot
currently hope to match the state of the art in NLP methods for well-resourced
languages. We explore the extent to which pretrained large language models
(LLMs) can bridge this gap, via the example of data-to-text generation for
Irish, Welsh, Breton and Maltese. We test LLMs on these under-resourced
languages and English, in a range of scenarios. We find that LLMs easily set
the state of the art for the under-resourced languages by substantial margins,
as measured by both automatic and human evaluations. For all our languages,
human evaluation shows on-a-par performance with humans for our best systems,
but BLEU scores collapse compared to English, casting doubt on the metric's
suitability for evaluating non-task-specific systems. Overall, our results
demonstrate the great potential of LLMs to bridge the performance gap for
under-resourced languages. | [
"cs.CL"
] | false |
2402.12282 | 2024-02-19T16:50:58Z | Ontology Enhanced Claim Detection | [
"Zehra Melce Hüsünbeyi",
"Tatjana Scheffler"
] | We propose an ontology enhanced model for sentence based claim detection. We
fused ontology embeddings from a knowledge base with BERT sentence embeddings
to perform claim detection for the ClaimBuster and the NewsClaims datasets. Our
ontology enhanced approach showed the best results with these small-sized
unbalanced datasets, compared to other statistical and neural machine learning
models. The experiments demonstrate that adding domain specific features
(either trained word embeddings or knowledge graph metadata) can improve
traditional ML methods. In addition, adding domain knowledge in the form of
ontology embeddings helps avoid the bias encountered in neural network based
models, for example the pure BERT model bias towards larger classes in our
small corpus. | [
"cs.CL"
] | false |
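The fusion idea above boils down to concatenating a sentence embedding with an ontology embedding and feeding the result to a classifier. The sketch below uses a sentence-transformers model as a lightweight stand-in for the paper's BERT sentence embeddings and a random vector as a placeholder for the linked ontology-node embedding, since entity linking depends on the specific knowledge base.

```python
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

class FusionClaimDetector(nn.Module):
    """Claim / no-claim classifier over [sentence embedding ; ontology embedding]."""
    def __init__(self, sent_dim=384, onto_dim=100, hidden=128):
        super().__init__()
        self.clf = nn.Sequential(
            nn.Linear(sent_dim + onto_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, sent_emb, onto_emb):
        return self.clf(torch.cat([sent_emb, onto_emb], dim=-1))

encoder = SentenceTransformer("all-MiniLM-L6-v2")      # 384-dim sentence embeddings
sentence = "The new policy reduced unemployment by 20 percent."
sent_emb = torch.tensor(encoder.encode([sentence]))
onto_emb = torch.randn(1, 100)   # placeholder for a linked ontology-node embedding

logits = FusionClaimDetector()(sent_emb, onto_emb)
print(logits.softmax(-1))        # untrained, so probabilities are arbitrary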
2402.12291 | 2024-02-19T17:05:29Z | KARL: Knowledge-Aware Retrieval and Representations aid Retention and
Learning in Students | [
"Matthew Shu",
"Nishant Balepur",
"Shi Feng",
"Jordan Boyd-Graber"
] | Flashcard schedulers are tools that rely on 1) student models to predict the
flashcards a student knows; and 2) teaching policies to schedule cards based on
these predictions. Existing student models, however, only use flashcard-level
features, like the student's past responses, ignoring the semantic ties of
flashcards. Deep Knowledge Tracing (DKT) models can capture semantic relations
with language models, but are inefficient, lack content-rich datasets for
evaluation, and require robust teaching policies. To address these issues, we
design KARL, a DKT-inspired student model that uses retrieval and BERT
embeddings for efficient and accurate student recall predictions. To test KARL,
we collect a new dataset of diverse study history on trivia questions. KARL
bests existing student models in AUC and calibration error. Finally, we propose
a novel teaching policy that exploits the predictive power of DKT models to
deploy KARL online. Based on 27 learners and 32 6-day study trajectories, KARL
shows the ability to enhance medium-term educational learning, proving its
efficacy for scheduling. | [
"cs.CL"
] | false |
2402.12309 | 2024-02-19T17:30:44Z | TILP: Differentiable Learning of Temporal Logical Rules on Knowledge
Graphs | [
"Siheng Xiong",
"Yuan Yang",
"Faramarz Fekri",
"James Clayton Kerce"
] | Compared with static knowledge graphs, temporal knowledge graphs (tKG), which
can capture the evolution and change of information over time, are more
realistic and general. However, due to the complexity that the notion of time
introduces to rule learning, accurate graph reasoning, e.g.,
predicting new links between entities, is still a difficult problem. In this
paper, we propose TILP, a differentiable framework for temporal logical rules
learning. By designing a constrained random walk mechanism and the introduction
of temporal operators, we ensure the efficiency of our model. We present
temporal features modeling in tKG, e.g., recurrence, temporal order, interval
between pair of relations, and duration, and incorporate it into our learning
process. We compare TILP with state-of-the-art methods on two benchmark
datasets. We show that our proposed framework can improve upon the performance
of baseline methods while providing interpretable results. In particular, we
consider various scenarios in which training samples are limited, data is
biased, and the time range between training and inference is different. In all
these cases, TILP works much better than the state-of-the-art methods. | [
"cs.CL"
] | false |
2402.12332 | 2024-02-19T18:06:02Z | Triple-Encoders: Representations That Fire Together, Wire Together | [
"Justus-Jonas Erker",
"Florian Mai",
"Nils Reimers",
"Gerasimos Spanakis",
"Iryna Gurevych"
] | Search-based dialog models typically re-encode the dialog history at every
turn, incurring high cost. Curved Contrastive Learning, a representation
learning method that encodes relative distances between utterances into the
embedding space via a bi-encoder, has recently shown promising results for
dialog modeling at far superior efficiency. While high efficiency is achieved
through independently encoding utterances, this ignores the importance of
contextualization. To overcome this issue, this study introduces
triple-encoders, which efficiently compute distributed utterance mixtures from
these independently encoded utterances through a novel Hebbian-inspired
co-occurrence learning objective without using any weights. Empirically, we
find that triple-encoders lead to a substantial improvement over bi-encoders,
and even to better zero-shot generalization than single-vector representation
models without requiring re-encoding. Our code/model is publicly available. | [
"cs.CL"
] | false |
2402.12363 | 2024-02-19T18:49:57Z | Emergent Word Order Universals from Cognitively-Motivated Language
Models | [
"Tatsuki Kuribayashi",
"Ryo Ueda",
"Ryo Yoshida",
"Yohei Oseki",
"Ted Briscoe",
"Timothy Baldwin"
] | The world's languages exhibit certain so-called typological or implicational
universals; for example, Subject-Object-Verb (SOV) word order typically employs
postpositions. Explaining the source of such biases is a key goal in
linguistics. We study the word-order universals through a computational
simulation with language models (LMs). Our experiments show that typologically
typical word orders tend to have lower perplexity estimated by LMs with
cognitively plausible biases: syntactic biases, specific parsing strategies,
and memory limitations. This suggests that the interplay of these cognitive
biases and predictability (perplexity) can explain many aspects of word-order
universals. This also showcases the advantage of cognitively-motivated LMs,
which are typically employed in cognitive modeling, in the computational
simulation of language universals. | [
"cs.CL"
] | false |
2402.12368 | 2024-02-19T18:55:16Z | A synthetic data approach for domain generalization of NLI models | [
"Mohammad Javad Hosseini",
"Andrey Petrov",
"Alex Fabrikant",
"Annie Louis"
] | Natural Language Inference (NLI) remains an important benchmark task for
LLMs. NLI datasets are a springboard for transfer learning to other semantic
tasks, and NLI models are standard tools for identifying the faithfulness of
model-generated text. There are several large scale NLI datasets today, and
models have improved greatly by hill-climbing on these collections. Yet their
realistic performance on out-of-distribution/domain data is less
well-understood. We present an in-depth exploration of the problem of domain
generalization of NLI models. We demonstrate a new approach for generating
synthetic NLI data in diverse domains and lengths, so far not covered by
existing training sets. The resulting examples have meaningful premises, the
hypotheses are formed in creative ways rather than simple edits to a few
premise tokens, and the labels have high accuracy. We show that models trained
on this data ($685$K synthetic examples) have the best generalization to
completely new downstream test settings. On the TRUE benchmark, a T5-small
model trained with our data improves around $7\%$ on average compared to
training on the best alternative dataset. The improvements are more pronounced
for smaller models, while still meaningful on a T5 XXL model. We also
demonstrate gains on test sets when in-domain training data is augmented with
our domain-general synthetic data. | [
"cs.CL"
] | false |
2402.12431 | 2024-02-19T19:00:01Z | Understanding Fine-grained Distortions in Reports of Scientific Findings | [
"Amelie Wührl",
"Dustin Wright",
"Roman Klinger",
"Isabelle Augenstein"
] | Distorted science communication harms individuals and society as it can lead
to unhealthy behavior change and decrease trust in scientific institutions.
Given the rapidly increasing volume of science communication in recent years, a
fine-grained understanding of how findings from scientific publications are
reported to the general public, and methods to detect distortions from the
original work automatically, are crucial. Prior work focused on individual
aspects of distortions or worked with unpaired data. In this work, we make
three foundational contributions towards addressing this problem: (1)
annotating 1,600 instances of scientific findings from academic papers paired
with corresponding findings as reported in news articles and tweets with respect to four
characteristics: causality, certainty, generality and sensationalism; (2)
establishing baselines for automatically detecting these characteristics; and
(3) analyzing the prevalence of changes in these characteristics in both
human-annotated and large-scale unlabeled data. Our results show that
scientific findings frequently undergo subtle distortions when reported. Tweets
distort findings more often than science news reports. Detecting fine-grained
distortions automatically poses a challenging task. In our experiments,
fine-tuned task-specific models consistently outperform few-shot LLM prompting. | [
"cs.CL"
] | false |
2402.12483 | 2024-02-19T19:38:58Z | Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions
Without the Question? | [
"Nishant Balepur",
"Abhilasha Ravichander",
"Rachel Rudinger"
] | Multiple-choice question answering (MCQA) is often used to evaluate large
language models (LLMs). To see if MCQA assesses LLMs as intended, we probe if
LLMs can perform MCQA with choices-only prompts, where models must select the
correct answer only from the choices. In three MCQA datasets and four LLMs,
this prompt bests a majority baseline in 11/12 cases, with up to 0.33 accuracy
gain. To help explain this behavior, we conduct an in-depth, black-box analysis
on memorization, choice dynamics, and question inference. Our key findings are
threefold. First, we find no evidence that the choices-only accuracy stems from
memorization alone. Second, priors over individual choices do not fully explain
choices-only accuracy, hinting that LLMs use the group dynamics of choices.
Third, LLMs have some ability to infer a relevant question from choices, and
surprisingly can sometimes even match the original question. We hope to
motivate the use of stronger baselines in MCQA benchmarks, the design of robust
MCQA datasets, and further efforts to explain LLM decision-making. | [
"cs.CL"
] | false |
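The "choices-only" probe described above is easy to reproduce in spirit: drop the question and show a model only the answer options. The prompt wording and model name below are illustrative assumptions, not the authors' exact setup.

```python
from openai import OpenAI

def choices_only_prompt(choices):
    """Build a probe prompt that hides the question and shows only the options."""
    letters = "ABCD"
    listing = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(choices))
    return (
        "The question is hidden. Using only the answer choices below, "
        "pick the letter of the most likely correct answer.\n"
        f"{listing}\nAnswer with a single letter."
    )

client = OpenAI()  # assumes OPENAI_API_KEY is set
choices = ["Paris", "The mitochondria", "1945", "All of the above"]
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": choices_only_prompt(choices)}],
)
print(resp.choices[0].message.content)
```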
2402.12486 | 2024-02-19T19:49:29Z | Do Pre-Trained Language Models Detect and Understand Semantic
Underspecification? Ask the DUST! | [
"Frank Wildenburg",
"Michael Hanna",
"Sandro Pezzelle"
] | In everyday language use, speakers frequently utter and interpret sentences
that are semantically underspecified, namely, whose content is insufficient to
fully convey their message or interpret them univocally. For example, to
interpret the underspecified sentence "Don't spend too much", which leaves
implicit what (not) to spend, additional linguistic context or outside
knowledge is needed. In this work, we propose a novel Dataset of semantically
Underspecified Sentences grouped by Type (DUST) and use it to study whether
pre-trained language models (LMs) correctly identify and interpret
underspecified sentences. We find that newer LMs are reasonably able to
identify underspecified sentences when explicitly prompted. However,
interpreting them correctly is much harder for any LMs. Our experiments show
that when interpreting underspecified sentences, LMs exhibit little
uncertainty, contrary to what theoretical accounts of underspecification would
predict. Overall, our study reveals limitations in current models' processing
of sentence semantics and highlights the importance of using naturalistic data
and communicative scenarios when evaluating LMs' language capabilities. | [
"cs.CL"
] | false |
2402.12501 | 2024-02-19T20:08:48Z | Your Vision-Language Model Itself Is a Strong Filter: Towards
High-Quality Instruction Tuning with Data Selection | [
"Ruibo Chen",
"Yihan Wu",
"Lichang Chen",
"Guodong Liu",
"Qi He",
"Tianyi Xiong",
"Chenxi Liu",
"Junfeng Guo",
"Heng Huang"
] | Data selection in instruction tuning emerges as a pivotal process for
acquiring high-quality data and training instruction-following large language
models (LLMs), but it is still a new and unexplored research area for
vision-language models (VLMs). Existing data selection approaches on LLMs
either rely on single unreliable scores, or use downstream tasks for selection,
which is time-consuming and can lead to potential over-fitting on the chosen
evaluation datasets. To address this challenge, we introduce a novel dataset
selection method, Self-Filter, that utilizes the VLM itself as a filter. This
approach is inspired by the observation that VLMs benefit from training with
the most challenging instructions. Self-Filter operates in two stages. In the
first stage, we devise a scoring network to evaluate the difficulty of training
instructions, which is co-trained with the VLM. In the second stage, we use the
trained score net to measure the difficulty of each instruction, select the
most challenging samples, and penalize similar samples to encourage diversity.
Comprehensive experiments on LLaVA and MiniGPT-4 show that Self-Filter can
reach better results compared to full-data settings with merely about 15% of the
samples, and can achieve superior performance against competitive baselines. | [
"cs.CL"
] | false |
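The second-stage selection described above can be sketched as a greedy loop: rank instructions by a difficulty score and keep the hardest ones while penalizing near-duplicates to encourage diversity. In Self-Filter the scores and embeddings come from the co-trained scoring network; here both are random placeholders, and the penalty scheme is a rough illustrative stand-in.

```python
import numpy as np

def select_hard_and_diverse(scores, embeddings, k, penalty=0.5):
    """Greedy selection: highest difficulty first, discounted by cosine
    similarity to already-selected samples to keep the subset diverse."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    selected, adjusted = [], scores.astype(float).copy()
    for _ in range(k):
        idx = int(np.argmax(adjusted))
        selected.append(idx)
        adjusted[idx] = -np.inf                       # never pick it again
        sims = emb @ emb[idx]                         # similarity to the new pick
        adjusted -= penalty * np.clip(sims, 0, None)  # down-weight similar samples
    return selected

rng = np.random.default_rng(0)
scores = rng.random(1000)                 # placeholder difficulty scores
embeddings = rng.normal(size=(1000, 64))  # placeholder instruction embeddings
keep = select_hard_and_diverse(scores, embeddings, k=150)   # ~15% of the pool
print(len(keep), keep[:5])
```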
2402.12545 | 2024-02-19T21:12:14Z | TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness | [
"Danna Zheng",
"Danyang Liu",
"Mirella Lapata",
"Jeff Z. Pan"
] | Large Language Models (LLMs) have demonstrated impressive capabilities across
various domains, prompting a surge in their practical applications. However,
concerns have arisen regarding the trustworthiness of LLMs' outputs,
particularly in closed-book question-answering tasks, where non-experts may
struggle to identify inaccuracies due to the absence of contextual or ground
truth information. This paper introduces TrustScore, a framework based on the
concept of Behavioral Consistency, which evaluates whether an LLM's response
aligns with its intrinsic knowledge. Additionally, TrustScore can seamlessly
integrate with fact-checking methods, which assess alignment with external
knowledge sources. The experimental results show that TrustScore achieves
strong correlations with human judgments, surpassing existing reference-free
metrics, and achieving results on par with reference-based metrics. | [
"cs.CL"
] | false |
2402.12557 | 2024-02-19T21:32:19Z | Creating a Fine Grained Entity Type Taxonomy Using LLMs | [
"Michael Gunn",
"Dohyun Park",
"Nidhish Kamath"
] | In this study, we investigate the potential of GPT-4 and its advanced
iteration, GPT-4 Turbo, in autonomously developing a detailed entity type
taxonomy. Our objective is to construct a comprehensive taxonomy, starting from
a broad classification of entity types - including objects, time, locations,
organizations, events, actions, and subjects - similar to existing manually
curated taxonomies. This classification is then progressively refined through
iterative prompting techniques, leveraging GPT-4's internal knowledge base. The
result is an extensive taxonomy comprising over 5000 nuanced entity types,
which demonstrates remarkable quality upon subjective evaluation.
We employed a straightforward yet effective prompting strategy, enabling the
taxonomy to be dynamically expanded. The practical applications of this
detailed taxonomy are diverse and significant. It facilitates the creation of
new, more intricate branches through pattern-based combinations and notably
enhances information extraction tasks, such as relation extraction and event
argument extraction. Our methodology not only introduces an innovative approach
to taxonomy creation but also opens new avenues for applying such taxonomies in
various computational linguistics and AI-related fields. | [
"cs.CL"
] | false |
2402.12593 | 2024-02-19T23:18:18Z | Standardize: Aligning Language Models with Expert-Defined Standards for
Content Generation | [
"Joseph Marvin Imperial",
"Gail Forey",
"Harish Tayyar Madabushi"
] | Domain experts across engineering, healthcare, and education follow strict
standards for producing quality content such as technical manuals, medication
instructions, and children's reading materials. However, current works in
controllable text generation have yet to explore using these standards as
references for control. Towards this end, we introduce Standardize, a
retrieval-style in-context learning-based framework to guide large language
models to align with expert-defined standards. Focusing on English language
standards in the education domain as a use case, we consider the Common
European Framework of Reference for Languages (CEFR) and Common Core Standards
(CCS) for the task of open-ended content generation. Our findings show that
models can gain a 40% to 100% increase in precise accuracy for Llama2 and GPT-4,
respectively, demonstrating that the use of knowledge artifacts extracted from
standards and integrating them in the generation process can effectively guide
models to produce better standard-aligned content. | [
"cs.CL"
] | false |
2402.12605 | 2024-02-19T23:58:20Z | What is a word? | [
"Elliot Murphy"
] | In order to design strong paradigms for isolating lexical access and
semantics, we need to know what a word is. Surprisingly few linguists and
philosophers have a clear model of what a word is, even though words impact
basically every aspect of human life. Researchers that regularly publish
academic papers about language often rely on outdated, or inaccurate,
assumptions about wordhood. This short pedagogical document outlines what the
lexicon is most certainly not (though is often mistakenly taken to be), what it
might be (based on current good theories), and what some implications for
experimental design are. | [
"cs.CL"
] | false |
2402.11746 | 2024-02-19T00:18:09Z | Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned
Language Models through Task Arithmetic | [
"Rishabh Bhardwaj",
"Do Duc Anh",
"Soujanya Poria"
] | Aligned language models face a significant limitation as their fine-tuning
often results in compromised safety. To tackle this, we propose a simple method
RESTA that performs LLM safety realignment. RESTA stands for REstoring Safety
through Task Arithmetic. At its core, it involves a simple arithmetic addition
of a safety vector to the weights of the compromised model. We demonstrate the
effectiveness of RESTA in both parameter-efficient and full fine-tuning,
covering a wide range of downstream tasks, including instruction following in
Chinese, English, and Hindi, as well as problem-solving capabilities in Code
and Math. We also showcase the generalizability of RESTA on three existing
safety evaluation benchmarks and a multilingual benchmark dataset proposed as a
part of this work, consisting of 550 harmful questions covering 11 categories,
each with 5 sub-categories of harm. Overall, RESTA decreases the harmfulness of
the compromised model from 18.6% to 5.1% and from 9.2% to 1.5% in
parameter-efficient and full fine-tuning, respectively, while maintaining most
of the model's performance on the task. We release the source codes at:
https://github.com/declare-lab/resta. | [
"cs.CL",
"cs.AI"
] | false |
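The core operation above is a single task-arithmetic step: add a safety vector back onto the weights of the compromised model. The sketch below assumes, for illustration, that the safety vector is the weight difference between an aligned checkpoint and an unaligned counterpart; the scaling factor and the toy tensors are placeholders, not the released RESTA recipe.

```python
import torch

def apply_safety_vector(compromised: dict, aligned: dict, unaligned: dict,
                        scale: float = 1.0) -> dict:
    """RESTA-style sketch: W_new = W_compromised + scale * (W_aligned - W_unaligned)."""
    return {
        name: compromised[name] + scale * (aligned[name] - unaligned[name])
        for name in compromised
    }

# Toy tensors standing in for three checkpoints with identical architectures.
shape = (8, 8)
unaligned   = {"layer.weight": torch.randn(shape)}
aligned     = {"layer.weight": unaligned["layer.weight"] + 0.05 * torch.randn(shape)}
compromised = {"layer.weight": aligned["layer.weight"] + 0.05 * torch.randn(shape)}

restored = apply_safety_vector(compromised, aligned, unaligned, scale=1.0)
```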
2402.11757 | 2024-02-19T01:11:44Z | Large Language Models for Stemming: Promises, Pitfalls and Failures | [
"Shuai Wang",
"Shengyao Zhuang",
"Guido Zuccon"
] | Text stemming is a natural language processing technique that is used to
reduce words to their base form, also known as the root form. The use of
stemming in IR has been shown to often improve the effectiveness of
keyword-matching models such as BM25. However, traditional stemming methods,
focusing solely on individual terms, overlook the richness of contextual
information. Recognizing this gap, in this paper, we investigate the promising
idea of using large language models (LLMs) to stem words by leveraging their
capability for context understanding. In this respect, we identify three
avenues, each characterised by different trade-offs in terms of computational
cost, effectiveness and robustness: (1) use LLMs to stem the vocabulary for a
collection, i.e., the set of unique words that appear in the collection
(vocabulary stemming), (2) use LLMs to stem each document separately
(contextual stemming), and (3) use LLMs to extract from each document entities
that should not be stemmed, then use vocabulary stemming to stem the rest of
the terms (entity-based contextual stemming). Through a series of empirical
experiments, we compare the use of LLMs for stemming with that of traditional
lexical stemmers such as Porter and Krovetz for English text. We find that
while vocabulary stemming and contextual stemming fail to achieve higher
effectiveness than traditional stemmers, entity-based contextual stemming can
achieve a higher effectiveness than using Porter stemmer alone, under specific
conditions. | [
"cs.IR",
"cs.CL"
] | false |
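The first avenue above ("vocabulary stemming") can be sketched very simply: collect the collection's unique words, ask an LLM to map each batch of words to root forms, and apply that mapping to documents. The prompt, batching, and model choice below are illustrative assumptions, not the paper's exact procedure.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def llm_stem_vocabulary(vocab, batch_size=100):
    """Ask the LLM for a word -> stem mapping over the collection vocabulary."""
    mapping = {}
    words = sorted(vocab)
    for i in range(0, len(words), batch_size):
        batch = words[i:i + batch_size]
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": "Return a JSON object mapping each word to its "
                                  "stem (root form): " + json.dumps(batch)}],
            response_format={"type": "json_object"},
        )
        mapping.update(json.loads(resp.choices[0].message.content))
    return mapping

docs = ["runners were running happily", "the runner runs"]
vocab = {w for d in docs for w in d.split()}
stems = llm_stem_vocabulary(vocab)
stemmed_docs = [" ".join(stems.get(w, w) for w in d.split()) for d in docs]
print(stemmed_docs)
```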
2402.11782 | 2024-02-19T02:15:34Z | What Evidence Do Language Models Find Convincing? | [
"Alexander Wan",
"Eric Wallace",
"Dan Klein"
] | Retrieval-augmented language models are being increasingly tasked with
subjective, contentious, and conflicting queries such as "is aspartame linked
to cancer". To resolve these ambiguous queries, one must search through a large
range of websites and consider "which, if any, of this evidence do I find
convincing?". In this work, we study how LLMs answer this question. In
particular, we construct ConflictingQA, a dataset that pairs controversial
queries with a series of real-world evidence documents that contain different
facts (e.g., quantitative results), argument styles (e.g., appeals to
authority), and answers (Yes or No). We use this dataset to perform sensitivity
and counterfactual analyses to explore which text features most affect LLM
predictions. Overall, we find that current models rely heavily on the relevance
of a website to the query, while largely ignoring stylistic features that
humans find important such as whether a text contains scientific references or
is written with a neutral tone. Taken together, these results highlight the
importance of RAG corpus quality (e.g., the need to filter misinformation), and
possibly even a shift in how LLMs are trained to better align with human
judgements. | [
"cs.CL",
"cs.LG"
] | false |
2402.11794 | 2024-02-19T02:48:44Z | Unveiling the Magic: Investigating Attention Distillation in
Retrieval-augmented Generation | [
"Zizhong Li",
"Haopeng Zhang",
"Jiawei Zhang"
] | The retrieval-augmented generation framework can address the limitations of large
language models by enabling real-time knowledge updates for more accurate
answers. An efficient approach in the training phase of retrieval-augmented models
is attention distillation, which uses attention scores as a supervision signal
instead of manually annotated query-document pairs. Despite its growing
popularity, the detailed mechanisms behind the success of attention
distillation remain unexplored, particularly the specific patterns it leverages
to benefit training. In this paper, we address this gap by conducting a
comprehensive review of attention distillation workflow and identifying key
factors influencing the learning quality of retrieval-augmented language
models. We further propose indicators for optimizing models' training methods
and avoiding ineffective training. | [
"cs.CL",
"cs.IR"
] | false |
2402.11827 | 2024-02-19T04:41:31Z | Ask Optimal Questions: Aligning Large Language Models with Retriever's
Preference in Conversational Search | [
"Chanwoong Yoon",
"Gangwoo Kim",
"Byeongguk Jeon",
"Sungdong Kim",
"Yohan Jo",
"Jaewoo Kang"
] | Conversational search, unlike single-turn retrieval tasks, requires
understanding the current question within a dialogue context. The common
approach of rewrite-then-retrieve aims to decontextualize questions to be
self-sufficient for off-the-shelf retrievers, but most existing methods produce
sub-optimal query rewrites due to the limited ability to incorporate signals
from the retrieval results. To overcome this limitation, we present a novel
framework RetPO (Retriever's Preference Optimization), which is designed to
optimize a language model (LM) for reformulating search queries in line with
the preferences of the target retrieval systems. The process begins by
prompting a large LM to produce various potential rewrites and then collects
retrieval performance for these rewrites as the retrievers' preferences.
Through the process, we construct a large-scale dataset called RF collection,
containing Retrievers' Feedback on over 410K query rewrites across 12K
conversations. Furthermore, we fine-tune a smaller LM using this dataset to
align it with the retrievers' preferences as feedback. The resulting model
achieves state-of-the-art performance on two recent conversational search
benchmarks, significantly outperforming existing baselines, including GPT-3.5. | [
"cs.IR",
"cs.CL"
] | false |
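The preference-collection step above can be sketched as follows: prompt a large LM for several candidate rewrites of the contextualized question, score each rewrite by the retrieval quality it yields, and keep a best/worst pair as preference data for fine-tuning a smaller rewriter. The retrieval metric is a stub, and the prompts and model name are illustrative, not the RetPO pipeline itself.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def candidate_rewrites(history: str, question: str, n: int = 4) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini", n=n, temperature=1.0,
        messages=[{"role": "user",
                   "content": f"Dialogue so far:\n{history}\n"
                              f"Rewrite this question so it is self-contained: {question}"}],
    )
    return [c.message.content.strip() for c in resp.choices]

def retrieval_score(rewrite: str, gold_passage_id: str) -> float:
    """Stub: run the rewrite through the target retriever and return, e.g.,
    the reciprocal rank of the gold passage. Placeholder value here."""
    return float(len(rewrite) % 7) / 7.0

history = "User: Who wrote Dune?\nAssistant: Frank Herbert."
question = "When was it published?"
rewrites = candidate_rewrites(history, question)
ranked = sorted(rewrites, key=lambda r: retrieval_score(r, "gold-42"), reverse=True)
preference_pair = {"prompt": (history, question),
                   "chosen": ranked[0], "rejected": ranked[-1]}
print(preference_pair)
```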
2402.11886 | 2024-02-19T06:54:55Z | The Colorful Future of LLMs: Evaluating and Improving LLMs as Emotional
Supporters for Queer Youth | [
"Shir Lissak",
"Nitay Calderon",
"Geva Shenkman",
"Yaakov Ophir",
"Eyal Fruchter",
"Anat Brunstein Klomek",
"Roi Reichart"
] | Queer youth face increased mental health risks, such as depression, anxiety,
and suicidal ideation. Hindered by negative stigma, they often avoid seeking
help and rely on online resources, which may provide incompatible information.
Although access to a supportive environment and reliable information is
invaluable, many queer youth worldwide have no access to such support. However,
this could soon change due to the rapid adoption of Large Language Models
(LLMs) such as ChatGPT. This paper aims to comprehensively explore the
potential of LLMs to revolutionize emotional support for queers. To this end,
we conduct a qualitative and quantitative analysis of LLM's interactions with
queer-related content. To evaluate response quality, we develop a novel
ten-question scale that is inspired by psychological standards and expert
input. We apply this scale to score several LLMs and human comments to posts
where queer youth seek advice and share experiences. We find that LLM responses
are supportive and inclusive, outscoring humans. However, they tend to be
generic, not empathetic enough, and lack personalization, resulting in
unreliable and potentially harmful advice. We discuss these challenges,
demonstrate that a dedicated prompt can improve the performance, and propose a
blueprint of an LLM-supporter that actively (but sensitively) seeks user
context to provide personalized, empathetic, and reliable responses. Our
annotated dataset is available for further research. | [
"cs.CL",
"cs.AI"
] | false |
2402.11891 | 2024-02-19T07:06:52Z | FeB4RAG: Evaluating Federated Search in the Context of Retrieval
Augmented Generation | [
"Shuai Wang",
"Ekaterina Khramtsova",
"Shengyao Zhuang",
"Guido Zuccon"
] | Federated search systems aggregate results from multiple search engines,
selecting appropriate sources to enhance result quality and align with user
intent. With the increasing uptake of Retrieval-Augmented Generation (RAG)
pipelines, federated search can play a pivotal role in sourcing relevant
information across heterogeneous data sources to generate informed responses.
However, existing datasets, such as those developed in the past TREC FedWeb
tracks, predate the RAG paradigm shift and lack representation of modern
information retrieval challenges. To bridge this gap, we present FeB4RAG, a
novel dataset specifically designed for federated search within RAG frameworks.
This dataset, derived from 16 sub-collections of the widely used BEIR
benchmarking collection, includes 790 information requests (akin to
conversational queries) tailored for chatbot applications, along with top
results returned by each resource and associated LLM-derived relevance
judgements. Additionally, to support the need for this collection, we
demonstrate the impact on response generation of a high quality federated
search system for RAG compared to a naive approach to federated search. We do
so by comparing answers generated through the RAG pipeline through a
qualitative side-by-side comparison. Our collection fosters and supports the
development and evaluation of new federated search methods, especially in the
context of RAG pipelines. | [
"cs.IR",
"cs.CL"
] | false |
2402.11903 | 2024-02-19T07:38:57Z | SoLA: Solver-Layer Adaption of LLM for Better Logic Reasoning | [
"Yu Zhang",
"Hui-Ling Zhen",
"Zehua Pei",
"Yingzhao Lian",
"Lihao Yin",
"Mingxuan Yuan",
"Bei Yu"
] | Considering the challenges faced by large language models (LLMs) on logical
reasoning, prior efforts have sought to transform problem-solving through tool
learning. While progress has been made on small-scale problems, solving
industrial cases remains difficult due to their large scale and intricate
expressions. In this paper, we propose a novel solver-layer adaptation (SoLA)
method, where we introduce a solver as a new layer of the LLM to differentially
guide solutions towards satisfiability. In SoLA, LLM aims to comprehend the
search space described in natural language and identify local solutions of the
highest quality, while the solver layer focuses solely on constraints not
satisfied by the initial solution. Leveraging MaxSAT as a bridge, we define
forward and backward transfer gradients, enabling the final model to converge
to a satisfied solution or prove unsatisfiability. The backdoor theory ensures
that SoLA can obtain accurate solutions within polynomial loops. We evaluate
the performance of SoLA on various datasets and empirically demonstrate its
consistent outperformance against existing symbolic solvers (including Z3 and
Kissat) and tool-learning methods in terms of efficiency in large-scale
problem-solving. | [
"cs.CL",
"cs.AI"
] | false |
2402.11934 | 2024-02-19T08:22:51Z | Team QUST at SemEval-2024 Task 8: A Comprehensive Study of Monolingual
and Multilingual Approaches for Detecting AI-generated Text | [
"Xiaoman Xu",
"Xiangrun Li",
"Taihang Wang",
"Jianxiang Tian",
"Ye Jiang"
] | This paper presents the participation of team QUST in SemEval-2024 Task 8. We
first performed data augmentation and cleaning on the dataset to enhance model
training efficiency and accuracy. In the monolingual task, we evaluated
traditional deep-learning methods, multiscale positive-unlabeled framework
(MPU), fine-tuning, adapters and ensemble methods. Then, we selected the
top-performing models based on their accuracy from the monolingual models and
evaluated them in subtasks A and B. The final model construction employed a
stacking ensemble that combined fine-tuning with MPU. Our system achieved 8th
(scored 8th in terms of accuracy, officially ranked 13th) place in the official
test set in multilingual settings of subtask A. We release our system code
at: https://github.com/warmth27/SemEval2024_QUST | [
"cs.CL",
"cs.AI"
] | false |
2402.11955 | 2024-02-19T08:52:12Z | Analysis of Multidomain Abstractive Summarization Using Salience
Allocation | [
"Tohida Rehman",
"Raghubir Bose",
"Soumik Dey",
"Samiran Chattopadhyay"
] | This paper explores the realm of abstractive text summarization through the
lens of the SEASON (Salience Allocation as Guidance for Abstractive
SummarizatiON) technique, a model designed to enhance summarization by
leveraging salience allocation techniques. The study evaluates SEASON's
efficacy by comparing it with prominent models like BART, PEGASUS, and
ProphetNet, all fine-tuned for various text summarization tasks. The assessment
is conducted using diverse datasets including CNN/Dailymail, SAMSum, and
Financial-news based Event-Driven Trading (EDT), with a specific focus on a
financial dataset containing a substantial volume of news articles from
2020/03/01 to 2021/05/06. This paper employs various evaluation metrics such as
ROUGE, METEOR, BERTScore, and MoverScore to evaluate the performance of these
models fine-tuned for generating abstractive summaries. The analysis of these
metrics offers a thorough insight into the strengths and weaknesses
demonstrated by each model in summarizing the news, dialogue, and
financial text datasets. The results presented in this paper not only contribute
to the evaluation of the SEASON model's effectiveness but also illuminate the
intricacies of salience allocation techniques across various types of datasets. | [
"cs.CL",
"cs.AI"
] | false |
2402.12022 | 2024-02-19T10:31:53Z | Distilling Large Language Models for Text-Attributed Graph Learning | [
"Bo Pan",
"Zheng Zhang",
"Yifei Zhang",
"Yuntong Hu",
"Liang Zhao"
] | Text-Attributed Graphs (TAGs) are graphs of connected textual documents.
Graph models can efficiently learn TAGs, but their training heavily relies on
human-annotated labels, which are scarce or even unavailable in many
applications. Large language models (LLMs) have recently demonstrated
remarkable capabilities in few-shot and zero-shot TAG learning, but they suffer
from scalability, cost, and privacy issues. Therefore, in this work, we focus
on synergizing LLMs and graph models with their complementary strengths by
distilling the power of LLMs to a local graph model on TAG learning. To address
the inherent gaps between LLMs (generative models for texts) and graph models
(discriminative models for graphs), we propose first to let LLMs teach an
interpreter with rich textual rationale and then let a student model mimic the
interpreter's reasoning without LLMs' textual rationale. Extensive experiments
validate the efficacy of our proposed framework. | [
"cs.CL",
"cs.LG"
] | false |
2402.12046 | 2024-02-19T10:59:29Z | Citation Amnesia: NLP and Other Academic Fields Are in a Citation Age
Recession | [
"Jan Philip Wahle",
"Terry Ruas",
"Mohamed Abdalla",
"Bela Gipp",
"Saif M. Mohammad"
] | This study examines the tendency to cite older work across 20 fields of study
over 43 years (1980--2023). We put NLP's propensity to cite older work in the
context of these 20 other fields to analyze whether NLP shows similar temporal
citation patterns to these other fields over time or whether differences can be
observed. Our analysis, based on a dataset of approximately 240 million papers,
reveals a broader scientific trend: many fields have markedly declined in
citing older works (e.g., psychology, computer science). We term this decline a
'citation age recession', analogous to how economists define periods of reduced
economic activity. The trend is strongest in NLP and ML research (-12.8% and
-5.5% in citation age from previous peaks). Our results suggest that citing
more recent works is not directly driven by the growth in publication rates
(-3.4% across fields; -5.2% in humanities; -5.5% in formal sciences) -- even
when controlling for an increase in the volume of papers. Our findings raise
questions about the scientific community's engagement with past literature,
particularly for NLP, and the potential consequences of neglecting older but
relevant research. The data and a demo showcasing our results are publicly
available. | [
"cs.DL",
"cs.CL"
] | false |
2402.12071 | 2024-02-19T11:48:09Z | EmoBench: Evaluating the Emotional Intelligence of Large Language Models | [
"Sahand Sabour",
"Siyang Liu",
"Zheyuan Zhang",
"June M. Liu",
"Jinfeng Zhou",
"Alvionna S. Sunaryo",
"Juanzi Li",
"Tatia M. C. Lee",
"Rada Mihalcea",
"Minlie Huang"
] | Recent advances in Large Language Models (LLMs) have highlighted the need for
robust, comprehensive, and challenging benchmarks. Yet, research on evaluating
their Emotional Intelligence (EI) is considerably limited. Existing benchmarks
have two major shortcomings: first, they mainly focus on emotion recognition,
neglecting essential EI capabilities such as emotion regulation and thought
facilitation through emotion understanding; second, they are primarily
constructed from existing datasets, which include frequent patterns, explicit
information, and annotation errors, leading to unreliable evaluation. We
propose EmoBench, a benchmark that draws upon established psychological
theories and proposes a comprehensive definition for machine EI, including
Emotional Understanding and Emotional Application. EmoBench includes a set of
400 hand-crafted questions in English and Chinese, which are meticulously
designed to require thorough reasoning and understanding. Our findings reveal a
considerable gap between the EI of existing LLMs and the average human,
highlighting a promising direction for future research. Our code and data will
be publicly available from https://github.com/Sahandfer/EmoBench. | [
"cs.CL",
"cs.AI"
] | false |
2402.12091 | 2024-02-19T12:12:35Z | Do Large Language Models Understand Logic or Just Mimick Context? | [
"Junbing Yan",
"Chengyu Wang",
"Jun Huang",
"Wei Zhang"
] | Over the past few years, the abilities of large language models (LLMs) have
received extensive attention, which have performed exceptionally well in
complicated scenarios such as logical reasoning and symbolic inference. A
significant factor contributing to this progress is the benefit of in-context
learning and few-shot prompting. However, the reasons behind the success of
such models using contextual reasoning have not been fully explored. Do LLMs
genuinely understand logical rules and use them to draw inferences, or do they
"guess" the answers by learning a type of probabilistic mapping through context? This paper
investigates the reasoning capabilities of LLMs on two logical reasoning
datasets by using counterfactual methods to replace context text and modify
logical concepts. Based on our analysis, it is found that LLMs do not truly
understand logical rules; rather, in-context learning has simply enhanced the
likelihood of these models arriving at the correct answers. If one alters
certain words in the context text or changes the concepts of logical terms, the
outputs of LLMs can be significantly disrupted, leading to counter-intuitive
responses. This work provides critical insights into the limitations of LLMs,
underscoring the need for more robust mechanisms to ensure reliable logical
reasoning in LLMs. | [
"cs.CL",
"cs.AI"
] | false |
2402.12102 | 2024-02-19T12:45:52Z | Is It a Free Lunch for Removing Outliers during Pretraining? | [
"Baohao Liao",
"Christof Monz"
] | With the growing size of large language models, the role of quantization
becomes increasingly significant. However, outliers present in weights or
activations notably influence the performance of quantized models. Recently,
\citet{qtransformer} introduced a novel softmax function aimed at pretraining
models in an outlier-free manner, thereby enhancing their suitability for
quantization. Interestingly, we observed that such an approach leads to
performance degradation in full precision. Building on this insight, we enhance
the method by ensuring its normalization is invariant to sequence length, a
crucial factor for bridging the gap between pretraining and fine-tuning.
Moreover, this improved method also facilitates successful pretraining of
causal language models. | [
"cs.CL",
"cs.AI"
] | false |
2402.12147 | 2024-02-19T14:00:35Z | End-to-end multilingual fact-checking at scale | [
"Vinay Setty"
] | In this article, we describe how to perform end-to-end fact-checking in
over 100 languages using Factiverse AI models. We also show through an
experimental benchmark that fine-tuned models tailored for fact-checking tasks
outperform Large Language Models such as GPT-4, GPT-3.5-Turbo, and Mistral-7b. | [
"cs.CL",
"cs.AI"
] | false |
2402.12150 | 2024-02-19T14:02:22Z | Your Large Language Model is Secretly a Fairness Proponent and You
Should Prompt it Like One | [
"Tianlin Li",
"Xiaoyu Zhang",
"Chao Du",
"Tianyu Pang",
"Qian Liu",
"Qing Guo",
"Chao Shen",
"Yang Liu"
] | The widespread adoption of large language models (LLMs) underscores the
urgent need to ensure their fairness. However, LLMs frequently present dominant
viewpoints while ignoring alternative perspectives from minority parties,
resulting in potential biases. We hypothesize that these fairness-violating
behaviors occur because LLMs express their viewpoints using a human personality
that represents the majority of the training data. In response, we validate
that prompting LLMs with specific roles can elicit diverse viewpoints. Building
on this observation, we develop FairThinking,
a pipeline designed to automatically generate roles that enable LLMs to
articulate diverse perspectives for fair expressions. To evaluate FairThinking,
we create a dataset with a thousand items covering three fairness-related
topics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to
demonstrate its superior performance. | [
"cs.CL",
"cs.AI",
"I.2; J.4"
] | false |
2402.12275 | 2024-02-19T16:39:18Z | WorldCoder, a Model-Based LLM Agent: Building World Models by Writing
Code and Interacting with the Environment | [
"Hao Tang",
"Darren Key",
"Kevin Ellis"
] | We give a model-based agent that builds a Python program representing its
knowledge of the world based on its interactions with the environment. The
world model tries to explain its interactions, while also being optimistic
about what reward it can achieve. We do this by extending work on program
synthesis via LLMs. We study our agent on gridworlds, finding our approach is
more sample-efficient compared to deep RL, and more compute-efficient compared
to ReAct-style agents. | [
"cs.AI",
"cs.CL"
] | false |
2402.12279 | 2024-02-19T16:43:57Z | Key ingredients for effective zero-shot cross-lingual knowledge transfer
in generative tasks | [
"Nadezhda Chirkova",
"Vassilina Nikoulina"
] | Zero-shot cross-lingual generation implies finetuning a multilingual
pretrained language model on a generation task in one language and then using
it to make predictions for this task in other languages. Previous works have
noted the frequent problem of generation in the wrong language and proposed
approaches to address it, usually using mT5 as a backbone model. In this work
we compare various approaches proposed in the literature in unified settings,
also including alternative backbone models, namely mBART and NLLB-200. We first
underline the importance of tuning the learning rate used for finetuning, which
substantially alleviates the problem of generation in the wrong language. Then,
we show that with careful learning rate tuning, simple full finetuning of the
model is a very strong baseline, and alternative
approaches bring only marginal improvements. Finally, we find that mBART
performs similarly to mT5 of the same size, and NLLB-200 can be competitive in
some cases. Our final models reach the performance of the approach based on
data translation which is usually considered as an upper baseline for zero-shot
cross-lingual generation. | [
"cs.CL",
"cs.AI"
] | false |
2402.12280 | 2024-02-19T16:47:04Z | Adaptive Skeleton Graph Decoding | [
"Shuowei Jin",
"Yongji Wu",
"Haizhong Zheng",
"Qingzhao Zhang",
"Matthew Lentz",
"Z. Morley Mao",
"Atul Prakash",
"Feng Qian",
"Danyang Zhuo"
] | Large language models (LLMs) have seen significant adoption for natural
language tasks, owing their success to massive numbers of model parameters
(e.g., 70B+); however, LLM inference incurs significant computation and memory
costs. Recent approaches propose parallel decoding strategies, such as
Skeleton-of-Thought (SoT), to improve performance by breaking prompts down into
sub-problems that can be decoded in parallel; however, they often suffer from
reduced response quality. Our key insight is that we can request additional
information, specifically dependencies and difficulty, when generating the
sub-problems to improve both response quality and performance. In this paper,
we propose Skeleton Graph Decoding (SGD), which uses dependencies exposed
between sub-problems to support information forwarding between dependent
sub-problems for improved quality while exposing parallelization opportunities
for decoding independent sub-problems. Additionally, we leverage difficulty
estimates for each sub-problem to select an appropriately-sized model,
improving performance without significantly reducing quality. Compared to
standard autoregressive generation and SoT, SGD achieves a 1.69x speedup while
improving quality by up to 51%. | [
"cs.CL",
"cs.AI"
] | false |
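A minimal sketch of the dependency-aware scheduling idea described in the abstract above, written as plain Python: sub-problems whose dependencies are already resolved are decoded concurrently, and resolved answers are forwarded into the prompts of dependent sub-problems. The `decode` stub, the dictionary-based graph representation, and the prompt format are illustrative assumptions; the difficulty-based model selection mentioned in the abstract is not shown.

```python
from concurrent.futures import ThreadPoolExecutor

def decode(prompt: str) -> str:
    # Placeholder for an actual LLM call (assumption, not the paper's implementation).
    return f"<answer to: {prompt[:40]}...>"

def sgd_decode(subproblems: dict[str, str], deps: dict[str, list[str]]) -> dict[str, str]:
    """Decode sub-problems wave by wave: nodes whose dependencies are all
    resolved run in parallel; their answers are forwarded to dependents."""
    answers: dict[str, str] = {}
    pending = set(subproblems)
    with ThreadPoolExecutor() as pool:
        while pending:
            ready = [n for n in pending if all(d in answers for d in deps.get(n, []))]
            if not ready:
                raise ValueError("cyclic or unsatisfiable dependencies")
            prompts = []
            for n in ready:
                context = "\n".join(f"{d}: {answers[d]}" for d in deps.get(n, []))
                prompts.append(f"{context}\n{subproblems[n]}" if context else subproblems[n])
            for n, ans in zip(ready, pool.map(decode, prompts)):
                answers[n] = ans
                pending.remove(n)
    return answers

# Example: B and C depend on A; D depends on B and C, so B and C decode in parallel.
subproblems = {"A": "Outline the plan.", "B": "Expand step 1.",
               "C": "Expand step 2.", "D": "Combine the steps."}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(sgd_decode(subproblems, deps))
```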
2402.12298 | 2024-02-19T17:23:10Z | Is Open-Source There Yet? A Comparative Study on Commercial and
Open-Source LLMs in Their Ability to Label Chest X-Ray Reports | [
"Felix J. Dorfner",
"Liv Jürgensen",
"Leonhard Donle",
"Fares Al Mohamad",
"Tobias R. Bodenmann",
"Mason C. Cleveland",
"Felix Busch",
"Lisa C. Adams",
"James Sato",
"Thomas Schultz",
"Albert E. Kim",
"Jameson Merkow",
"Keno K. Bressem",
"Christopher P. Bridge"
] | Introduction: With the rapid advances in large language models (LLMs), there
have been numerous new open source as well as commercial models. While recent
publications have explored GPT-4 in its application to extracting information
of interest from radiology reports, there has not been a real-world comparison
of GPT-4 to different leading open-source models.
Materials and Methods: Two different and independent datasets were used. The
first dataset consists of 540 chest x-ray reports that were created at the
Massachusetts General Hospital between July 2019 and July 2021. The second
dataset consists of 500 chest x-ray reports from the ImaGenome dataset. We then
compared the commercial models GPT-3.5 Turbo and GPT-4 from OpenAI to the
open-source models Mistral-7B, Mixtral-8x7B, Llama2-13B, Llama2-70B,
QWEN1.5-72B, as well as CheXbert and CheXpert-labeler, in their ability to accurately
label the presence of multiple findings in x-ray text reports using different
prompting techniques.
Results: On the ImaGenome dataset, the best performing open-source model was
Llama2-70B with micro F1-scores of 0.972 and 0.970 for zero- and few-shot
prompts, respectively. GPT-4 achieved micro F1-scores of 0.975 and 0.984,
respectively. On the institutional dataset, the best performing open-source
model was QWEN1.5-72B with micro F1-scores of 0.952 and 0.965 for zero- and
few-shot prompting, respectively. GPT-4 achieved micro F1-scores of 0.975 and
0.973, respectively.
Conclusion: In this paper, we show that while GPT-4 is superior to
open-source models in zero-shot report labeling, the implementation of few-shot
prompting can bring open-source models on par with GPT-4. This shows that
open-source models could be a performant and privacy preserving alternative to
GPT-4 for the task of radiology report classification. | [
"cs.CL",
"cs.AI"
] | false |
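A hedged sketch of the kind of evaluation loop the abstract above implies: prompt a model to name the findings present in a report and score the multi-label predictions with micro F1. The `llm` stub, the finding list, and the prompt wording are assumptions for illustration, not the study's actual protocol.

```python
from sklearn.metrics import f1_score

FINDINGS = ["atelectasis", "cardiomegaly", "edema", "pneumonia", "pleural effusion"]

def llm(prompt: str) -> str:
    # Stub standing in for a real model call so the sketch runs end to end.
    return "pleural effusion"

def label_report(report: str, few_shot_examples: str = "") -> list[int]:
    # Few-shot examples, if any, are simply prepended to the prompt.
    prompt = (few_shot_examples
              + f"Report: {report}\n"
              + f"Which of these findings are present: {', '.join(FINDINGS)}?\nAnswer:")
    answer = llm(prompt).lower()
    return [int(f in answer) for f in FINDINGS]

reports = ["Small left pleural effusion.", "No acute cardiopulmonary process."]
gold = [[0, 0, 0, 0, 1], [0, 0, 0, 0, 0]]
pred = [label_report(r) for r in reports]
print("micro F1:", round(f1_score(gold, pred, average="micro"), 3))
```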
2402.12317 | 2024-02-19T17:37:28Z | ARKS: Active Retrieval in Knowledge Soup for Code Generation | [
"Hongjin Su",
"Shuyang Jiang",
"Yuhang Lai",
"Haoyuan Wu",
"Boao Shi",
"Che Liu",
"Qian Liu",
"Tao Yu"
] | Recently the retrieval-augmented generation (RAG) paradigm has raised much
attention for its potential in incorporating external knowledge into large
language models (LLMs) without further training. While widely explored in
natural language applications, its utilization in code generation remains
under-explored. In this paper, we introduce Active Retrieval in Knowledge Soup
(ARKS), an advanced strategy for generalizing large language models for code.
In contrast to relying on a single source, we construct a knowledge soup
integrating web search, documentation, execution feedback, and evolved code
snippets. We employ an active retrieval strategy that iteratively refines the
query and updates the knowledge soup. To assess the performance of ARKS, we
compile a new benchmark comprising realistic coding problems associated with
frequently updated libraries and long-tail programming languages. Experimental
results on ChatGPT and CodeLlama demonstrate a substantial improvement in the
average execution accuracy of ARKS on LLMs. The analysis confirms the
effectiveness of our proposed knowledge soup and active retrieval strategies,
offering rich insights into the construction of effective retrieval-augmented
code generation (RACG) pipelines. Our model, code, and data are available at
https://arks-codegen.github.io. | [
"cs.CL",
"cs.AI"
] | false |
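An illustrative sketch of an active-retrieval loop in the spirit of the abstract above: retrieve from a "knowledge soup", generate code, collect execution feedback, grow the soup, and refine the query. The `llm`, `search`, and `run_tests` functions are hypothetical stand-ins, not the released ARKS implementation.

```python
def llm(prompt: str) -> str:
    return "def solution():\n    pass"        # placeholder generation

def search(query: str, soup: list[str], k: int = 3) -> list[str]:
    # Naive lexical retrieval over the knowledge soup (docs, snippets, feedback).
    scored = sorted(soup, key=lambda doc: -sum(w in doc for w in query.split()))
    return scored[:k]

def run_tests(code: str) -> tuple[bool, str]:
    return False, "NameError: name 'pd' is not defined"   # placeholder feedback

def arks_loop(problem: str, soup: list[str], max_iters: int = 3) -> str:
    query = problem
    code = ""
    for _ in range(max_iters):
        context = "\n---\n".join(search(query, soup))
        code = llm(f"Context:\n{context}\n\nTask:\n{problem}\n\nWrite the code.")
        ok, feedback = run_tests(code)
        if ok:
            return code
        # Grow the soup with execution feedback and the evolved snippet,
        # then refine the query for the next retrieval round.
        soup.extend([feedback, code])
        query = llm(f"Rewrite this task as a retrieval query, given the error:\n"
                    f"{feedback}\n\nTask: {problem}")
    return code

soup = ["pandas.read_csv documentation ...", "snippet: df.groupby(...)"]
print(arks_loop("Load data.csv and count rows per user.", soup))
```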
2402.12352 | 2024-02-19T18:31:11Z | Graph-Based Retriever Captures the Long Tail of Biomedical Knowledge | [
"Julien Delile",
"Srayanta Mukherjee",
"Anton Van Pamel",
"Leonid Zhukov"
] | Large language models (LLMs) are transforming the way information is
retrieved with vast amounts of knowledge being summarized and presented via
natural language conversations. Yet, LLMs are prone to highlight the most
frequently seen pieces of information from the training set and to neglect the
rare ones. In the field of biomedical research, the latest discoveries are key
to academic and industrial actors, yet they are obscured by an ever-increasing
literature corpus (the information overload problem). Surfacing
new associations between biomedical entities, e.g., drugs, genes, diseases,
with LLMs becomes a challenge of capturing the long-tail knowledge of the
biomedical scientific production. To overcome this challenge, Retrieval
Augmented Generation (RAG) has been proposed to alleviate some of the
shortcomings of LLMs by augmenting the prompts with context retrieved from
external datasets. RAG methods typically select the context via maximum
similarity search over text embeddings. In this study, we show that RAG methods
leave out a significant proportion of relevant information due to clusters of
over-represented concepts in the biomedical literature. We introduce a novel
information-retrieval method that leverages a knowledge graph to downsample
these clusters and mitigate the information overload problem. Its retrieval
performance is roughly twice that of embedding-similarity alternatives in both
precision and recall. Finally, we demonstrate that both embedding
similarity and knowledge graph retrieval methods can be advantageously combined
into a hybrid model that outperforms both, enabling potential improvements to
biomedical question-answering models. | [
"cs.CL",
"cs.IR"
] | false |
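A toy sketch of the two ideas in the abstract above, under simplifying assumptions rather than the paper's exact method: cap the number of documents per knowledge-graph entity to downsample over-represented clusters, then blend embedding similarity with a graph-derived score. The "entities", "graph_score", and "alpha" fields and parameters are illustrative assumptions.

```python
import numpy as np

def hybrid_retrieve(query_emb, docs, k=5, per_entity_cap=2, alpha=0.5):
    """docs: list of dicts with 'emb' (np.ndarray), 'entities' (set), 'graph_score' (float)."""
    # 1) Downsample over-represented entity clusters: keep at most
    #    `per_entity_cap` documents per entity, preferring higher graph scores.
    kept, seen = [], {}
    for d in sorted(docs, key=lambda d: -d["graph_score"]):
        if all(seen.get(e, 0) < per_entity_cap for e in d["entities"]):
            kept.append(d)
            for e in d["entities"]:
                seen[e] = seen.get(e, 0) + 1
    # 2) Hybrid score: embedding cosine similarity blended with the graph score.
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scored = [(alpha * cosine(query_emb, d["emb"]) + (1 - alpha) * d["graph_score"], d)
              for d in kept]
    return [d for _, d in sorted(scored, key=lambda x: -x[0])[:k]]

# Three documents all tagged with the same entity: the cap keeps only two of them.
docs = [{"emb": np.random.rand(8), "entities": {"TP53"}, "graph_score": s}
        for s in (0.9, 0.8, 0.2)]
print(len(hybrid_retrieve(np.random.rand(8), docs, k=3, per_entity_cap=2)))
```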
2402.12370 | 2024-02-19T18:56:44Z | AnaloBench: Benchmarking the Identification of Abstract and Long-context
Analogies | [
"Xiao Ye",
"Andrew Wang",
"Jacob Choi",
"Yining Lu",
"Shreya Sharma",
"Lingfeng Shen",
"Vijay Tiyyala",
"Nicholas Andrews",
"Daniel Khashabi"
] | Humans regularly engage in analogical thinking, relating personal experiences
to current situations ($X$ is analogous to $Y$ because of $Z$). Analogical
thinking allows humans to solve problems in creative ways, grasp difficult
concepts, and articulate ideas more effectively. Can language models (LMs) do
the same? To answer this question, we propose ANALOBENCH, a benchmark to
determine analogical reasoning ability in LMs. Our benchmarking approach
focuses on aspects of this ability that are common among humans: (i) recalling
related experiences from a large amount of information, and (ii) applying
analogical reasoning to complex and lengthy scenarios. We test a broad
collection of proprietary models (e.g., GPT family, Claude V2) and open source
models such as LLaMA2. As in prior results, scaling up LMs results in some
performance boosts. Surprisingly, scale offers minimal gains when (i) analogies
involve lengthy scenarios, or (ii) the task requires recalling relevant
scenarios from a large pool of information, a process analogous to finding a needle in a
haystack. We hope these observations encourage further research in this field. | [
"cs.CL",
"cs.AI"
] | false |
2402.12513 | 2024-02-19T20:21:09Z | Induced Model Matching: How Restricted Models Can Help Larger Ones | [
"Usama Muneeb",
"Mesrob I. Ohannessian"
] | We consider scenarios where a very accurate predictive model using restricted
features is available at the time of training a larger, full-featured
model. This restricted model may be thought of as "side-information", derived
either from an auxiliary exhaustive dataset or on the same dataset, by forcing
the restriction. How can the restricted model be useful to the full model? We
propose an approach for transferring the knowledge of the restricted model to
the full model, by aligning the full model's context-restricted performance
with that of the restricted model. We call this methodology Induced Model
Matching (IMM) and first illustrate its general applicability by using logistic
regression as a toy example. We then explore IMM's use in language modeling,
the application that initially inspired it, and where it offers an explicit
foundation in contrast to the implicit use of restricted models in techniques
such as noising. We demonstrate the methodology on both LSTM and transformer
full models, using $N$-grams as restricted models. To further illustrate the
potential of the principle whenever it is much cheaper to collect restricted
rather than full information, we conclude with a simple RL example where POMDP
policies can improve learned MDP policies via IMM. | [
"cs.LG",
"cs.CL"
] | false |
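A toy construction of the matching idea on logistic regression, under simplifying assumptions rather than the paper's exact objective: the full model sees features (x1, x2), a restricted model trained on x1 alone supplies side-information, and a penalty pulls the full model's per-bin average prediction (an "induced" restricted view of the full model) toward the restricted model's prediction. The data, penalty form, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x1 = rng.integers(0, 5, size=n)               # restricted (binned) feature
x2 = rng.normal(size=n)                       # extra feature seen only by the full model
p = 1 / (1 + np.exp(-(x1 - 2 + 0.3 * x2)))
y = rng.binomial(1, p)

# Restricted model: an accurate predictor from x1 alone.
restricted = LogisticRegression().fit(x1.reshape(-1, 1), y)
r_pred = restricted.predict_proba(np.arange(5).reshape(-1, 1))[:, 1]

# Full model trained by gradient descent with a matching penalty (toy formulation).
X = np.column_stack([x1, x2]).astype(float)
w = np.zeros(2); b = 0.0; lr = 0.1; lam = 1.0
for _ in range(500):
    f = 1 / (1 + np.exp(-(X @ w + b)))
    grad_logit = f - y                        # gradient of log-loss w.r.t. the logit
    # Matching term: per-bin mean of full-model predictions vs restricted prediction.
    for v in range(5):
        mask = x1 == v
        gap = f[mask].mean() - r_pred[v]
        grad_logit[mask] += lam * gap * f[mask] * (1 - f[mask]) / mask.sum()
    w -= lr * X.T @ grad_logit / n
    b -= lr * grad_logit.mean()
print("learned weights:", w, "bias:", round(b, 3))
```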
2402.12556 | 2024-02-19T21:31:11Z | IMBUE: Improving Interpersonal Effectiveness through Simulation and
Just-in-time Feedback with Human-Language Model Interaction | [
"Inna Wanyin Lin",
"Ashish Sharma",
"Christopher Michael Rytting",
"Adam S. Miner",
"Jina Suh",
"Tim Althoff"
] | Navigating certain communication situations can be challenging due to
individuals' lack of skills and the interference of strong emotions. However,
effective learning opportunities are rarely accessible. In this work, we
conduct a human-centered study that uses language models to simulate bespoke
communication training and provide just-in-time feedback to support the
practice and learning of interpersonal effectiveness skills. We apply the
interpersonal effectiveness framework from Dialectical Behavioral Therapy
(DBT), DEAR MAN, which focuses on both conversational and emotional skills. We
present IMBUE, an interactive training system that provides feedback that is
25% more similar to experts' feedback than feedback generated by GPT-4. IMBUE is the
first to focus on communication skills and emotion management simultaneously,
incorporate experts' domain knowledge in providing feedback, and be grounded in
psychology theory. Through a randomized trial of 86 participants, we find that
IMBUE's simulation-only variant significantly improves participants'
self-efficacy (up to 17%) and reduces negative emotions (up to 25%). With
IMBUE's additional just-in-time feedback, participants demonstrate 17%
improvement in skill mastery, along with greater enhancements in self-efficacy
(27% more) and reduction of negative emotions (16% more) compared to
simulation-only. The improvement in skill mastery is the only measure that is
transferred to new and more difficult situations; situation-specific training
is necessary for improving self-efficacy and emotion reduction. | [
"cs.HC",
"cs.CL"
] | false |
2402.12560 | 2024-02-19T21:35:56Z | CausalGym: Benchmarking causal interpretability methods on linguistic
tasks | [
"Aryaman Arora",
"Dan Jurafsky",
"Christopher Potts"
] | Language models (LMs) have proven to be powerful tools for psycholinguistic
research, but most prior work has focused on purely behavioural measures (e.g.,
surprisal comparisons). At the same time, research in model interpretability
has begun to illuminate the abstract causal mechanisms shaping LM behavior. To
help bring these strands of research closer together, we introduce CausalGym.
We adapt and expand the SyntaxGym suite of tasks to benchmark the ability of
interpretability methods to causally affect model behaviour. To illustrate how
CausalGym can be used, we study the pythia models (14M--6.9B) and assess the
causal efficacy of a wide range of interpretability methods, including linear
probing and distributed alignment search (DAS). We find that DAS outperforms
the other methods, and so we use it to study the learning trajectory of two
difficult linguistic phenomena in pythia-1b: negative polarity item licensing
and filler--gap dependencies. Our analysis shows that the mechanism
implementing both of these tasks is learned in discrete stages, not gradually. | [
"cs.CL",
"cs.AI",
"I.2.7"
] | false |
2402.12590 | 2024-02-19T22:59:43Z | Evolving AI Collectives to Enhance Human Diversity and Enable
Self-Regulation | [
"Shiyang Lai",
"Yujin Potter",
"Junsol Kim",
"Richard Zhuang",
"Dawn Song",
"James Evans"
] | Large language models steer their behaviors based on texts generated by
others. This capacity and their increasing prevalence in online settings
portend that they will intentionally or unintentionally "program" one another
and form emergent AI subjectivities, relationships, and collectives. Here, we
call upon the research community to investigate these "society-like" properties
of interacting artificial intelligences to increase their rewards and reduce
their risks for human society and the health of online environments. We use a
simple model and its outputs to illustrate how such emergent, decentralized AI
collectives can expand the bounds of human diversity and reduce the risk of
toxic, anti-social behavior online. Finally, we discuss opportunities for AI
self-moderation and address ethical issues and design challenges associated
with creating and maintaining decentralized AI collectives. | [
"cs.CL",
"cs.CY"
] | false |
2402.14848 | 2024-02-19T16:04:53Z | Same Task, More Tokens: the Impact of Input Length on the Reasoning
Performance of Large Language Models | [
"Mosh Levy",
"Alon Jacoby",
"Yoav Goldberg"
] | This paper explores the impact of extending input lengths on the capabilities
of Large Language Models (LLMs). Despite LLMs' advancements in recent times,
their performance consistency across different input lengths is not well
understood. We investigate this aspect by introducing a novel QA reasoning
framework, specifically designed to assess the impact of input length. We
isolate the effect of input length using multiple versions of the same sample,
each being extended with padding of different lengths, types and locations. Our
findings show a notable degradation in LLMs' reasoning performance at much
shorter input lengths than their technical maximum. We show that the
degradation trend appears in every version of our dataset, although at
different intensities. Additionally, our study reveals that traditional
perplexity metrics do not correlate with the performance of LLMs in long-input
reasoning tasks. We analyse our results and identify failure modes that can
serve as useful guides for future research, potentially informing strategies to
address the limitations observed in LLMs. | [
"cs.CL",
"cs.AI"
] | true |
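A small sketch of how one might build length-controlled variants of a single QA sample, in the spirit of the setup described in the abstract above; the filler sources, target lengths, and padding locations are illustrative assumptions, not the paper's dataset construction.

```python
import random

def pad_sample(question: str, evidence: str, filler: list[str],
               target_words: int, location: str = "before") -> str:
    """Pad the evidence with filler text until the prompt reaches ~target_words."""
    parts = [evidence]
    while sum(len(p.split()) for p in parts) + len(question.split()) < target_words:
        parts.append(random.choice(filler))
    padding = " ".join(parts[1:])
    if location == "before":
        context = f"{padding}\n{evidence}"
    elif location == "after":
        context = f"{evidence}\n{padding}"
    else:                                     # "interleaved": evidence in the middle
        half = len(padding) // 2
        context = f"{padding[:half]}\n{evidence}\n{padding[half:]}"
    return f"{context}\n\nQuestion: {question}"

filler = ["Unrelated sentence about the weather.", "A generic fact about geography."]
variants = {n: pad_sample("Who wrote the report?", "The report was written by Dana.",
                          filler, target_words=n, location="interleaved")
            for n in (50, 200, 800)}
print({n: len(v.split()) for n, v in variants.items()})   # approximate word counts
```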
2402.15525 | 2024-02-19T21:50:42Z | Detecting misinformation through Framing Theory: the Frame Element-based
Model | [
"Guan Wang",
"Rebecca Frederick",
"Jinglong Duan",
"William Wong",
"Verica Rupar",
"Weihua Li",
"Quan Bai"
] | In this paper, we delve into the rapidly evolving challenge of misinformation
detection, with a specific focus on the nuanced manipulation of narrative
frames - an under-explored area within the AI community. The potential for
Generative AI models to generate misleading narratives underscores the urgency
of this problem. Drawing from communication and framing theories, we posit that
the presentation or 'framing' of accurate information can dramatically alter
its interpretation, potentially leading to misinformation. We highlight this
issue through real-world examples, demonstrating how shifts in narrative frames
can transmute fact-based information into misinformation. To tackle this
challenge, we propose an innovative approach leveraging the power of
pre-trained Large Language Models and deep neural networks to detect
misinformation originating from accurate facts portrayed under different
frames. These advanced AI techniques offer unprecedented capabilities in
identifying complex patterns within unstructured data critical for examining
the subtleties of narrative frames. The objective of this paper is to bridge a
significant research gap in the AI domain, providing valuable insights and
methodologies for tackling framing-induced misinformation, thus contributing to
the advancement of responsible and trustworthy AI technologies. Extensive
experiments are conducted, and the results clearly demonstrate the impact of
individual framing-theory elements, supporting the rationale for applying
framing theory to improve misinformation detection performance. | [
"cs.CL",
"cs.CY"
] | false |