title | id | arxiv_url | pdf_url | published_date | updated_date | authors | affiliations | summary | comment | journal_ref | doi | primary_category | categories
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Knowledge Graph Modeling-Driven Large Language Model Operating System
(LLM OS) for Task Automation in Process Engineering Problem-Solving | http://arxiv.org/abs/2408.14494v1 | http://arxiv.org/abs/2408.14494v1 | http://arxiv.org/pdf/2408.14494v1 | 2024-08-23 | 2024-08-23 | [
"Sakhinana Sagar Srinivas",
"Vijay Sri Vaikunth",
"Venkataramana Runkana"
] | [
"",
"",
""
] | We present the Process Engineering Operations Assistant (PEOA), an AI-driven
framework designed to solve complex problems in the chemical and process
industries. The framework employs a modular architecture orchestrated by a
meta-agent, which serves as the central coordinator, managing an action
generator and instruction-tuned small-scale language models (expert models).
The action generator decomposes complex problems into sub-tasks and identifies
suitable expert models to execute each, delivering precise solutions for
multi-step problem-solving. Key techniques include advanced knowledge modeling
using property graphs for improved information retrieval, facilitating more
accurate and contextually relevant solutions. Additionally, the framework
utilizes a teacher-student transfer-learning approach with GPT-4 (Omni) to
fine-tune the action generator and expert models for domain adaptation,
alongside an iterative problem-solving mechanism with sophisticated error
handling. Custom datasets were developed to evaluate the framework against
leading proprietary language models on various engineering tasks. The results
demonstrate the framework's effectiveness in automating calculations,
accelerating prototyping, and providing AI-augmented decision support for
industrial processes, marking a significant advancement in process engineering
capabilities. | Accepted for Publication by Association for the Advancement of
Artificial Intelligence, Fall Symposium Series | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
cc-DRL: a Convex Combined Deep Reinforcement Learning Flight Control
Design for a Morphing Quadrotor | http://arxiv.org/abs/2408.13054v1 | http://arxiv.org/abs/2408.13054v1 | http://arxiv.org/pdf/2408.13054v1 | 2024-08-23 | 2024-08-23 | [
"Tao Yang",
"Huai-Ning Wu",
"Jun-Wei Wang"
] | [
"",
"",
""
In comparison to common quadrotors, the shape change of morphing quadrotors
endows them with better flight performance but also results in more
complex flight dynamics. Generally, it is extremely difficult or even
impossible to establish an accurate mathematical model
describing the complex flight dynamics of morphing quadrotors. To address the issue of flight
control design for morphing quadrotors, this paper resorts to a combination of
model-free control techniques (e.g., deep reinforcement learning, DRL) and
convex combination (CC) technique, and proposes a convex-combined-DRL (cc-DRL)
flight control algorithm for position and attitude of a class of morphing
quadrotors, where the shape change is realized by the length variation of four
arm rods. In the proposed cc-DRL flight control algorithm, the proximal policy
optimization algorithm, a model-free DRL algorithm, is utilized to
train offline the corresponding optimal flight control laws for some selected
representative arm-length modes, and thereby a cc-DRL flight control scheme is
constructed by the convex combination technique. Finally, simulation results
are presented to show the effectiveness and merit of the proposed flight
control algorithm. | cs.RO | [
"cs.RO",
"cs.AI",
"cs.LG",
"cs.SY",
"eess.SY"
] |
|||
SpeechPrompt: Prompting Speech Language Models for Speech Processing
Tasks | http://arxiv.org/abs/2408.13040v1 | http://arxiv.org/abs/2408.13040v1 | http://arxiv.org/pdf/2408.13040v1 | 2024-08-23 | 2024-08-23 | [
"Kai-Wei Chang",
"Haibin Wu",
"Yu-Kai Wang",
"Yuan-Kuei Wu",
"Hua Shen",
"Wei-Cheng Tseng",
"Iu-thing Kang",
"Shang-Wen Li",
"Hung-yi Lee"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Prompting has become a practical method for utilizing pre-trained language
models (LMs). This approach offers several advantages. It allows an LM to adapt
to new tasks with minimal training and parameter updates, thus achieving
efficiency in both storage and computation. Additionally, prompting modifies
only the LM's inputs and harnesses the generative capabilities of language
models to address various downstream tasks in a unified manner. This
significantly reduces the need for human labor in designing task-specific
models. These advantages become even more evident as the number of tasks served
by the LM scales up. Motivated by the strengths of prompting, we are the first
to explore the potential of prompting speech LMs in the domain of speech
processing. Recently, there has been a growing interest in converting speech
into discrete units for language modeling. Our pioneering research demonstrates
that these quantized speech units are highly versatile within our unified
prompting framework. Not only can they serve as class labels, but they also
contain rich phonetic information that can be re-synthesized back into speech
signals for speech generation tasks. Specifically, we reformulate speech
processing tasks into speech-to-unit generation tasks. As a result, we can
seamlessly integrate tasks such as speech classification, sequence generation,
and speech generation within a single, unified prompting framework. The
experiment results show that the prompting method can achieve competitive
performance compared to the strong fine-tuning method based on self-supervised
learning models with a similar number of trainable parameters. The prompting
method also shows promising results in the few-shot setting. Moreover, with
advanced speech LMs coming onto the stage, the proposed prompting framework
shows great potential. | Published in IEEE/ACM Transactions on Audio, Speech, and Language
Processing (TASLP) | in IEEE/ACM Transactions on Audio, Speech, and Language
Processing, vol. 32, pp. 3730-3744, 2024 | 10.1109/TASLP.2024.3436618 | eess.AS | [
"eess.AS",
"cs.AI",
"cs.CL",
"cs.LG"
] |
VFM-Det: Towards High-Performance Vehicle Detection via Large Foundation
Models | http://arxiv.org/abs/2408.13031v1 | http://arxiv.org/abs/2408.13031v1 | http://arxiv.org/pdf/2408.13031v1 | 2024-08-23 | 2024-08-23 | [
"Wentao Wu",
"Fanghua Hong",
"Xiao Wang",
"Chenglong Li",
"Jin Tang"
] | [
"",
"",
"",
"",
""
] | Existing vehicle detectors are usually obtained by training a typical
detector (e.g., YOLO, RCNN, DETR series) on vehicle images based on a
pre-trained backbone (e.g., ResNet, ViT). Some researchers also exploit and
enhance the detection performance using pre-trained large foundation models.
However, we argue these detectors may only achieve sub-optimal results because the
large models they use are not specifically designed for vehicles. In addition,
their results heavily rely on visual features, and they seldom consider the
alignment between the vehicle's semantic information and visual
representations. In this work, we propose a new vehicle detection paradigm
based on a pre-trained foundation vehicle model (VehicleMAE) and a large
language model (T5), termed VFM-Det. It follows the region proposal-based
detection framework and the features of each proposal can be enhanced using
VehicleMAE. More importantly, we propose a new VAtt2Vec module that predicts
the vehicle semantic attributes of these proposals and transforms them into
feature vectors to enhance the vision features via contrastive learning.
Extensive experiments on three vehicle detection benchmark datasets thoroughly
prove the effectiveness of our vehicle detector. Specifically, our model
improves the baseline approach by $+5.1\%$ and $+6.2\%$ on the $AP_{0.5}$ and
$AP_{0.75}$ metrics, respectively, on the Cityscapes dataset. The source code of
this work will be released at https://github.com/Event-AHU/VFM-Det. | In Peer Review | cs.CV | [
"cs.CV",
"cs.AI",
"cs.NE"
] |
||
BoostTrack++: using tracklet information to detect more objects in
multiple object tracking | http://arxiv.org/abs/2408.13003v1 | http://arxiv.org/abs/2408.13003v1 | http://arxiv.org/pdf/2408.13003v1 | 2024-08-23 | 2024-08-23 | [
"Vukašin Stanojević",
"Branimir Todorović"
] | [
"",
""
] | Multiple object tracking (MOT) depends heavily on selection of true positive
detected bounding boxes. However, this aspect of the problem is mostly
overlooked or mitigated by employing two-stage association and utilizing low
confidence detections in the second stage. The recently proposed BoostTrack
attempts to avoid the drawbacks of the multi-stage association approach and to use
low-confidence detections by applying detection confidence boosting. In this
paper, we identify the limitations of the confidence boost used in BoostTrack
and propose a method to improve its performance. To construct a richer
similarity measure and enable a better selection of true positive detections,
we propose to use a combination of shape, Mahalanobis distance, and a novel soft
BIoU similarity. We propose a soft detection confidence boost technique which
calculates new confidence scores based on the similarity measure and the
previous confidence scores, and we introduce a varying similarity threshold to
account for the lower similarity measure between detections and tracklets which are
not regularly updated. The proposed additions are mutually independent and can
be used in any MOT algorithm.
Combined with the BoostTrack+ baseline, our method achieves near state-of-the-art
results on the MOT17 dataset and new state-of-the-art HOTA and IDF1 scores
on the MOT20 dataset.
The source code is available at:
https://github.com/vukasin-stanojevic/BoostTrack . | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
CRUXEval-X: A Benchmark for Multilingual Code Reasoning, Understanding
and Execution | http://arxiv.org/abs/2408.13001v1 | http://arxiv.org/abs/2408.13001v1 | http://arxiv.org/pdf/2408.13001v1 | 2024-08-23 | 2024-08-23 | [
"Ruiyang Xu",
"Jialun Cao",
"Yaojie Lu",
"Hongyu Lin",
"Xianpei Han",
"Ben He",
"Shing-Chi Cheung",
"Le Sun"
] | [
"",
"",
"",
"",
"",
"",
"",
""
] | Code benchmarks such as HumanEval are widely adopted to evaluate Large
Language Models' (LLMs) coding capabilities. However, there is a non-negligible
programming language bias in existing code benchmarks -- over 95% of code
generation benchmarks are dominated by Python, leaving the LLMs' capabilities
in other programming languages such as Java and C/C++ unknown. Moreover, coding
task bias is also crucial. Most benchmarks focus on code generation capability,
while benchmarks for code reasoning (given input, reasoning output; and given
output, reasoning input), an essential coding capability, are insufficient.
Yet, constructing multi-lingual benchmarks can be expensive and
labor-intensive, and code on contest websites such as LeetCode suffers from
data contamination during training. To fill this gap, we propose CRUXEVAL-X, a
multi-lingual code reasoning benchmark that contains 19 programming languages.
It comprises at least 600 subjects for each language, along with 19K
content-consistent tests in total. In particular, the construction pipeline of
CRUXEVAL-X works in a fully automated and test-guided manner, which iteratively
generates and repairs based on execution feedback. Also, to cross language
barriers (e.g., dynamic/static type systems in Python/C++), we formulated
various transition rules between language pairs to facilitate translation. Our
intensive evaluation of 24 representative LLMs reveals the correlation between
language pairs. For example, TypeScript and JavaScript show a significant
positive correlation, while Racket has less correlation with other languages.
More interestingly, even a model trained solely on Python can achieve at most
34.4% Pass@1 in other languages, revealing the cross-language generalization of
LLMs. | 13 pages | cs.AI | [
"cs.AI"
] |
||
Enhancing Knowledge Tracing with Concept Map and Response
Disentanglement | http://arxiv.org/abs/2408.12996v1 | http://arxiv.org/abs/2408.12996v1 | http://arxiv.org/pdf/2408.12996v1 | 2024-08-23 | 2024-08-23 | [
"Soonwook Park",
"Donghoon Lee",
"Hogun Park"
] | [
"",
"",
""
] | In the rapidly advancing realm of educational technology, it becomes critical
to accurately trace and understand student knowledge states. Conventional
Knowledge Tracing (KT) models have mainly focused on binary responses (i.e.,
correct and incorrect answers) to questions. Unfortunately, they largely
overlook the essential information in students' actual answer choices,
particularly for Multiple Choice Questions (MCQs), which could help reveal each
learner's misconceptions or knowledge gaps. To tackle these challenges, we
propose the Concept map-driven Response disentanglement method for enhancing
Knowledge Tracing (CRKT) model. CRKT benefits KT by directly leveraging answer
choices--beyond merely identifying correct or incorrect answers--to distinguish
responses with different incorrect choices. We further introduce the novel use
of unchosen responses by employing disentangled representations to get insights
from options not selected by students. Additionally, CRKT tracks the student's
knowledge state at the concept level and encodes the concept map, representing
the relationships between them, to better predict unseen concepts. This
approach is expected to provide actionable feedback, improving the learning
experience. Our comprehensive experiments across multiple datasets demonstrate
CRKT's effectiveness, achieving superior performance in prediction accuracy and
interpretability over state-of-the-art models. | Accepted to Knowledge-Based Systems Journal | 10.1016/j.knosys.2024.112346 | cs.AI | [
"cs.AI",
"cs.LG"
] |
|
RIFF: Inducing Rules for Fraud Detection from Decision Trees | http://arxiv.org/abs/2408.12989v1 | http://arxiv.org/abs/2408.12989v1 | http://arxiv.org/pdf/2408.12989v1 | 2024-08-23 | 2024-08-23 | [
"João Lucas Martins",
"João Bravo",
"Ana Sofia Gomes",
"Carlos Soares",
"Pedro Bizarro"
] | [
"",
"",
"",
"",
""
] | Financial fraud is the cause of multi-billion dollar losses annually.
Traditionally, fraud detection systems rely on rules due to their transparency
and interpretability, key features in domains where decisions need to be
explained. However, rule systems require significant input from domain experts
to create and tune, an issue that rule induction algorithms attempt to mitigate
by inferring rules directly from data. We explore the application of these
algorithms to fraud detection, where rule systems are constrained to have a low
false positive rate (FPR) or alert rate, by proposing RIFF, a rule induction
algorithm that distills a low FPR rule set directly from decision trees. Our
experiments show that the induced rules are often able to maintain or improve the
performance of the original models for low-FPR tasks, while substantially
reducing their complexity and outperforming rules hand-tuned by experts. | Published as a conference paper at RuleML+RR 2024 | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
Zeoformer: Coarse-Grained Periodic Graph Transformer for OSDA-Zeolite
Affinity Prediction | http://arxiv.org/abs/2408.12984v2 | http://arxiv.org/abs/2408.12984v2 | http://arxiv.org/pdf/2408.12984v2 | 2024-08-23 | 2024-08-26 | [
"Xiangxiang Shen",
"Zheng Wan",
"Lingfeng Wen",
"Licheng Sun",
"Ou Yang Ming Jie",
"Xuan Tang",
"Xian Zeng",
"Mingsong Chen",
"Xiao He",
"Xian Wei"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | To date, the International Zeolite Association Structure Commission (IZA-SC)
has cataloged merely 255 distinct zeolite structures, with millions of
theoretically possible structures yet to be discovered. The synthesis of a
specific zeolite typically necessitates the use of an organic
structure-directing agent (OSDA), since the selectivity for a particular
zeolite is largely determined by the affinity between the OSDA and the zeolite.
Therefore, finding the best-affinity OSDA-zeolite pair is the key to the
synthesis of a targeted zeolite. However, OSDA-zeolite pairs frequently exhibit
complex geometric structures, i.e., a complex crystal structure formed by a
large number of atoms. Although some existing machine learning methods can
represent the periodicity of crystals, they cannot accurately represent crystal
structures with local variability. To address this issue, we propose a novel
approach called Zeoformer, which can effectively represent coarse-grained
crystal periodicity and fine-grained local variability. Zeoformer reconstructs
the unit cell centered around each atom and encodes the pairwise distances
between this central atom and other atoms within the reconstructed unit cell.
The introduction of pairwise distances within the reconstructed unit cell more
effectively represents the overall structure of the unit cell and the
differences between different unit cells, enabling the model to more accurately
and efficiently predict the properties of OSDA-zeolite pairs and general
crystal structures. Through comprehensive evaluation, our Zeoformer model
demonstrates the best performance on OSDA-zeolite pair datasets and two types
of crystal material datasets. | 7 pages, 5 figures | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cs.AI"
] |
||
QD-VMR: Query Debiasing with Contextual Understanding Enhancement for
Video Moment Retrieval | http://arxiv.org/abs/2408.12981v1 | http://arxiv.org/abs/2408.12981v1 | http://arxiv.org/pdf/2408.12981v1 | 2024-08-23 | 2024-08-23 | [
"Chenghua Gao",
"Min Li",
"Jianshuo Liu",
"Junxing Ren",
"Lin Chen",
"Haoyu Liu",
"Bo Meng",
"Jitao Fu",
"Wenwen Su"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Video Moment Retrieval (VMR) aims to retrieve relevant moments of an
untrimmed video corresponding to the query. While cross-modal interaction
approaches have shown progress in filtering out query-irrelevant information in
videos, they assume the precise alignment between the query semantics and the
corresponding video moments, potentially overlooking the misunderstanding of
the natural language semantics. To address this challenge, we propose a novel
model called \textit{QD-VMR}, a query debiasing model with enhanced contextual
understanding. Firstly, we leverage a Global Partial Aligner module via video
clip and query feature alignment and video-query contrastive learning to
enhance the cross-modal understanding capabilities of the model. Subsequently,
we employ a Query Debiasing Module to obtain debiased query features
efficiently, and a Visual Enhancement module to refine the video features
related to the query. Finally, we adopt the DETR structure to predict the
possible target video moments. Through extensive evaluations of three benchmark
datasets, QD-VMR achieves state-of-the-art performance, proving its potential
to improve the accuracy of VMR. Further analytical experiments demonstrate the
effectiveness of our proposed module. Our code will be released to facilitate
future research. | 9 pages, 4 figures, 4 tables | cs.AI | [
"cs.AI"
] |
||
Open Llama2 Model for the Lithuanian Language | http://arxiv.org/abs/2408.12963v1 | http://arxiv.org/abs/2408.12963v1 | http://arxiv.org/pdf/2408.12963v1 | 2024-08-23 | 2024-08-23 | [
"Artūras Nakvosas",
"Povilas Daniušis",
"Vytas Mulevičius"
] | [
"",
"",
""
] | In this paper, we propose and describe the first open Llama2 large language
models (LLMs) for the Lithuanian language, including an accompanying
question/answer (Q/A) dataset and translations of popular LLM benchmarks. We
provide a brief review of open regional LLMs and detailed information on the
proposed LLMs and their training process. We also conduct an empirical
evaluation, comparing the perplexities of the proposed LLMs with those of other
modern open LLMs. In addition, benchmarking the proposed LLMs against language
understanding tasks reveals that high-quality pretraining datasets may be
essential for achieving models that perform efficiently on these benchmarks.
The full realisations of the described LLMs are available in the accompanying
open repository~\url{https://huggingface.co/neurotechnology}. | 12 pages, 8 figures, 5 tables | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
||
Multimodal Contrastive In-Context Learning | http://arxiv.org/abs/2408.12959v1 | http://arxiv.org/abs/2408.12959v1 | http://arxiv.org/pdf/2408.12959v1 | 2024-08-23 | 2024-08-23 | [
"Yosuke Miyanishi",
"Minh Le Nguyen"
] | [
"",
""
] | The rapid growth of Large Language Models (LLMs) usage has highlighted the
importance of gradient-free in-context learning (ICL). However, interpreting
their inner workings remains challenging. This paper introduces a novel
multimodal contrastive in-context learning framework to enhance our
understanding of ICL in LLMs. First, we present a contrastive learning-based
interpretation of ICL in real-world settings, marking the distance of the
key-value representation as the differentiator in ICL. Second, we develop an
analytical framework to address biases in multimodal input formatting for
real-world datasets. We demonstrate the effectiveness of ICL examples where
baseline performance is poor, even when they are represented in unseen formats.
Lastly, we propose an on-the-fly approach for ICL (Anchored-by-Text ICL) that
demonstrates effectiveness in detecting hateful memes, a task where typical ICL
struggles due to resource limitations. Extensive experiments on multimodal
datasets reveal that our approach significantly improves ICL performance across
various scenarios, such as challenging tasks and resource-constrained
environments. Moreover, it provides valuable insights into the mechanisms of
in-context learning in LLMs. Our findings have important implications for
developing more interpretable, efficient, and robust multimodal AI systems,
especially in challenging tasks and resource-constrained environments. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
Informational Embodiment: Computational role of information structure in
codes and robots | http://arxiv.org/abs/2408.12950v1 | http://arxiv.org/abs/2408.12950v1 | http://arxiv.org/pdf/2408.12950v1 | 2024-08-23 | 2024-08-23 | [
"Alexandre Pitti",
"Kohei Nakajima",
"Yasuo Kuniyoshi"
] | [
"",
"",
""
] | The body morphology plays an important role in the way information is
perceived and processed by an agent. We present an information-theoretic (IT)
account of how the precision of sensors, the accuracy of motors, their
placement, and the body geometry shape the information structure in robots and
computational codes. As an original idea, we envision the robot's body as a
physical communication channel through which information is conveyed, in and
out, despite intrinsic noise and material limitations. Following this, entropy,
a measure of information and uncertainty, can be used to maximize the
efficiency of robot design and of algorithmic codes per se. This is known as
the principle of Entropy Maximization (PEM) introduced in biology by Barlow in
1969. Shannon's source coding theorem then provides a framework to compare
different types of bodies in terms of sensorimotor information. In line with
PEM, we introduce a special class of efficient codes used in IT that reach
the Shannon limits in terms of information capacity, error correction,
robustness against noise, and parsimony. These efficient codes, which
insightfully exploit quantization and randomness, make it possible to deal with
uncertainty, redundancy, and compactness. These features can be used for perception and control
in intelligent systems. In various examples and closing discussions, we reflect
on the broader implications of our framework that we called Informational
Embodiment to motor theory and bio-inspired robotics, touching upon concepts
like motor synergies, reservoir computing, and morphological computation. These
insights can contribute to a deeper understanding of how information theory
intersects with the embodiment of intelligence in both natural and artificial
systems. | cs.RO | [
"cs.RO",
"cs.AI",
"cs.IT",
"math.IT"
] |
|||
Causal-Guided Active Learning for Debiasing Large Language Models | http://arxiv.org/abs/2408.12942v1 | http://arxiv.org/abs/2408.12942v1 | http://arxiv.org/pdf/2408.12942v1 | 2024-08-23 | 2024-08-23 | [
"Zhouhao Sun",
"Li Du",
"Xiao Ding",
"Yixuan Ma",
"Kaitao Qiu",
"Ting Liu",
"Bing Qin"
] | [
"",
"",
"",
"",
"",
"",
""
] | Although achieving promising performance, recent analyses show that current
generative large language models (LLMs) may still capture dataset biases and
utilize them for generation, leading to poor generalizability and harmfulness
of LLMs. However, due to the diversity of dataset biases and the
over-optimization problem, previous prior-knowledge-based debiasing methods and
fine-tuning-based debiasing methods may not be suitable for current LLMs. To
address this issue, we explore combining active learning with causal
mechanisms and propose a causal-guided active learning (CAL) framework, which
utilizes the LLM itself to automatically and autonomously identify informative
biased samples and induce the bias patterns. Then a cost-effective and
efficient in-context learning based method is employed to prevent LLMs from
utilizing dataset biases during generation. Experimental results show that CAL
can effectively recognize typical biased instances and induce various bias
patterns for debiasing LLMs. | ACL main conference | cs.CL | [
"cs.CL",
"cs.AI"
] |
||
iSee: Advancing Multi-Shot Explainable AI Using Case-based
Recommendations | http://arxiv.org/abs/2408.12941v1 | http://arxiv.org/abs/2408.12941v1 | http://arxiv.org/pdf/2408.12941v1 | 2024-08-23 | 2024-08-23 | [
"Anjana Wijekoon",
"Nirmalie Wiratunga",
"David Corsar",
"Kyle Martin",
"Ikechukwu Nkisi-Orji",
"Chamath Palihawadana",
"Marta Caro-Martínez",
"Belen Díaz-Agudo",
"Derek Bridge",
"Anne Liret"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Explainable AI (XAI) can greatly enhance user trust and satisfaction in
AI-assisted decision-making processes. Recent findings suggest that a single
explainer may not meet the diverse needs of multiple users in an AI system;
indeed, even individual users may require multiple explanations. This
highlights the necessity for a "multi-shot" approach, employing a combination
of explainers to form what we introduce as an "explanation strategy". Tailored
to a specific user or a user group, an "explanation experience" describes
interactions with personalised strategies designed to enhance their AI
decision-making processes. The iSee platform is designed for the intelligent
sharing and reuse of explanation experiences, using Case-based Reasoning to
advance best practices in XAI. The platform provides tools that enable AI
system designers, i.e. design users, to design and iteratively revise the most
suitable explanation strategy for their AI system to satisfy end-user needs.
All knowledge generated within the iSee platform is formalised by the iSee
ontology for interoperability. We use a summative mixed methods study protocol
to evaluate the usability and utility of the iSee platform with six design
users across varying levels of AI and XAI expertise. Our findings confirm that
the iSee platform effectively generalises across applications and has the potential
to promote the adoption of XAI best practices. | Accepted to appear at the ECAI-PAIS 2024 main conference proceedings | cs.AI | [
"cs.AI",
"cs.HC",
"cs.IR"
] |
||
Smooth InfoMax -- Towards easier Post-Hoc interpretability | http://arxiv.org/abs/2408.12936v1 | http://arxiv.org/abs/2408.12936v1 | http://arxiv.org/pdf/2408.12936v1 | 2024-08-23 | 2024-08-23 | [
"Fabian Denoodt",
"Bart de Boer",
"José Oramas"
] | [
"",
"",
""
] | We introduce Smooth InfoMax (SIM), a novel method for self-supervised
representation learning that incorporates an interpretability constraint into
the learned representations at various depths of the neural network. SIM's
architecture is split into probabilistic modules, each locally optimized
using the InfoNCE bound. Inspired by VAEs, the representations from these
modules are designed to be samples from Gaussian distributions and are further
constrained to be close to the standard normal distribution. This results in a
smooth and predictable space, enabling traversal of the latent space through a
decoder for easier post-hoc analysis of the learned representations. We
evaluate SIM's performance on sequential speech data, showing that it performs
competitively with its less interpretable counterpart, Greedy InfoMax (GIM).
Moreover, we provide insights into SIM's internal representations,
demonstrating that the contained information is less entangled throughout the
representation and more concentrated in a smaller subset of the dimensions.
This further highlights the improved interpretability of SIM. | cs.LG | [
"cs.LG",
"cs.AI"
] |
|||
Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural
Framework for AI Safety with Challenges and Mitigations | http://arxiv.org/abs/2408.12935v1 | http://arxiv.org/abs/2408.12935v1 | http://arxiv.org/pdf/2408.12935v1 | 2024-08-23 | 2024-08-23 | [
"Chen Chen",
"Ziyao Liu",
"Weifeng Jiang",
"Goh Si Qi",
"KwoK-Yan Lam"
] | [
"",
"",
"",
"",
""
] | AI Safety is an emerging area of critical importance to the safe adoption and
deployment of AI systems. With the rapid proliferation of AI and especially
with the recent advancement of Generative AI (or GAI), the technology ecosystem
behind the design, development, adoption, and deployment of AI systems has
drastically changed, broadening the scope of AI Safety to address impacts on
public safety and national security. In this paper, we propose a novel
architectural framework for understanding and analyzing AI Safety, defining its
characteristics from three perspectives: Trustworthy AI, Responsible AI, and
Safe AI. We provide an extensive review of current research and advancements in
AI safety from these perspectives, highlighting their key challenges and
mitigation approaches. Through examples from state-of-the-art technologies,
particularly Large Language Models (LLMs), we present innovative mechanisms,
methodologies, and techniques for designing and testing AI safety. Our goal is
to promote advancement in AI safety research, and ultimately enhance people's
trust in digital transformation. | cs.AI | [
"cs.AI"
] |
|||
Abductive and Contrastive Explanations for Scoring Rules in Voting | http://arxiv.org/abs/2408.12927v2 | http://arxiv.org/abs/2408.12927v2 | http://arxiv.org/pdf/2408.12927v2 | 2024-08-23 | 2024-08-26 | [
"Clément Contet",
"Umberto Grandi",
"Jérôme Mengin"
] | [
"",
"",
""
] | We view voting rules as classifiers that assign a winner (a class) to a
profile of voters' preferences (an instance). We propose to apply techniques
from formal explainability, most notably abductive and contrastive
explanations, to identify minimal subsets of a preference profile that either
imply the current winner or explain why a different candidate was not elected.
Formal explanations turn out to have strong connections with classical problems
studied in computational social choice such as bribery, possible and necessary
winner identification, and preference learning. We design algorithms for
computing abductive and contrastive explanations for scoring rules. For the
Borda rule, we find a lower bound on the size of the smallest abductive
explanations, and we conduct simulations to identify correlations between
properties of preference profiles and the size of their smallest abductive
explanations. | 10 pages, 2 figures. Extended version of a paper in the proceedings of
ECAI 2024 | cs.AI | [
"cs.AI"
] |
||
What Do You Want? User-centric Prompt Generation for Text-to-image
Synthesis via Multi-turn Guidance | http://arxiv.org/abs/2408.12910v1 | http://arxiv.org/abs/2408.12910v1 | http://arxiv.org/pdf/2408.12910v1 | 2024-08-23 | 2024-08-23 | [
"Yilun Liu",
"Minggui He",
"Feiyu Yao",
"Yuhe Ji",
"Shimin Tao",
"Jingzhou Du",
"Duan Li",
"Jian Gao",
"Li Zhang",
"Hao Yang",
"Boxing Chen",
"Osamu Yoshie"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | The emergence of text-to-image synthesis (TIS) models has significantly
influenced digital image creation by producing high-quality visuals from
written descriptions. Yet these models heavily rely on the quality and
specificity of textual prompts, posing a challenge for novice users who may not
be familiar with TIS-model-preferred prompt writing. Existing solutions alleviate
this via automatic model-preferred prompt generation from user queries.
However, this single-turn manner suffers from limited user-centricity in terms
of result interpretability and user interactivity. To address these issues, we
propose DialPrompt, a multi-turn dialogue-based TIS prompt generation model
that emphasises user-centricity. DialPrompt is designed to follow a multi-turn
guidance workflow, where in each round of dialogue the model queries the user about
their preferences on possible optimization dimensions before generating the
final TIS prompt. To achieve this, we mined 15 essential dimensions for
high-quality prompts from advanced users and curated a multi-turn dataset.
Through training on this dataset, DialPrompt can improve interpretability by
allowing users to understand the correlation between specific phrases and image
attributes. Additionally, it enables greater user control and engagement in the
prompt generation process, leading to more personalized and visually satisfying
outputs. Experiments indicate that DialPrompt achieves a competitive result in
the quality of synthesized images, outperforming existing prompt engineering
approaches by 5.7%. Furthermore, in our user evaluation, DialPrompt outperforms
existing approaches by 46.5% in user-centricity score and is rated 7.9/10 by 19
human reviewers. | cs.AI | [
"cs.AI"
] |
|||
CSPs with Few Alien Constraints | http://arxiv.org/abs/2408.12909v2 | http://arxiv.org/abs/2408.12909v2 | http://arxiv.org/pdf/2408.12909v2 | 2024-08-23 | 2024-08-27 | [
"Peter Jonsson",
"Victor Lagerkvist",
"George Osipov"
] | [
"",
"",
""
] | The constraint satisfaction problem asks to decide if a set of constraints
over a relational structure $\mathcal{A}$ is satisfiable (CSP$(\mathcal{A})$).
We consider CSP$(\mathcal{A} \cup \mathcal{B})$ where $\mathcal{A}$ is a
structure and $\mathcal{B}$ is an alien structure, and analyse its
(parameterized) complexity when at most $k$ alien constraints are allowed. We
establish connections and obtain transferable complexity results to several
well-studied problems that previously escaped classification attempts. Our
novel approach, utilizing logical and algebraic methods, yields an FPT versus
pNP dichotomy for arbitrary finite structures and sharper dichotomies for
Boolean structures and first-order reducts of $(\mathbb{N},=)$ (equality CSPs),
together with many partial results for general $\omega$-categorical structures. | cs.CC | [
"cs.CC",
"cs.AI"
] |
|||
IAA: Inner-Adaptor Architecture Empowers Frozen Large Language Model
with Multimodal Capabilities | http://arxiv.org/abs/2408.12902v1 | http://arxiv.org/abs/2408.12902v1 | http://arxiv.org/pdf/2408.12902v1 | 2024-08-23 | 2024-08-23 | [
"Bin Wang",
"Chunyu Xie",
"Dawei Leng",
"Yuhui Yin"
] | [
"",
"",
"",
""
] | In the field of multimodal large language models (MLLMs), common methods
typically involve unfreezing the language model during training to foster
profound visual understanding. However, the fine-tuning of such models with
vision-language data often leads to a diminution of their natural language
processing (NLP) capabilities. To avoid this performance degradation, a
straightforward solution is to freeze the language model while developing
multimodal competencies. Unfortunately, previous works have not attained
satisfactory outcomes. Building on the strategy of freezing the language model,
we conduct thorough structural exploration and introduce the Inner-Adaptor
Architecture (IAA). Specifically, the architecture incorporates multiple
multimodal adaptors at varying depths within the large language model to
facilitate direct interaction with the inherently text-oriented transformer
layers, thereby enabling the frozen language model to acquire multimodal
capabilities. Unlike previous approaches of freezing language models that
require large-scale aligned data, our proposed architecture is able to achieve
superior performance on small-scale datasets. We conduct extensive experiments
to improve the general multimodal capabilities and visual grounding abilities
of the MLLM. Our approach remarkably outperforms previous state-of-the-art
methods across various vision-language benchmarks without sacrificing
performance on NLP tasks. Code and models are available at
https://github.com/360CVGroup/Inner-Adaptor-Architecture. | cs.AI | [
"cs.AI",
"cs.CL",
"cs.LG"
] |
|||
Multiple Areal Feature Aware Transportation Demand Prediction | http://arxiv.org/abs/2408.12890v1 | http://arxiv.org/abs/2408.12890v1 | http://arxiv.org/pdf/2408.12890v1 | 2024-08-23 | 2024-08-23 | [
"Sumin Han",
"Jisun An",
"Youngjun Park",
"Suji Kim",
"Kitae Jang",
"Dongman Lee"
] | [
"",
"",
"",
"",
"",
""
] | A reliable short-term transportation demand prediction supports the
authorities in improving the capability of systems by optimizing schedules,
adjusting fleet sizes, and generating new transit networks. A handful of
research efforts incorporate one or a few areal features while learning
spatio-temporal correlation, to capture similar demand patterns between similar
areas. However, urban characteristics are polymorphic, and they need to be
understood by multiple areal features such as land use, sociodemographics, and
place-of-interest (POI) distribution. In this paper, we propose a novel
spatio-temporal multi-feature-aware graph convolutional recurrent network
(ST-MFGCRN) that fuses multiple areal features during spatio-temporal
understanding. Inside ST-MFGCRN, we devise sentinel attention to calculate the
areal similarity matrix by allowing each area to take partial attention if the
feature is not useful. We evaluate the proposed model on two real-world
transportation datasets, one with our constructed BusDJ dataset and one with
benchmark TaxiBJ. Results show that our model outperforms the state-of-the-art
baselines up to 7\% on BusDJ and 8\% on TaxiBJ dataset. | cs.AI | [
"cs.AI"
] |
|||
Spatio-Temporal Road Traffic Prediction using Real-time Regional
Knowledge | http://arxiv.org/abs/2408.12882v1 | http://arxiv.org/abs/2408.12882v1 | http://arxiv.org/pdf/2408.12882v1 | 2024-08-23 | 2024-08-23 | [
"Sumin Han",
"Jisun An",
"Dongman Lee"
] | [
"",
"",
""
] | For traffic prediction in transportation services such as car-sharing and
ride-hailing, mid-term road traffic prediction (within a few hours) is
considered essential. However, existing road-level traffic prediction has
mainly studied, in short-term settings, how significantly micro traffic events
propagate to adjacent roads. On the other hand, recent attempts
have been made to incorporate regional knowledge such as POIs, road
characteristics, and real-time social events to help traffic prediction.
However, these studies lack an understanding of the different modalities of
road-level and region-level spatio-temporal correlations and of how to combine
such knowledge. This paper proposes a novel method that embeds real-time
region-level knowledge using POIs, satellite images, and real-time LTE access
traces via a regional spatio-temporal module that consists of dynamic
convolution and temporal attention, and conducts bipartite spatial transform
attention to convert into road-level knowledge. Then the model ingests this
embedded knowledge into a road-level attention-based prediction model.
Experimental results on real-world road traffic prediction show that our model
outperforms the baselines. | cs.AI | [
"cs.AI"
] |
|||
Has Multimodal Learning Delivered Universal Intelligence in Healthcare?
A Comprehensive Survey | http://arxiv.org/abs/2408.12880v1 | http://arxiv.org/abs/2408.12880v1 | http://arxiv.org/pdf/2408.12880v1 | 2024-08-23 | 2024-08-23 | [
"Qika Lin",
"Yifan Zhu",
"Xin Mei",
"Ling Huang",
"Jingying Ma",
"Kai He",
"Zhen Peng",
"Erik Cambria",
"Mengling Feng"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | The rapid development of artificial intelligence has constantly reshaped the
field of intelligent healthcare and medicine. As a vital technology, multimodal
learning has increasingly garnered interest due to data complementarity,
comprehensive modeling form, and great application potential. Currently,
numerous researchers are dedicating their attention to this field, conducting
extensive studies and constructing abundant intelligent systems. Naturally, an
open question arises: has multimodal learning delivered universal
intelligence in healthcare? To answer this question, we adopt three unique
viewpoints for a holistic analysis. Firstly, we conduct a comprehensive survey
of the current progress of medical multimodal learning from the perspectives of
datasets, task-oriented methods, and universal foundation models. Based on
them, we further discuss the proposed question along five issues to explore the
real impacts of advanced techniques in healthcare, from data and technologies
to performance and ethics. The answer is that current technologies have NOT
achieved universal intelligence and there remains a significant journey to
undertake. Finally, in light of the above reviews and discussions, we point out
ten potential directions for exploration towards the goal of universal
intelligence in healthcare. | 21 pages, 6 figures | cs.AI | [
"cs.AI"
] |
||
Frequency-aware Feature Fusion for Dense Image Prediction | http://arxiv.org/abs/2408.12879v1 | http://arxiv.org/abs/2408.12879v1 | http://arxiv.org/pdf/2408.12879v1 | 2024-08-23 | 2024-08-23 | [
"Linwei Chen",
"Ying Fu",
"Lin Gu",
"Chenggang Yan",
"Tatsuya Harada",
"Gao Huang"
] | [
"",
"",
"",
"",
"",
""
] | Dense image prediction tasks demand features with strong category information
and precise spatial boundary details at high resolution. To achieve this,
modern hierarchical models often utilize feature fusion, directly adding
upsampled coarse features from deep layers and high-resolution features from
lower levels. In this paper, we observe rapid variations in fused feature
values within objects, resulting in intra-category inconsistency due to
disturbed high-frequency features. Additionally, blurred boundaries in fused
features lack accurate high frequency, leading to boundary displacement.
Building upon these observations, we propose Frequency-Aware Feature Fusion
(FreqFusion), integrating an Adaptive Low-Pass Filter (ALPF) generator, an
offset generator, and an Adaptive High-Pass Filter (AHPF) generator. The ALPF
generator predicts spatially-variant low-pass filters to attenuate
high-frequency components within objects, reducing intra-class inconsistency
during upsampling. The offset generator refines large inconsistent features and
thin boundaries by replacing inconsistent features with more consistent ones
through resampling, while the AHPF generator enhances high-frequency detailed
boundary information lost during downsampling. Comprehensive visualization and
quantitative analysis demonstrate that FreqFusion effectively improves feature
consistency and sharpens object boundaries. Extensive experiments across
various dense prediction tasks confirm its effectiveness. The code is made
publicly available at https://github.com/Linwei-Chen/FreqFusion. | Accepted by TPAMI (2024) | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
Flexible categorization using formal concept analysis and
Dempster-Shafer theory | http://arxiv.org/abs/2408.15012v1 | http://arxiv.org/abs/2408.15012v1 | http://arxiv.org/pdf/2408.15012v1 | 2024-08-23 | 2024-08-23 | [
"Marcel Boersma",
"Krishna Manoorkar",
"Alessandra Palmigiano",
"Mattia Panettiere",
"Apostolos Tzimoulis",
"Nachoem Wijnberg"
] | [
"",
"",
"",
"",
"",
""
] | Categorization of business processes is an important part of auditing. Large
amounts of transactional data in auditing can be represented as transactions
between financial accounts using weighted bipartite graphs. We view such
bipartite graphs as many-valued formal contexts, which we use to obtain
explainable categorization of these business processes in terms of financial
accounts involved in a business process by using methods in formal concept
analysis. We use Dempster-Shafer mass functions to represent agendas showing
different interests in different sets of financial accounts. We also model some
possible deliberation scenarios between agents with different interrogative
agendas to reach an aggregated agenda and categorization. The framework
developed in this paper provides a formal ground to obtain and study
explainable categorizations from the data represented as bipartite graphs
according to the agendas of different agents in an organization (e.g. an audit
firm), and interaction between these through deliberation. We use this
framework to describe a machine-learning meta-algorithm for outlier detection
and classification that can provide local and global explanations of its
results, and we demonstrate it through an outlier detection algorithm. | arXiv admin note: substantial text overlap with arXiv:2210.17330 | cs.AI | [
"cs.AI"
] |
||
DeepDelveAI: Identifying AI Related Documents in Large Scale Literature
Data | http://arxiv.org/abs/2408.12871v2 | http://arxiv.org/abs/2408.12871v2 | http://arxiv.org/pdf/2408.12871v2 | 2024-08-23 | 2024-08-28 | [
"Zhou Xiaochen",
"Liang Xingzhou",
"Zou Hui",
"Lu Yi",
"Qu Jingjing"
] | [
"",
"",
"",
"",
""
] | This paper presents DeepDelveAI, a comprehensive dataset specifically curated
to identify AI-related research papers from a large-scale academic literature
database. The dataset was created using an advanced Long Short-Term Memory
(LSTM) model trained on a binary classification task to distinguish between
AI-related and non-AI-related papers. The model was trained and validated on a
vast dataset, achieving high accuracy, precision, recall, and F1-score. The
resulting DeepDelveAI dataset comprises over 9.4 million AI-related papers
published since the Dartmouth Conference, from 1956 to 2024, providing a crucial
resource for analyzing trends, thematic developments, and the evolution of AI
research across various disciplines. | 28 pages and 10 figures | cs.AI | [
"cs.AI"
] |
||
Can AI Assistance Aid in the Grading of Handwritten Answer Sheets? | http://arxiv.org/abs/2408.12870v1 | http://arxiv.org/abs/2408.12870v1 | http://arxiv.org/pdf/2408.12870v1 | 2024-08-23 | 2024-08-23 | [
"Pritam Sil",
"Parag Chaudhuri",
"Bhaskaran Raman"
] | [
"",
"",
""
] | With recent advancements in artificial intelligence (AI), there has been
growing interest in using state of the art (SOTA) AI solutions to provide
assistance in grading handwritten answer sheets. While a few commercial
products exist, the question of whether AI-assistance can actually reduce
grading effort and time has not yet been carefully considered in published
literature. This work introduces an AI-assisted grading pipeline. The pipeline
first uses text detection to automatically detect question regions present in a
question paper PDF. Next, it uses SOTA text detection methods to highlight
important keywords present in the handwritten answer regions of scanned answer
sheets to assist in the grading process. We then evaluate a prototype
implementation of the AI-assisted grading pipeline deployed on an existing
e-learning management platform. The evaluation involves a total of 5 different
real-life examinations across 4 different courses at a reputed institute; it
consists of a total of 42 questions, 17 graders, and 468 submissions. We log
and analyze the grading time for each handwritten answer while using AI
assistance and without it. Our evaluations have shown that, on average, the
graders take 31% less time while grading a single response and 33% less grading
time while grading a single answer sheet using AI assistance. | cs.AI | [
"cs.AI",
"cs.CV"
] |
|||
Obfuscated Memory Malware Detection | http://arxiv.org/abs/2408.12866v1 | http://arxiv.org/abs/2408.12866v1 | http://arxiv.org/pdf/2408.12866v1 | 2024-08-23 | 2024-08-23 | [
"Sharmila S P",
"Aruna Tiwari",
"Narendra S Chaudhari"
] | [
"",
"",
""
] | Information security is highly critical in the current era of
smart-technology-enabled devices, where a day without the internet is hard to
imagine. Fast, affordable internet has made communication easy not only for
legitimate users but also for cybercriminals, who launch attacks along various
dimensions to breach privacy and security. Cybercriminals gain illegal access
and breach users' privacy to harm them in multiple ways. Malware is one such
tool used by hackers to execute their malicious intent, and malware developers
exploit developments in AI technology to cause social harm. In this work, we
intend to show how Artificial Intelligence and
Machine learning can be used to detect and mitigate these cyber-attacks induced
by malware in specific obfuscated malware. We conducted experiments with memory
feature engineering on memory analysis of malware samples. Binary
classification can identify whether a given sample is malware, but only
identifying the type of malware indicates what step should be taken next to
stop it from proceeding with its malicious action. Hence, we
propose a multi-class classification model to detect the three types of
obfuscated malware with an accuracy of 89.07% using the Classic Random Forest
algorithm. To the best of our knowledge, very little work has been done on
classifying multiple obfuscated malware types with a single model. We also
compared our model with a few state-of-the-art models and found it
comparatively better. | 8 pages, 9 figures; presented at the IEEE CCEM Conference | cs.CR | [
"cs.CR",
"cs.AI"
] |
||
Abstract Art Interpretation Using ControlNet | http://arxiv.org/abs/2408.13287v1 | http://arxiv.org/abs/2408.13287v1 | http://arxiv.org/pdf/2408.13287v1 | 2024-08-23 | 2024-08-23 | [
"Rishabh Srivastava",
"Addrish Roy"
] | [
"",
""
] | Our study delves into the fusion of abstract art interpretation and
text-to-image synthesis, addressing the challenge of achieving precise spatial
control over image composition solely through textual prompts. Leveraging the
capabilities of ControlNet, we empower users with finer control over the
synthesis process, enabling enhanced manipulation of synthesized imagery.
Inspired by the minimalist forms found in abstract artworks, we introduce a
novel condition crafted from geometric primitives such as triangles. | 5 pages, 4 figures | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
||
Memory-Efficient LLM Training with Online Subspace Descent | http://arxiv.org/abs/2408.12857v1 | http://arxiv.org/abs/2408.12857v1 | http://arxiv.org/pdf/2408.12857v1 | 2024-08-23 | 2024-08-23 | [
"Kaizhao Liang",
"Bo Liu",
"Lizhang Chen",
"Qiang Liu"
] | [
"",
"",
"",
""
] | Recently, a wide range of memory-efficient LLM training algorithms have
gained substantial popularity. These methods leverage the low-rank structure of
gradients to project optimizer states into a subspace using a projection matrix
found by singular value decomposition (SVD). However, convergence of these
algorithms is highly dependent on the update rules of their projection matrix.
In this work, we provide the \emph{first} convergence guarantee for arbitrary
update rules of projection matrix. This guarantee is generally applicable to
optimizers that can be analyzed with Hamiltonian Descent, including most common
ones, such as LION and Adam. Inspired by our theoretical understanding, we propose
Online Subspace Descent, a new family of subspace descent optimizers without
SVD. Instead of updating the projection matrix with eigenvectors, Online
Subspace Descent updates the projection matrix with online PCA. Online Subspace
Descent is flexible and introduces only minimum overhead to training. We show
that for the task of pretraining LLaMA models ranging from 60M to 7B parameters
on the C4 dataset, Online Subspace Descent achieves lower perplexity and better
downstream tasks performance than state-of-the-art low-rank training methods
across different settings and narrows the gap with full-rank baselines. | Code is available at
https://github.com/kyleliang919/Online-Subspace-Descent | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
||
Online Fair Division with Contextual Bandits | http://arxiv.org/abs/2408.12845v1 | http://arxiv.org/abs/2408.12845v1 | http://arxiv.org/pdf/2408.12845v1 | 2024-08-23 | 2024-08-23 | [
"Arun Verma",
"Indrajit Saha",
"Makoto Yokoo",
"Bryan Kian Hsiang Low"
] | [
"",
"",
"",
""
] | This paper considers a novel online fair division problem involving multiple
agents in which a learner observes an indivisible item that has to be
irrevocably allocated to one of the agents while satisfying a fairness and
efficiency constraint. Existing algorithms assume a small number of items with
a sufficiently large number of copies, which ensures a good utility estimation
for all item-agent pairs. However, such an assumption may not hold in many
real-life applications, e.g., an online platform that has a large number of
users (items) who only use the platform's service providers (agents) a few
times (a few copies of items), which makes it difficult to estimate the utility
for all item-agent pairs. To overcome this challenge, we model the online fair
division problem using contextual bandits, assuming the utility is an unknown
function of the item-agent features. We then propose algorithms for online fair
division with sub-linear regret guarantees. Our experimental results also
verify the different performance aspects of the proposed algorithms. | We study an online fair division problem that has a large number of
items with only a few copies of each item and propose contextual
bandits-based algorithms with sub-linear regret guarantees | cs.LG | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
||
Predicting Affective States from Screen Text Sentiment | http://arxiv.org/abs/2408.12844v1 | http://arxiv.org/abs/2408.12844v1 | http://arxiv.org/pdf/2408.12844v1 | 2024-08-23 | 2024-08-23 | [
"Songyan Teng",
"Tianyi Zhang",
"Simon D'Alfonso",
"Vassilis Kostakos"
] | [
"",
"",
"",
""
] | The proliferation of mobile sensing technologies has enabled the study of
various physiological and behavioural phenomena through unobtrusive data
collection from smartphone sensors. This approach offers real-time insights
into individuals' physical and mental states, creating opportunities for
personalised treatment and interventions. However, the potential of analysing
the textual content viewed on smartphones to predict affective states remains
underexplored. To better understand how the screen text that users are exposed
to and interact with can influence their affects, we investigated a subset of
data obtained from a digital phenotyping study of Australian university
students conducted in 2023. We employed linear regression, zero-shot, and
multi-shot prompting using a large language model (LLM) to analyse
relationships between screen text and affective states. Our findings indicate
that multi-shot prompting substantially outperforms both linear regression and
zero-shot prompting, highlighting the importance of context in affect
prediction. We discuss the value of incorporating textual and sentiment data
for improving affect prediction, providing a basis for future advancements in
understanding smartphone use and wellbeing. | 7 pages | 10.1145/3675094.3678489 | cs.HC | [
"cs.HC",
"cs.AI"
] |
|
COVID-19 Probability Prediction Using Machine Learning: An Infectious
Approach | http://arxiv.org/abs/2408.12841v1 | http://arxiv.org/abs/2408.12841v1 | http://arxiv.org/pdf/2408.12841v1 | 2024-08-23 | 2024-08-23 | [
"Mohsen Asghari Ilani",
"Saba Moftakhar Tehran",
"Ashkan Kavei",
"Arian Radmehr"
] | [
"",
"",
"",
""
] | The ongoing COVID-19 pandemic continues to pose significant challenges to
global public health, despite the widespread availability of vaccines. Early
detection of the disease remains paramount in curbing its transmission and
mitigating its impact on public health systems. In response, this study delves
into the application of advanced machine learning (ML) techniques for
predicting COVID-19 infection probability. We conducted a rigorous
investigation into the efficacy of various ML models, including XGBoost, LGBM,
AdaBoost, Logistic Regression, Decision Tree, RandomForest, CatBoost, KNN, and
Deep Neural Networks (DNN). Leveraging a dataset comprising 4000 samples, with
3200 allocated for training and 800 for testing, our experiment offers
comprehensive insights into the performance of these models in COVID-19
prediction. Our findings reveal that Deep Neural Networks (DNN) emerge as the
top-performing model, exhibiting superior accuracy and recall metrics. With an
impressive accuracy rate of 89%, DNN demonstrates remarkable potential in early
COVID-19 detection. This underscores the efficacy of deep learning approaches
in leveraging complex data patterns to identify COVID-19 infections accurately.
This study underscores the critical role of machine learning, particularly deep
learning methodologies, in augmenting early detection efforts amidst the
ongoing pandemic. The success of DNN in accurately predicting COVID-19
infection probability highlights the importance of continued research and
development in leveraging advanced technologies to combat infectious diseases. | cs.LG | [
"cs.LG",
"cs.AI"
] |
|||
Exploring Machine Learning Models for Lung Cancer Level Classification:
A comparative ML Approach | http://arxiv.org/abs/2408.12838v1 | http://arxiv.org/abs/2408.12838v1 | http://arxiv.org/pdf/2408.12838v1 | 2024-08-23 | 2024-08-23 | [
"Mohsen Asghari Ilani",
"Saba Moftakhar Tehran",
"Ashkan Kavei",
"Hamed Alizadegan"
] | [
"",
"",
"",
""
] | This paper explores machine learning (ML) models for classifying lung cancer
levels to improve diagnostic accuracy and prognosis. Through parameter tuning
and rigorous evaluation, we assess various ML algorithms. Techniques like
minimum child weight and learning rate monitoring were used to reduce
overfitting and optimize performance. Our findings highlight the robust
performance of Deep Neural Network (DNN) models across all phases. Ensemble
methods, including voting and bagging, also showed promise in enhancing
predictive accuracy and robustness. However, Support Vector Machine (SVM)
models with the Sigmoid kernel faced challenges, indicating a need for further
refinement. Overall, our study provides insights into ML-based lung cancer
classification, emphasizing the importance of parameter tuning to optimize
model performance and improve diagnostic accuracy in oncological care. | cs.AI | [
"cs.AI"
] |
|||
Underwater SONAR Image Classification and Analysis using LIME-based
Explainable Artificial Intelligence | http://arxiv.org/abs/2408.12837v1 | http://arxiv.org/abs/2408.12837v1 | http://arxiv.org/pdf/2408.12837v1 | 2024-08-23 | 2024-08-23 | [
"Purushothaman Natarajan",
"Athira Nambiar"
] | [
"",
""
] | Deep learning techniques have revolutionized image classification by
mimicking human cognition and automating complex decision-making processes.
However, the deployment of AI systems in the wild, especially in high-security
domains such as defence, is curbed by the lack of explainability of the model.
To this end, eXplainable AI (XAI) is an emerging area of research that is
intended to explore the hidden black-box nature of deep neural
networks. This paper explores the application of the eXplainable Artificial
Intelligence (XAI) tool to interpret the underwater image classification
results, one of the first works in the domain to the best of our knowledge. Our
study delves into the realm of SONAR image classification using a custom
dataset derived from diverse sources, including the Seabed Objects KLSG
dataset, the camera SONAR dataset, the mine SONAR images dataset, and the SCTD
dataset. An extensive analysis of transfer learning techniques for image
classification using benchmark Convolutional Neural Network (CNN) architectures
such as VGG16, ResNet50, InceptionV3, DenseNet121, etc. is carried out. On top
of this classification model, a post-hoc XAI technique, viz. Local
Interpretable Model-Agnostic Explanations (LIME), is incorporated to provide
transparent justifications for the model's decisions by perturbing input data
locally to see how predictions change. Furthermore, Submodular Picks LIME
(SP-LIME), a version of LIME specific to images that perturbs the image based
on submodular picks, is also extensively studied. To this end, two
submodular optimization algorithms, i.e., Quickshift and Simple Linear Iterative
Clustering (SLIC), are leveraged for the submodular picks. The extensive
analysis of XAI techniques highlights the interpretability of the results in a more
human-compliant way, thus boosting our confidence and reliability. | 55 pages, 9 tables, 18 figures | cs.CV | [
"cs.CV",
"cs.AI",
"cs.HC",
"cs.LG",
"68T07 (Primary) 68T45, 68U10 (Secondary)",
"I.4.8; I.2.10; I.5.4"
] |
||
CLLMFS: A Contrastive Learning enhanced Large Language Model Framework
for Few-Shot Named Entity Recognition | http://arxiv.org/abs/2408.12834v1 | http://arxiv.org/abs/2408.12834v1 | http://arxiv.org/pdf/2408.12834v1 | 2024-08-23 | 2024-08-23 | [
"Yafeng Zhang",
"Zilan Yu",
"Yuang Huang",
"Jing Tang"
] | [
"",
"",
"",
""
] | Few-shot Named Entity Recognition (NER), the task of identifying named
entities with only a limited amount of labeled data, has gained increasing
significance in natural language processing. While existing methodologies have
shown some effectiveness, such as enriching label semantics through various
prompting modes or employing metric learning techniques, their performance
exhibits limited robustness across diverse domains due to the lack of rich
knowledge in their pre-trained models. To address this issue, we propose
CLLMFS, a Contrastive Learning enhanced Large Language Model (LLM) Framework
for Few-Shot Named Entity Recognition, achieving promising results with limited
training data. Considering the impact of LLM's internal representations on
downstream tasks, CLLMFS integrates Low-Rank Adaptation (LoRA) and contrastive
learning mechanisms specifically tailored for few-shot NER. By enhancing the
model's internal representations, CLLMFS effectively improves both entity
boundary awareness ability and entity recognition accuracy. Our method has
achieved state-of-the-art performance improvements on F1-score ranging from
2.58\% to 97.74\% over existing best-performing methods across several
recognized benchmarks. Furthermore, through cross-domain NER experiments
conducted on multiple datasets, we have further validated the robust
generalization capability of our method. Our code will be released in the near
future. | 27th European Conference on Artificial Intelligence | cs.CL | [
"cs.CL",
"cs.AI"
] |
||
Examining the Commitments and Difficulties Inherent in Multimodal
Foundation Models for Street View Imagery | http://arxiv.org/abs/2408.12821v1 | http://arxiv.org/abs/2408.12821v1 | http://arxiv.org/pdf/2408.12821v1 | 2024-08-23 | 2024-08-23 | [
"Zhenyuan Yang",
"Xuhui Lin",
"Qinyi He",
"Ziye Huang",
"Zhengliang Liu",
"Hanqi Jiang",
"Peng Shu",
"Zihao Wu",
"Yiwei Li",
"Stephen Law",
"Gengchen Mai",
"Tianming Liu",
"Tao Yang"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | The emergence of Large Language Models (LLMs) and multimodal foundation
models (FMs) has generated heightened interest in their applications that
integrate vision and language. This paper investigates the capabilities of
ChatGPT-4V and Gemini Pro for Street View Imagery, Built Environment, and
Interior by evaluating their performance across various tasks. The assessments
include street furniture identification, pedestrian and car counts, and road
width measurement in Street View Imagery; building function classification,
building age analysis, building height analysis, and building structure
classification in the Built Environment; and interior room classification,
interior design style analysis, interior furniture counts, and interior length
measurement in Interior. The results reveal proficiency in length measurement,
style analysis, question answering, and basic image understanding, but
highlight limitations in detailed recognition and counting tasks. While
zero-shot learning shows potential, performance varies depending on the problem
domains and image complexities. This study provides new insights into the
strengths and weaknesses of multimodal foundation models for practical
challenges in Street View Imagery, Built Environment, and Interior. Overall,
the findings demonstrate foundational multimodal intelligence, emphasizing the
potential of FMs to drive forward interdisciplinary applications at the
intersection of computer vision and language. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
Staircase Cascaded Fusion of Lightweight Local Pattern Recognition and
Long-Range Dependencies for Structural Crack Segmentation | http://arxiv.org/abs/2408.12815v1 | http://arxiv.org/abs/2408.12815v1 | http://arxiv.org/pdf/2408.12815v1 | 2024-08-23 | 2024-08-23 | [
"Hui Liu",
"Chen Jia",
"Fan Shi",
"Xu Cheng",
"Mianzhao Wang",
"Shengyong Chen"
] | [
"",
"",
"",
"",
"",
""
] | Detecting cracks with pixel-level precision for key structures is a
significant challenge, as existing methods struggle to effectively integrate
local textures and pixel dependencies of cracks. Furthermore, these methods
often possess numerous parameters and substantial computational requirements,
complicating deployment on edge devices. In this paper, we propose a staircase
cascaded fusion crack segmentation network (CrackSCF) that generates
high-quality crack segmentation maps using minimal computational resources. We
constructed a staircase cascaded fusion module that effectively captures local
patterns of cracks and long-range dependencies of pixels, and it can suppress
background noise well. To reduce the computational resources required by the
model, we introduced a lightweight convolution block, which replaces all
convolution operations in the network, significantly reducing the required
computation and parameters without affecting the network's performance. To
evaluate our method, we created a challenging benchmark dataset called TUT and
conducted experiments on this dataset and five other public datasets. The
experimental results indicate that our method offers significant advantages
over existing methods, especially in handling background noise interference and
detailed crack segmentation. The F1 and mIoU scores on the TUT dataset are
0.8382 and 0.8473, respectively, achieving state-of-the-art (SOTA) performance
while requiring the least computational resources. The code and dataset are
available at https://github.com/Karl1109/CrackSCF. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
DutyTTE: Deciphering Uncertainty in Origin-Destination Travel Time
Estimation | http://arxiv.org/abs/2408.12809v1 | http://arxiv.org/abs/2408.12809v1 | http://arxiv.org/pdf/2408.12809v1 | 2024-08-23 | 2024-08-23 | [
"Xiaowei Mao",
"Yan Lin",
"Shengnan Guo",
"Yubin Chen",
"Xingyu Xian",
"Haomin Wen",
"Qisen Xu",
"Youfang Lin",
"Huaiyu Wan"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Uncertainty quantification in travel time estimation (TTE) aims to estimate
the confidence interval for travel time, given the origin (O), destination (D),
and departure time (T). Accurately quantifying this uncertainty requires
generating the most likely path and assessing travel time uncertainty along the
path. This involves two main challenges: 1) Predicting a path that aligns with
the ground truth, and 2) modeling the impact of travel time in each segment on
overall uncertainty under varying conditions. We propose DutyTTE to address
these challenges. For the first challenge, we introduce a deep reinforcement
learning method to improve alignment between the predicted path and the ground
truth, providing more accurate travel time information from road segments to
improve TTE. For the second challenge, we propose a mixture of experts guided
uncertainty quantification mechanism to better capture travel time uncertainty
for each segment under varying contexts. Additionally, we calibrate our results
using Hoeffding's upper-confidence bound to provide statistical guarantees for
the estimated confidence intervals. Extensive experiments on two real-world
datasets demonstrate the superiority of our proposed method. | 7 pages | cs.AI | [
"cs.AI"
] |
||
VALE: A Multimodal Visual and Language Explanation Framework for Image
Classifiers using eXplainable AI and Language Models | http://arxiv.org/abs/2408.12808v1 | http://arxiv.org/abs/2408.12808v1 | http://arxiv.org/pdf/2408.12808v1 | 2024-08-23 | 2024-08-23 | [
"Purushothaman Natarajan",
"Athira Nambiar"
] | [
"",
""
] | Deep Neural Networks (DNNs) have revolutionized various fields by enabling
task automation and reducing human error. However, their internal workings and
decision-making processes remain obscure due to their black box nature.
Consequently, the lack of interpretability limits the application of these
models in high-risk scenarios. To address this issue, the emerging field of
eXplainable Artificial Intelligence (XAI) aims to explain and interpret the
inner workings of DNNs. Despite advancements, XAI faces challenges such as the
semantic gap between machine and human understanding, the trade-off between
interpretability and performance, and the need for context-specific
explanations. To overcome these limitations, we propose a novel multimodal
framework named VALE (Visual and Language Explanation). VALE integrates
explainable AI techniques with advanced language models to provide
comprehensive explanations. This framework utilizes visual explanations from
XAI tools, an advanced zero-shot image segmentation model, and a visual
language model to generate corresponding textual explanations. By combining
visual and textual explanations, VALE bridges the semantic gap between machine
outputs and human interpretation, delivering results that are more
comprehensible to users. In this paper, we conduct a pilot study of the VALE
framework for image classification tasks. Specifically, Shapley Additive
Explanations (SHAP) are used to identify the most influential regions in
classified images. The object of interest is then extracted using the Segment
Anything Model (SAM), and explanations are generated using state-of-the-art
pre-trained Vision-Language Models (VLMs). Extensive experimental studies are
performed on two datasets: the ImageNet dataset and a custom underwater SONAR
image dataset, demonstrating VALE's real-world applicability in underwater image
classification. | 15 pages, 10 tables, 3 figures | cs.CV | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG",
"68T07 (Primary) 68T45, 68U10 (Secondary)",
"I.4.8; I.2.10; I.5.4"
] |
||
Is Generative AI the Next Tactical Cyber Weapon For Threat Actors?
Unforeseen Implications of AI Generated Cyber Attacks | http://arxiv.org/abs/2408.12806v1 | http://arxiv.org/abs/2408.12806v1 | http://arxiv.org/pdf/2408.12806v1 | 2024-08-23 | 2024-08-23 | [
"Yusuf Usman",
"Aadesh Upadhyay",
"Prashnna Gyawali",
"Robin Chataut"
] | [
"",
"",
"",
""
] | In an era where digital threats are increasingly sophisticated, the
intersection of Artificial Intelligence and cybersecurity presents both
promising defenses and potent dangers. This paper delves into the escalating
threat posed by the misuse of AI, specifically through the use of Large
Language Models (LLMs). This study details various techniques like the switch
method and character play method, which can be exploited by cybercriminals to
generate and automate cyber attacks. Through a series of controlled
experiments, the paper demonstrates how these models can be manipulated to
bypass ethical and privacy safeguards to effectively generate cyber attacks
such as social engineering, malicious code, payload generation, and spyware. By
testing these AI generated attacks on live systems, the study assesses their
effectiveness and the vulnerabilities they exploit, offering a practical
perspective on the risks AI poses to critical infrastructure. We also introduce
Occupy AI, a customized, finetuned LLM specifically engineered to automate and
execute cyberattacks. This specialized AI driven tool is adept at crafting
steps and generating executable code for a variety of cyber threats, including
phishing, malware injection, and system exploitation. The results underscore
the urgency for ethical AI practices, robust cybersecurity measures, and
regulatory oversight to mitigate AI related threats. This paper aims to elevate
awareness within the cybersecurity community about the evolving digital threat
landscape, advocating for proactive defense strategies and responsible AI
development to protect against emerging cyber threats. | Journal Paper | cs.CR | [
"cs.CR",
"cs.AI",
"Primary 03C90, Secondary 03-02,",
"I.2"
] |
||
A Safe Self-evolution Algorithm for Autonomous Driving Based on
Data-Driven Risk Quantification Model | http://arxiv.org/abs/2408.12805v1 | http://arxiv.org/abs/2408.12805v1 | http://arxiv.org/pdf/2408.12805v1 | 2024-08-23 | 2024-08-23 | [
"Shuo Yang",
"Shizhen Li",
"Yanjun Huang",
"Hong Chen"
] | [
"",
"",
"",
""
] | Autonomous driving systems with self-evolution capabilities have the
potential to independently evolve in complex and open environments, allowing
them to handle more unknown scenarios. However, as a result of the safety-performance
trade-off mechanism of evolutionary algorithms, it is difficult to ensure safe
exploration without sacrificing the improvement ability. This problem is
especially prominent in dynamic traffic scenarios. Therefore, this paper
proposes a safe self-evolution algorithm for autonomous driving based on
data-driven risk quantification model. Specifically, a risk quantification
model based on the attention mechanism is proposed by modeling the way humans
perceive risks during driving, with the idea of achieving safety situation
estimation of the surrounding environment through a data-driven approach. To
prevent the impact of over-conservative safety guarding policies on the
self-evolution capability of the algorithm, a safety-evolutionary
decision-control integration algorithm with adjustable safety limits is
proposed, and the proposed risk quantification model is integrated into it.
Simulation and real-vehicle experiment results illustrate the effectiveness of
the proposed method. The results show that the proposed algorithm can generate
safe and reasonable actions in a variety of complex scenarios and guarantee
safety without losing the evolutionary potential of learning-based autonomous
driving systems. | cs.AI | [
"cs.AI"
] |
|||
Multi-Treatment Multi-Task Uplift Modeling for Enhancing User Growth | http://arxiv.org/abs/2408.12803v1 | http://arxiv.org/abs/2408.12803v1 | http://arxiv.org/pdf/2408.12803v1 | 2024-08-23 | 2024-08-23 | [
"Yuxiang Wei",
"Zhaoxin Qiu",
"Yingjie Li",
"Yuke Sun",
"Xiaoling Li"
] | [
"",
"",
"",
"",
""
] | As a key component in boosting online user growth, uplift modeling aims to
measure individual user responses (e.g., whether to play the game) to various
treatments, such as gaming bonuses, thereby enhancing business outcomes.
However, previous research typically considers a single-task, single-treatment
setting, where only one treatment exists and the overall treatment effect is
measured by a single type of user response. In this paper, we propose a
Multi-Treatment Multi-Task (MTMT) uplift network to estimate treatment effects
in a multi-task scenario. We identify the multi-treatment problem as a causal
inference problem with a tiered response, comprising a base effect (from
offering a treatment) and an incremental effect (from offering a specific type
of treatment), where the base effect can be numerically much larger than the
incremental effect. Specifically, MTMT separately encodes user features and
treatments. The user feature encoder uses a multi-gate mixture of experts
(MMOE) network to encode relevant user features, explicitly learning inter-task
relations. The resultant embeddings are used to measure natural responses per
task. Furthermore, we introduce a treatment-user feature interaction module to
model correlations between each treatment and user feature. Consequently, we
separately measure the base and incremental treatment effect for each task
based on the produced treatment-aware representations. Experimental results
based on an offline public dataset and an online proprietary dataset
demonstrate the effectiveness of MTMT in single/multi-treatment and
single/multi-task settings. Additionally, MTMT has been deployed in our gaming
platform to improve user experience. | cs.LG | [
"cs.LG",
"cs.AI",
"cs.IR"
] |
|||
Less for More: Enhancing Preference Learning in Generative Language
Models with Automated Self-Curation of Training Corpora | http://arxiv.org/abs/2408.12799v1 | http://arxiv.org/abs/2408.12799v1 | http://arxiv.org/pdf/2408.12799v1 | 2024-08-23 | 2024-08-23 | [
"JoonHo Lee",
"JuYoun Son",
"Juree Seok",
"Wooseok Jang",
"Yeong-Dae Kwon"
] | [
"",
"",
"",
"",
""
] | Ambiguity in language presents challenges in developing more enhanced
language models, particularly in preference learning, where variability among
annotators results in inconsistently annotated datasets used for model
alignment. To address this issue, we introduce a self-curation method that
preprocesses annotated datasets by leveraging proxy models trained directly on
these datasets. Our method enhances preference learning by automatically
detecting and removing ambiguous annotations within the dataset. The proposed
approach is validated through extensive experiments, demonstrating a marked
improvement in performance across various instruction-following tasks. Our work
provides a straightforward and reliable method to overcome annotation
inconsistencies, serving as an initial step towards the development of more
advanced preference learning techniques. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large
Language Models | http://arxiv.org/abs/2408.12798v1 | http://arxiv.org/abs/2408.12798v1 | http://arxiv.org/pdf/2408.12798v1 | 2024-08-23 | 2024-08-23 | [
"Yige Li",
"Hanxun Huang",
"Yunhan Zhao",
"Xingjun Ma",
"Jun Sun"
] | [
"",
"",
"",
"",
""
] | Generative Large Language Models (LLMs) have made significant strides across
various tasks, but they remain vulnerable to backdoor attacks, where specific
triggers in the prompt cause the LLM to generate adversary-desired responses.
While most backdoor research has focused on vision or text classification
tasks, backdoor attacks in text generation have been largely overlooked. In
this work, we introduce \textit{BackdoorLLM}, the first comprehensive benchmark
for studying backdoor attacks on LLMs. \textit{BackdoorLLM} features: 1) a
repository of backdoor benchmarks with a standardized training pipeline, 2)
diverse attack strategies, including data poisoning, weight poisoning, hidden
state attacks, and chain-of-thought attacks, 3) extensive evaluations with over
200 experiments on 8 attacks across 7 scenarios and 6 model architectures, and
4) key insights into the effectiveness and limitations of backdoors in LLMs. We
hope \textit{BackdoorLLM} will raise awareness of backdoor threats and
contribute to advancing AI safety. The code is available at
\url{https://github.com/bboylyg/BackdoorLLM}. | cs.AI | [
"cs.AI"
] |
|||
SIn-NeRF2NeRF: Editing 3D Scenes with Instructions through Segmentation
and Inpainting | http://arxiv.org/abs/2408.13285v1 | http://arxiv.org/abs/2408.13285v1 | http://arxiv.org/pdf/2408.13285v1 | 2024-08-23 | 2024-08-23 | [
"Jiseung Hong",
"Changmin Lee",
"Gyusang Yu"
] | [
"",
"",
""
] | TL;DR Perform 3D object editing selectively by disentangling it from the
background scene. Instruct-NeRF2NeRF (in2n) is a promising method that enables
editing of 3D scenes composed of Neural Radiance Field (NeRF) using text
prompts. However, it is challenging to perform geometrical modifications such
as shrinking, scaling, or moving on both the background and object
simultaneously. In this project, we enable geometrical changes of objects
within the 3D scene by selectively editing the object after separating it from
the scene. We perform object segmentation and background inpainting
respectively, and demonstrate various examples of freely resizing or moving
disentangled objects within the three-dimensional space. | Code is available at: https://github.com/KAISTChangmin/SIn-NeRF2NeRF | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
Real-Time Posture Monitoring and Risk Assessment for Manual Lifting
Tasks Using MediaPipe and LSTM | http://arxiv.org/abs/2408.12796v1 | http://arxiv.org/abs/2408.12796v1 | http://arxiv.org/pdf/2408.12796v1 | 2024-08-23 | 2024-08-23 | [
"Ereena Bagga",
"Ang Yang"
] | [
"",
""
] | This research focuses on developing a real-time posture monitoring and risk
assessment system for manual lifting tasks using advanced AI and computer
vision technologies. Musculoskeletal disorders (MSDs) are a significant concern
for workers involved in manual lifting, and traditional methods for posture
correction are often inadequate due to delayed feedback and lack of
personalized assessment. Our proposed solution integrates AI-driven posture
detection, detailed keypoint analysis, risk level determination, and real-time
feedback delivered through a user-friendly web interface. The system aims to
improve posture, reduce the risk of MSDs, and enhance user engagement. The
research involves comprehensive data collection, model training, and iterative
development to ensure high accuracy and user satisfaction. The solution's
effectiveness is evaluated against existing methodologies, demonstrating
significant improvements in real-time feedback and risk assessment. This study
contributes to the field by offering a novel approach to posture correction
that addresses existing gaps and provides practical, immediate benefits to
users. | Proceedings of the 1st International Workshop on Multimedia Computing
for Health and Medicine at ACM MM'24 | cs.AI | [
"cs.AI",
"cs.CV"
] |
||
Event Detection via Probability Density Function Regression | http://arxiv.org/abs/2408.12792v1 | http://arxiv.org/abs/2408.12792v1 | http://arxiv.org/pdf/2408.12792v1 | 2024-08-23 | 2024-08-23 | [
"Clark Peng",
"Tolga Dinçer"
] | [
"",
""
] | In the domain of time series analysis, particularly in event detection tasks,
current methodologies predominantly rely on segmentation-based approaches,
which predict the class label for each individual timesteps and use the
changepoints of these labels to detect events. However, these approaches may
not effectively detect the precise onset and offset of events within the data
and suffer from class imbalance problems. This study introduces a generalized
regression-based approach to reframe the time-interval-defined event detection
problem. Inspired by heatmap regression techniques from computer vision, our
approach aims to predict probability densities at event locations rather than
class labels across the entire time series. The primary aim of this approach is
to improve the accuracy of event detection methods, particularly for
long-duration events where identifying the onset and offset is more critical
than classifying individual event states. We demonstrate that regression-based
approaches outperform segmentation-based methods across various
state-of-the-art baseline networks and datasets, offering a more effective
solution for specific event detection tasks. | cs.AI | [
"cs.AI",
"cs.LG",
"stat.ML",
"I.2.0; I.5.4"
] |
|||
Context-Aware Temporal Embedding of Objects in Video Data | http://arxiv.org/abs/2408.12789v1 | http://arxiv.org/abs/2408.12789v1 | http://arxiv.org/pdf/2408.12789v1 | 2024-08-23 | 2024-08-23 | [
"Ahnaf Farhan",
"M. Shahriar Hossain"
] | [
"",
""
] | In video analysis, understanding the temporal context is crucial for
recognizing object interactions, event patterns, and contextual changes over
time. The proposed model leverages adjacency and semantic similarities between
objects from neighboring video frames to construct context-aware temporal
object embeddings. Unlike traditional methods that rely solely on visual
appearance, our temporal embedding model considers the contextual relationships
between objects, creating a meaningful embedding space where temporally
connected objects' vectors are positioned in proximity. Empirical studies
demonstrate that our context-aware temporal embeddings can be used in
conjunction with conventional visual embeddings to enhance the effectiveness of
downstream applications. Moreover, the embeddings can be used to narrate a
video using a Large Language Model (LLM). This paper describes the intricate
details of the proposed objective function to generate context-aware temporal
object embeddings for video data and showcases the potential applications of
the generated embeddings in video analysis and object classification tasks. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
LLM-PBE: Assessing Data Privacy in Large Language Models | http://arxiv.org/abs/2408.12787v1 | http://arxiv.org/abs/2408.12787v1 | http://arxiv.org/pdf/2408.12787v1 | 2024-08-23 | 2024-08-23 | [
"Qinbin Li",
"Junyuan Hong",
"Chulin Xie",
"Jeffrey Tan",
"Rachel Xin",
"Junyi Hou",
"Xavier Yin",
"Zhun Wang",
"Dan Hendrycks",
"Zhangyang Wang",
"Bo Li",
"Bingsheng He",
"Dawn Song"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Large Language Models (LLMs) have become integral to numerous domains,
significantly advancing applications in data management, mining, and analysis.
Their profound capabilities in processing and interpreting complex language
data, however, bring to light pressing concerns regarding data privacy,
especially the risk of unintentional training data leakage. Despite the
critical nature of this issue, there has been no existing literature to offer a
comprehensive assessment of data privacy risks in LLMs. Addressing this gap,
our paper introduces LLM-PBE, a toolkit crafted specifically for the systematic
evaluation of data privacy risks in LLMs. LLM-PBE is designed to analyze
privacy across the entire lifecycle of LLMs, incorporating diverse attack and
defense strategies, and handling various data types and metrics. Through
detailed experimentation with multiple LLMs, LLM-PBE facilitates an in-depth
exploration of data privacy concerns, shedding light on influential factors
such as model size, data characteristics, and evolving temporal dimensions.
This study not only enriches the understanding of privacy issues in LLMs but
also serves as a vital resource for future research in the field. Aimed at
enhancing the breadth of knowledge in this area, the findings, resources, and
our full technical report are made available at https://llm-pbe.github.io/,
providing an open platform for academic and practical advancements in LLM
privacy assessment. | cs.CR | [
"cs.CR",
"cs.AI"
] |
|||
The Model Mastery Lifecycle: A Framework for Designing Human-AI
Interaction | http://arxiv.org/abs/2408.12781v1 | http://arxiv.org/abs/2408.12781v1 | http://arxiv.org/pdf/2408.12781v1 | 2024-08-23 | 2024-08-23 | [
"Mark Chignell",
"Mu-Huan Miles Chung",
"Jaturong Kongmanee",
"Khilan Jerath",
"Abhay Raman"
] | [
"",
"",
"",
"",
""
] | The utilization of AI in an increasing number of fields is the latest
iteration of a long process, where machines and systems have been replacing
humans, or changing the roles that they play, in various tasks. Although humans
are often resistant to technological innovation, especially in workplaces,
there is a general trend towards increasing automation, and more recently, AI.
AI is now capable of carrying out, or assisting with, many tasks that used to
be regarded as exclusively requiring human expertise. In this paper we consider
the case of tasks that could be performed either by human experts or by AI and
locate them on a continuum running from exclusively human task performance at
one end to AI autonomy on the other, with a variety of forms of human-AI
interaction between those extremes. Implementation of AI is constrained by the
context of the systems and workflows that it will be embedded within. There is
an urgent need for methods to determine how AI should be used in different
situations and to develop appropriate methods of human-AI interaction so that
humans and AI can work together effectively to perform tasks. In response to
the evolving landscape of AI progress and increasing mastery, we introduce an
AI Mastery Lifecycle framework and discuss its implications for human-AI
interaction. The framework provides guidance on human-AI task allocation and
how human-AI interfaces need to adapt to improvements in AI task performance
over time. Within the framework we identify a zone of uncertainty where the
issues of human-AI task allocation and user interface design are likely to be
most challenging. | cs.HC | [
"cs.HC",
"cs.AI",
"cs.LG"
] |
|||
Investigating LLM Applications in E-Commerce | http://arxiv.org/abs/2408.12779v1 | http://arxiv.org/abs/2408.12779v1 | http://arxiv.org/pdf/2408.12779v1 | 2024-08-23 | 2024-08-23 | [
"Chester Palen-Michel",
"Ruixiang Wang",
"Yipeng Zhang",
"David Yu",
"Canran Xu",
"Zhe Wu"
] | [
"",
"",
"",
"",
"",
""
] | The emergence of Large Language Models (LLMs) has revolutionized natural
language processing in various applications especially in e-commerce. One
crucial step before the application of such LLMs in these fields is to
understand and compare the performance in different use cases in such tasks.
This paper explored the efficacy of LLMs in the e-commerce domain, focusing on
instruction-tuning an open source LLM model with public e-commerce datasets of
varying sizes and comparing the performance with the conventional models
prevalent in industrial applications. We conducted a comprehensive comparison
between LLMs and traditional pre-trained language models across specific tasks
intrinsic to the e-commerce domain, namely classification, generation,
summarization, and named entity recognition (NER). Furthermore, we examined the
effectiveness of the current niche industrial application of very large LLM,
using in-context learning, in e-commerce specific tasks. Our findings indicate
that few-shot inference with very large LLMs often does not outperform
fine-tuning smaller pre-trained models, underscoring the importance of
task-specific model optimization. Additionally, we investigated different
training methodologies such as single-task training, mixed-task training, and
LoRA merging both within domain/tasks and between different tasks. Through
rigorous experimentation and analysis, this paper offers valuable insights into
the potential effectiveness of LLMs to advance natural language processing
capabilities within the e-commerce industry. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
Data-Centric Approach to Constrained Machine Learning: A Case Study on
Conway's Game of Life | http://arxiv.org/abs/2408.12778v1 | http://arxiv.org/abs/2408.12778v1 | http://arxiv.org/pdf/2408.12778v1 | 2024-08-23 | 2024-08-23 | [
"Anton Bibin",
"Anton Dereventsov"
] | [
"",
""
] | This paper focuses on a data-centric approach to machine learning
applications in the context of Conway's Game of Life. Specifically, we consider
the task of training a minimal architecture network to learn the transition
rules of Game of Life for a given number of steps ahead, which is known to be
challenging due to restrictions on the allowed number of trainable parameters.
An extensive quantitative analysis showcases the benefits of utilizing a
strategically designed training dataset, with its advantages persisting
regardless of other parameters of the learning configuration, such as network
initialization weights or optimization algorithm. Importantly, our findings
highlight the integral role of domain expert insights in creating effective
machine learning applications for constrained real-world scenarios. | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.IR"
] |
|||
Environment-Centric Active Inference | http://arxiv.org/abs/2408.12777v1 | http://arxiv.org/abs/2408.12777v1 | http://arxiv.org/pdf/2408.12777v1 | 2024-08-23 | 2024-08-23 | [
"Kanako Esaki",
"Tadayuki Matsumura",
"Takeshi Kato",
"Shunsuke Minusa",
"Yang Shao",
"Hiroyuki Mizuno"
] | [
"",
"",
"",
"",
"",
""
] | To handle unintended changes in the environment by agents, we propose an
environment-centric active inference (EC-AIF) in which the Markov Blanket of
active inference is defined starting from the environment. In normal active
inference, the Markov Blanket is defined starting from the agent. That is,
first the agent was defined as the entity that performs the "action" such as a
robot or a person, then the environment was defined as other people or objects
that are directly affected by the agent's "action," and the boundary between
the agent and the environment was defined as the Markov Blanket. This
agent-centric definition does not allow the agent to respond to unintended
changes in the environment caused by factors outside of the defined
environment. In the proposed EC-AIF, there is no entity corresponding to an
agent. The environment includes all observable things, including people and
things conventionally considered to be the environment, as well as entities
that perform "actions" such as robots and people. Accordingly, all states,
including robots and people, are included in inference targets, eliminating
unintended changes in the environment. The EC-AIF was applied to a robot arm
and validated with an object transport task by the robot arm. The results
showed that the robot arm successfully transported objects while responding to
changes in the target position of the object and to changes in the orientation
of another robot arm. | 14 pages, 9 figures | cs.RO | [
"cs.RO",
"cs.AI"
] |
||
Intelligent OPC Engineer Assistant for Semiconductor Manufacturing | http://arxiv.org/abs/2408.12775v2 | http://arxiv.org/abs/2408.12775v2 | http://arxiv.org/pdf/2408.12775v2 | 2024-08-23 | 2024-08-27 | [
"Guojin Chen",
"Haoyu Yang",
"Bei Yu",
"Haoxing Ren"
] | [
"",
"",
"",
""
] | Advancements in chip design and manufacturing have enabled the processing of
complex tasks such as deep learning and natural language processing, paving the
way for the development of artificial general intelligence (AGI). AI, on the
other hand, can be leveraged to innovate and streamline semiconductor
technology from planning and implementation to manufacturing. In this paper, we
present \textit{Intelligent OPC Engineer Assistant}, an AI/LLM-powered
methodology designed to solve the core manufacturing-aware optimization problem
known as optical proximity correction (OPC). The methodology involves a
reinforcement learning-based OPC recipe search and a customized multi-modal
agent system for recipe summarization. Experiments demonstrate that our
methodology can efficiently build OPC recipes on various chip designs with
specially handled design topologies, a task that typically requires the
full-time effort of OPC engineers with years of experience. | cs.AI | [
"cs.AI",
"cs.AR"
] |
|||
Symmetric masking strategy enhances the performance of Masked Image
Modeling | http://arxiv.org/abs/2408.12772v1 | http://arxiv.org/abs/2408.12772v1 | http://arxiv.org/pdf/2408.12772v1 | 2024-08-23 | 2024-08-23 | [
"Khanh-Binh Nguyen",
"Chae Jung Park"
] | [
"",
""
] | Masked Image Modeling (MIM) is a technique in self-supervised learning that
focuses on acquiring detailed visual representations from unlabeled images by
estimating the missing pixels in randomly masked sections. It has proven to be
a powerful tool for the preliminary training of Vision Transformers (ViTs),
yielding impressive results across various tasks. Nevertheless, most MIM
methods heavily depend on the random masking strategy to formulate the pretext
task. This strategy necessitates numerous trials to ascertain the optimal
dropping ratio, which can be resource-intensive, requiring the model to be
pre-trained for anywhere between 800 and 1600 epochs. Furthermore, this approach
may not be suitable for all datasets. In this work, we propose a new masking
strategy that effectively helps the model capture global and local features.
Based on this masking strategy, SymMIM, our proposed training pipeline for MIM
is introduced. SymMIM achieves a new SOTA accuracy of 85.9\% on ImageNet using
ViT-Large and surpasses previous SOTA across downstream tasks such as image
classification, semantic segmentation, object detection, instance segmentation
tasks, and so on. | Accepted at ICPR 2024 | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
When In-memory Computing Meets Spiking Neural Networks -- A Perspective
on Device-Circuit-System-and-Algorithm Co-design | http://arxiv.org/abs/2408.12767v1 | http://arxiv.org/abs/2408.12767v1 | http://arxiv.org/pdf/2408.12767v1 | 2024-08-22 | 2024-08-22 | [
"Abhishek Moitra",
"Abhiroop Bhattacharjee",
"Yuhang Li",
"Youngeun Kim",
"Priyadarshini Panda"
] | [
"",
"",
"",
"",
""
] | This review explores the intersection of bio-plausible artificial
intelligence in the form of Spiking Neural Networks (SNNs) with the analog
In-Memory Computing (IMC) domain, highlighting their collective potential for
low-power edge computing environments. Through detailed investigation at the
device, circuit, and system levels, we highlight the pivotal synergies between
SNNs and IMC architectures. Additionally, we emphasize the critical need for
comprehensive system-level analyses, considering the inter-dependencies between
algorithms, devices, circuit & system parameters, crucial for optimal
performance. An in-depth analysis leads to identification of key system-level
bottlenecks arising from device limitations which can be addressed using
SNN-specific algorithm-hardware co-design techniques. This review underscores
the imperative for holistic device to system design space co-exploration,
highlighting the critical aspects of hardware and algorithm research endeavors
for low-power neuromorphic solutions. | 19 Pages, 13 Figures | cs.NE | [
"cs.NE",
"cs.AI",
"cs.AR",
"cs.LG"
] |
||
Assessing Modality Bias in Video Question Answering Benchmarks with
Multimodal Large Language Models | http://arxiv.org/abs/2408.12763v1 | http://arxiv.org/abs/2408.12763v1 | http://arxiv.org/pdf/2408.12763v1 | 2024-08-22 | 2024-08-22 | [
"Jean Park",
"Kuk Jin Jang",
"Basam Alasaly",
"Sriharsha Mopidevi",
"Andrew Zolensky",
"Eric Eaton",
"Insup Lee",
"Kevin Johnson"
] | [
"",
"",
"",
"",
"",
"",
"",
""
] | Multimodal large language models (MLLMs) can simultaneously process visual,
textual, and auditory data, capturing insights that complement human analysis.
However, existing video question-answering (VidQA) benchmarks and datasets
often exhibit a bias toward a single modality, despite the goal of requiring
advanced reasoning skills that integrate diverse modalities to answer the
queries. In this work, we introduce the modality importance score (MIS) to
identify such bias. It is designed to assess which modality embeds the
necessary information to answer the question. Additionally, we propose an
innovative method using state-of-the-art MLLMs to estimate the modality
importance, which can serve as a proxy for human judgments of modality
perception. With this MIS, we demonstrate the presence of unimodal bias and the
scarcity of genuinely multimodal questions in existing datasets. We further
validate the modality importance score with multiple ablation studies to
evaluate the performance of MLLMs on permuted feature sets. Our results
indicate that current models do not effectively integrate information due to
modality imbalance in existing datasets. Our proposed MLLM-derived MIS can
guide the curation of modality-balanced datasets that advance multimodal
learning and enhance MLLMs' capabilities to understand and utilize synergistic
relations across modalities. | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
|||
Visual Verity in AI-Generated Imagery: Computational Metrics and
Human-Centric Analysis | http://arxiv.org/abs/2408.12762v1 | http://arxiv.org/abs/2408.12762v1 | http://arxiv.org/pdf/2408.12762v1 | 2024-08-22 | 2024-08-22 | [
"Memoona Aziz",
"Umair Rahman",
"Syed Ali Safi",
"Amir Zaib Abbasi"
] | [
"",
"",
"",
""
] | The rapid advancements in AI technologies have revolutionized the production
of graphical content across various sectors, including entertainment,
advertising, and e-commerce. These developments have spurred the need for
robust evaluation methods to assess the quality and realism of AI-generated
images. To address this, we conducted three studies. First, we introduced and
validated a questionnaire called Visual Verity, which measures photorealism,
image quality, and text-image alignment. Second, we applied this questionnaire
to assess images from AI models (DALL-E2, DALL-E3, GLIDE, Stable Diffusion) and
camera-generated images, revealing that camera-generated images excelled in
photorealism and text-image alignment, while AI models led in image quality. We
also analyzed statistical properties, finding that camera-generated images
scored lower in hue, saturation, and brightness. Third, we evaluated
computational metrics' alignment with human judgments, identifying MS-SSIM and
CLIP as the most consistent with human assessments. Additionally, we proposed
the Neural Feature Similarity Score (NFSS) for assessing image quality. Our
findings highlight the need for refining computational metrics to better
capture human visual perception, thereby enhancing AI-generated content
evaluation. | cs.HC | [
"cs.HC",
"cs.AI"
] |
|||
SLM Meets LLM: Balancing Latency, Interpretability and Consistency in
Hallucination Detection | http://arxiv.org/abs/2408.12748v1 | http://arxiv.org/abs/2408.12748v1 | http://arxiv.org/pdf/2408.12748v1 | 2024-08-22 | 2024-08-22 | [
"Mengya Hu",
"Rui Xu",
"Deren Lei",
"Yaxi Li",
"Mingyu Wang",
"Emily Ching",
"Eslam Kamal",
"Alex Deng"
] | [
"",
"",
"",
"",
"",
"",
"",
""
] | Large language models (LLMs) are highly capable but face latency challenges
in real-time applications, such as conducting online hallucination detection.
To overcome this issue, we propose a novel framework that leverages a small
language model (SLM) classifier for initial detection, followed by an LLM as
constrained reasoner to generate detailed explanations for detected
hallucinated content. This study optimizes the real-time interpretable
hallucination detection by introducing effective prompting techniques that
align LLM-generated explanations with SLM decisions. Empirical experiment
results demonstrate its effectiveness, thereby enhancing the overall user
experience. | preprint under review | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
||
TReX- Reusing Vision Transformer's Attention for Efficient Xbar-based
Computing | http://arxiv.org/abs/2408.12742v1 | http://arxiv.org/abs/2408.12742v1 | http://arxiv.org/pdf/2408.12742v1 | 2024-08-22 | 2024-08-22 | [
"Abhishek Moitra",
"Abhiroop Bhattacharjee",
"Youngeun Kim",
"Priyadarshini Panda"
] | [
"",
"",
"",
""
] | Due to the high computation overhead of Vision Transformers (ViTs), In-memory
Computing architectures are being researched towards energy-efficient
deployment in edge-computing scenarios. Prior works have proposed efficient
algorithm-hardware co-design and IMC-architectural improvements to improve the
energy-efficiency of IMC-implemented ViTs. However, all prior works have
neglected the overhead and co-dependence of attention blocks on the
accuracy-energy-delay-area of IMC-implemented ViTs. To this end, we propose
TReX- an attention-reuse-driven ViT optimization framework that effectively
performs attention reuse in ViT models to achieve optimal
accuracy-energy-delay-area tradeoffs. TReX optimally chooses the transformer
encoders for attention reuse to achieve near iso-accuracy performance while
meeting the user-specified delay requirement. Based on our analysis on the
Imagenet-1k dataset, we find that TReX achieves 2.3x (2.19x) EDAP reduction and
1.86x (1.79x) TOPS/mm2 improvement with ~1% accuracy drop in case of DeiT-S
(LV-ViT-S) ViT models. Additionally, TReX achieves high accuracy at high EDAP
reduction compared to state-of-the-art token pruning and weight sharing
approaches. On NLP tasks such as CoLA, TReX leads to 2% higher non-ideal
accuracy compared to baseline at 1.6x lower EDAP. | 12 pages | cs.AI | [
"cs.AI",
"cs.AR"
] |
||
Towards measuring fairness in speech recognition: Fair-Speech dataset | http://arxiv.org/abs/2408.12734v1 | http://arxiv.org/abs/2408.12734v1 | http://arxiv.org/pdf/2408.12734v1 | 2024-08-22 | 2024-08-22 | [
"Irina-Elena Veliche",
"Zhuangqun Huang",
"Vineeth Ayyat Kochaniyan",
"Fuchun Peng",
"Ozlem Kalinli",
"Michael L. Seltzer"
] | [
"",
"",
"",
"",
"",
""
] | The current public datasets for speech recognition (ASR) tend not to focus
specifically on the fairness aspect, such as performance across different
demographic groups. This paper introduces a novel dataset, Fair-Speech, a
publicly released corpus to help researchers evaluate their ASR models for
accuracy across a diverse set of self-reported demographic information, such as
age, gender, ethnicity, geographic variation and whether the participants
consider themselves native English speakers. Our dataset includes approximately
26.5K utterances in recorded speech by 593 people in the United States, who
were paid to record and submit audio recordings of themselves saying voice
commands. We
also provide ASR baselines, including on models trained on transcribed and
untranscribed social media videos and open source models. | cs.AI | [
"cs.AI",
"cs.CY",
"cs.SD",
"eess.AS",
"stat.ML"
] |
|||
SQL-GEN: Bridging the Dialect Gap for Text-to-SQL Via Synthetic Data And
Model Merging | http://arxiv.org/abs/2408.12733v1 | http://arxiv.org/abs/2408.12733v1 | http://arxiv.org/pdf/2408.12733v1 | 2024-08-22 | 2024-08-22 | [
"Mohammadreza Pourreza",
"Ruoxi Sun",
"Hailong Li",
"Lesly Miculicich",
"Tomas Pfister",
"Sercan O. Arik"
] | [
"",
"",
"",
"",
"",
""
] | Text-to-SQL systems, which convert natural language queries into SQL
commands, have seen significant progress primarily for the SQLite dialect.
However, adapting these systems to other SQL dialects like BigQuery and
PostgreSQL remains a challenge due to the diversity in SQL syntax and
functions. We introduce SQL-GEN, a framework for generating high-quality
dialect-specific synthetic data guided by dialect-specific tutorials, and
demonstrate its effectiveness in creating training datasets for multiple
dialects. Our approach significantly improves performance, by up to 20\%, over
previous methods and reduces the gap with large-scale human-annotated datasets.
Moreover, combining our synthetic data with human-annotated data provides
additional performance boosts of 3.3\% to 5.6\%. We also introduce a novel
Mixture of Experts (MoE) initialization method that integrates dialect-specific
models into a unified system by merging self-attention layers and initializing
the gates with dialect-specific keywords, further enhancing performance across
different SQL dialects. | cs.AI | [
"cs.AI",
"cs.CL",
"cs.DB",
"cs.LG"
] |
|||
BankTweak: Adversarial Attack against Multi-Object Trackers by
Manipulating Feature Banks | http://arxiv.org/abs/2408.12727v1 | http://arxiv.org/abs/2408.12727v1 | http://arxiv.org/pdf/2408.12727v1 | 2024-08-22 | 2024-08-22 | [
"Woojin Shin",
"Donghwa Kang",
"Daejin Choi",
"Brent Kang",
"Jinkyu Lee",
"Hyeongboo Baek"
] | [
"",
"",
"",
"",
"",
""
] | Multi-object tracking (MOT) aims to construct moving trajectories for
objects, and modern multi-object trackers mainly utilize the
tracking-by-detection methodology. Initial approaches to MOT attacks primarily
aimed to degrade the detection quality of the frames under attack, thereby
reducing accuracy only in those specific frames, highlighting a lack of
\textit{efficiency}. To improve efficiency, recent advancements manipulate
object positions to cause persistent identity (ID) switches during the
association phase, even after the attack ends within a few frames. However,
these position-manipulating attacks have inherent limitations, as they can be
easily counteracted by adjusting distance-related parameters in the association
phase, revealing a lack of \textit{robustness}. In this paper, we present
\textsf{BankTweak}, a novel adversarial attack designed for MOT trackers, which
features efficiency and robustness. \textsf{BankTweak} focuses on the feature
extractor in the association phase and reveals vulnerability in the Hungarian
matching method used by feature-based MOT systems. Exploiting the
vulnerability, \textsf{BankTweak} induces persistent ID switches (addressing
\textit{efficiency}) even after the attack ends by strategically injecting
altered features into the feature banks without modifying object positions
(addressing \textit{robustness}). To demonstrate the applicability, we apply
\textsf{BankTweak} to three multi-object trackers (DeepSORT, StrongSORT, and
MOTDT) with one-stage, two-stage, anchor-free, and transformer detectors.
Extensive experiments on the MOT17 and MOT20 datasets show that our method
substantially surpasses existing attacks, exposing the vulnerability of the
tracking-by-detection framework to \textsf{BankTweak}. | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
|||
Generating Realistic X-ray Scattering Images Using Stable Diffusion and
Human-in-the-loop Annotations | http://arxiv.org/abs/2408.12720v1 | http://arxiv.org/abs/2408.12720v1 | http://arxiv.org/pdf/2408.12720v1 | 2024-08-22 | 2024-08-22 | [
"Zhuowen Zhao",
"Xiaoya Chong",
"Tanny Chavez",
"Alexander Hexemer"
] | [
"",
"",
"",
""
] | We fine-tuned a foundational stable diffusion model using X-ray scattering
images and their corresponding descriptions to generate new scientific images
from given prompts. However, some of the generated images exhibit significant
unrealistic artifacts, commonly known as "hallucinations". To address this
issue, we trained various computer vision models on a dataset composed of 60%
human-approved generated images and 40% experimental images to detect
unrealistic images. The classified images were then reviewed and corrected by
human experts, and subsequently used to further refine the classifiers in next
rounds of training and inference. Our evaluations demonstrate the feasibility
of generating high-fidelity, domain-specific images using a fine-tuned
diffusion model. We anticipate that generative AI will play a crucial role in
enhancing data augmentation and driving the development of digital twins in
scientific research facilities. | eess.IV | [
"eess.IV",
"cs.AI",
"cs.LG"
] |
|||
From Radiologist Report to Image Label: Assessing Latent Dirichlet
Allocation in Training Neural Networks for Orthopedic Radiograph
Classification | http://arxiv.org/abs/2408.13284v1 | http://arxiv.org/abs/2408.13284v1 | http://arxiv.org/pdf/2408.13284v1 | 2024-08-22 | 2024-08-22 | [
"Jakub Olczak",
"Max Gordon"
] | [
"",
""
] | Background: Radiography (X-rays) is the dominant modality in orthopedics, and
improving the interpretation of radiographs is clinically relevant. Machine
learning (ML) has revolutionized data analysis and has been applied to
medicine, with some success, in the form of natural language processing (NLP)
and artificial neural networks (ANN). Latent Dirichlet allocation (LDA) is an
NLP method that automatically categorizes documents into topics. Successfully
applying ML to orthopedic radiography could enable the creation of
computer-aided decision systems for use in the clinic. We studied how an
automated ML pipeline could classify orthopedic trauma radiographs from
radiologist reports. Methods: Wrist and ankle radiographs from Danderyd
Hospital in Sweden taken between 2002 and 2015, with radiologist reports. LDA
was used to create image labels for radiographs from the radiologist reports.
Radiographs and labels were used to train an image recognition ANN. The ANN
outcomes were manually reviewed to get an accurate estimate of the method's
utility and accuracy. Results: Image labels generated via LDA could
successfully train the ANN. The ANN reached an accuracy between 60% and 91%
compared to a gold standard, depending on the label. Conclusions: We found that
LDA was unsuited to label orthopedic radiographs from reports with high
accuracy. However, despite this, the ANN could learn to detect some features in
radiographs with high accuracy. The study also illustrates how ML and ANN can
be applied to medical research. | This article is an abridged version of a 2016 master's thesis at the
Karolinska Institute. The original is available upon request | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
||
Learning Valid Dual Bounds in Constraint Programming: Boosted Lagrangian
Decomposition with Self-Supervised Learning | http://arxiv.org/abs/2408.12695v1 | http://arxiv.org/abs/2408.12695v1 | http://arxiv.org/pdf/2408.12695v1 | 2024-08-22 | 2024-08-22 | [
"Swann Bessa",
"Darius Dabert",
"Max Bourgeat",
"Louis-Martin Rousseau",
"Quentin Cappart"
] | [
"",
"",
"",
"",
""
] | Lagrangian decomposition (LD) is a relaxation method that provides a dual
bound for constrained optimization problems by decomposing them into more
manageable sub-problems. This bound can be used in branch-and-bound algorithms
to prune the search space effectively. In brief, a vector of Lagrangian
multipliers is associated with each sub-problem, and an iterative procedure
(e.g., a sub-gradient optimization) adjusts these multipliers to find the
tightest bound. Initially applied to integer programming, Lagrangian
decomposition also had success in constraint programming due to its versatility
and the fact that global constraints provide natural sub-problems. However, the
non-linear and combinatorial nature of sub-problems in constraint programming
makes it computationally intensive to optimize the Lagrangian multipliers with
sub-gradient methods at each node of the tree search. This currently limits the
practicality of LD as a general bounding mechanism for constraint programming.
To address this challenge, we propose a self-supervised learning approach that
leverages neural networks to generate multipliers directly, yielding tight
bounds. This approach significantly reduces the number of sub-gradient
optimization steps required, enhancing the pruning efficiency and reducing the
execution time of constraint programming solvers. This contribution is one of
the few that leverage learning to enhance bounding mechanisms on the dual side,
a critical element in the design of combinatorial solvers. To our knowledge,
this work presents the first generic method for learning valid dual bounds in
constraint programming. | cs.AI | [
"cs.AI"
] |
|||
Unlocking Intrinsic Fairness in Stable Diffusion | http://arxiv.org/abs/2408.12692v1 | http://arxiv.org/abs/2408.12692v1 | http://arxiv.org/pdf/2408.12692v1 | 2024-08-22 | 2024-08-22 | [
"Eunji Kim",
"Siwon Kim",
"Rahim Entezari",
"Sungroh Yoon"
] | [
"",
"",
"",
""
] | Recent text-to-image models like Stable Diffusion produce photo-realistic
images but often show demographic biases. Previous debiasing methods focused on
training-based approaches, failing to explore the root causes of bias and
overlooking Stable Diffusion's potential for unbiased image generation. In this
paper, we demonstrate that Stable Diffusion inherently possesses fairness,
which can be unlocked to achieve debiased outputs. Through carefully designed
experiments, we identify the excessive bonding between text prompts and the
diffusion process as a key source of bias. To address this, we propose a novel
approach that perturbs text conditions to unleash Stable Diffusion's intrinsic
fairness. Our method effectively mitigates bias without additional tuning,
while preserving image-text alignment and image quality. | 21 pages, 20 figures; First two authors contributed equally | cs.AI | [
"cs.AI"
] |
||
MultiMed: Massively Multimodal and Multitask Medical Understanding | http://arxiv.org/abs/2408.12682v1 | http://arxiv.org/abs/2408.12682v1 | http://arxiv.org/pdf/2408.12682v1 | 2024-08-22 | 2024-08-22 | [
"Shentong Mo",
"Paul Pu Liang"
] | [
"",
""
] | Biomedical data is inherently multimodal, consisting of electronic health
records, medical imaging, digital pathology, genome sequencing, wearable
sensors, and more. The application of artificial intelligence tools to these
multifaceted sensing technologies has the potential to revolutionize the
prognosis, diagnosis, and management of human health and disease. However,
current approaches to biomedical AI typically only train and evaluate with one
or a small set of medical modalities and tasks. This limitation hampers the
development of comprehensive tools that can leverage the rich interconnected
information across many heterogeneous biomedical sensors. To address this
challenge, we present MultiMed, a benchmark designed to evaluate and enable
large-scale learning across a wide spectrum of medical modalities and tasks.
MultiMed consists of 2.56 million samples across ten medical modalities such as
medical reports, pathology, genomics, and protein data, and is structured into
eleven challenging tasks, including disease prognosis, protein structure
prediction, and medical question answering. Using MultiMed, we conduct
comprehensive experiments benchmarking state-of-the-art unimodal, multimodal,
and multitask models. Our analysis highlights the advantages of training
large-scale medical models across many related modalities and tasks. Moreover,
MultiMed enables studies of generalization across related medical concepts,
robustness to real-world noisy data and distribution shifts, and novel modality
combinations to improve prediction performance. MultiMed will be publicly
available and regularly updated and welcomes inputs from the community. | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CV",
"cs.MM"
] |
|||
Can LLMs Understand Social Norms in Autonomous Driving Games? | http://arxiv.org/abs/2408.12680v1 | http://arxiv.org/abs/2408.12680v1 | http://arxiv.org/pdf/2408.12680v1 | 2024-08-22 | 2024-08-22 | [
"Boxuan Wang",
"Haonan Duan",
"Yanhao Feng",
"Xu Chen",
"Yongjie Fu",
"Zhaobin Mo",
"Xuan Di"
] | [
"",
"",
"",
"",
"",
"",
""
] | Social norm is defined as a shared standard of acceptable behavior in a
society. The emergence of social norms fosters coordination among agents
without any hard-coded rules, which is crucial for the large-scale deployment
of AVs in an intelligent transportation system. This paper explores the
application of LLMs in understanding and modeling social norms in autonomous
driving games. We introduce LLMs into autonomous driving games as intelligent
agents who make decisions according to text prompts. These agents are referred
to as LLM-based agents. Our framework involves LLM-based agents playing Markov
games in a multi-agent system (MAS), allowing us to investigate the emergence
of social norms among individual agents. We aim to identify social norms by
designing prompts and utilizing LLMs on textual information related to the
environment setup and the observations of LLM-based agents. Using the OpenAI
Chat API powered by GPT-4.0, we conduct experiments to simulate interactions
and evaluate the performance of LLM-based agents in two driving scenarios:
unsignalized intersection and highway platoon. The results show that LLM-based
agents can handle dynamically changing environments in Markov games, and social
norms evolve among LLM-based agents in both scenarios. In the intersection
game, LLM-based agents tend to adopt a conservative driving policy when facing
a potential car crash. The advantage of LLM-based agents in games lies in their
strong operability and analyzability, which facilitate experimental design. | cs.AI | [
"cs.AI"
] |
|||
Enhancing Transferability of Adversarial Attacks with GE-AdvGAN+: A
Comprehensive Framework for Gradient Editing | http://arxiv.org/abs/2408.12673v1 | http://arxiv.org/abs/2408.12673v1 | http://arxiv.org/pdf/2408.12673v1 | 2024-08-22 | 2024-08-22 | [
"Zhibo Jin",
"Jiayu Zhang",
"Zhiyu Zhu",
"Yuchen Zhang",
"Jiahao Huang",
"Jianlong Zhou",
"Fang Chen"
] | [
"",
"",
"",
"",
"",
"",
""
] | Transferable adversarial attacks pose significant threats to deep neural
networks, particularly in black-box scenarios where internal model information
is inaccessible. Studying adversarial attack methods helps advance the
performance of defense mechanisms and explore model vulnerabilities. These
methods can uncover and exploit weaknesses in models, promoting the development
of more robust architectures. However, current methods for transferable attacks
often come with substantial computational costs, limiting their deployment and
application, especially in edge computing scenarios. Adversarial generative
models, such as Generative Adversarial Networks (GANs), are characterized by
their ability to generate samples without the need for retraining after an
initial training phase. GE-AdvGAN, a recent method for transferable adversarial
attacks, is based on this principle. In this paper, we propose a novel general
framework for gradient editing-based transferable attacks, named GE-AdvGAN+,
which integrates nearly all mainstream attack methods to enhance
transferability while significantly reducing computational resource
consumption. Our experiments demonstrate the compatibility and effectiveness of
our framework. Compared to the baseline AdvGAN, our best-performing method,
GE-AdvGAN++, achieves an average ASR improvement of 47.8. Additionally, it
surpasses the latest competing algorithm, GE-AdvGAN, with an average ASR
increase of 5.9. The framework also exhibits enhanced computational efficiency,
achieving 2217.7 FPS, outperforming traditional methods such as BIM and
MI-FGSM. The implementation code for our GE-AdvGAN+ framework is available at
https://github.com/GEAdvGANP | cs.AI | [
"cs.AI"
] |
|||
Leveraging Information Consistency in Frequency and Spatial Domain for
Adversarial Attacks | http://arxiv.org/abs/2408.12670v1 | http://arxiv.org/abs/2408.12670v1 | http://arxiv.org/pdf/2408.12670v1 | 2024-08-22 | 2024-08-22 | [
"Zhibo Jin",
"Jiayu Zhang",
"Zhiyu Zhu",
"Xinyi Wang",
"Yiyun Huang",
"Huaming Chen"
] | [
"",
"",
"",
"",
"",
""
] | Adversarial examples are a key method to exploit deep neural networks. Using
gradient information, such examples can be generated in an efficient way
without altering the victim model. Recent frequency domain transformation has
further enhanced the transferability of such adversarial examples, such as
spectrum simulation attack. In this work, we investigate the effectiveness of
frequency domain-based attacks, aligning with similar findings in the spatial
domain. Furthermore, such consistency between the frequency and spatial domains
provides insights into how gradient-based adversarial attacks induce
perturbations across different domains, which is yet to be explored. Hence, we
propose a simple, effective, and scalable gradient-based adversarial attack
algorithm leveraging the information consistency in both frequency and spatial
domains. We evaluate the algorithm for its effectiveness against different
models. Extensive experiments demonstrate that our algorithm achieves
state-of-the-art results compared to other gradient-based algorithms. Our code
is available at: https://github.com/LMBTough/FSA. | Accepted by PRICAI 2024 | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
Benchmarking Counterfactual Interpretability in Deep Learning Models for
Time Series Classification | http://arxiv.org/abs/2408.12666v1 | http://arxiv.org/abs/2408.12666v1 | http://arxiv.org/pdf/2408.12666v1 | 2024-08-22 | 2024-08-22 | [
"Ziwen Kan",
"Shahbaz Rezaei",
"Xin liu"
] | [
"",
"",
""
] | The popularity of deep learning methods in the time series domain boosts
interest in interpretability studies, including counterfactual (CF) methods. CF
methods identify minimal changes in instances to alter the model predictions.
Despite extensive research, no existing work benchmarks CF methods in the time
series domain. Additionally, the results reported in the literature are
inconclusive due to the limited number of datasets and inadequate metrics. In
this work, we redesign quantitative metrics to accurately capture desirable
characteristics in CFs. We specifically redesign the metrics for sparsity and
plausibility and introduce a new metric for consistency. Combined with
validity, generation time, and proximity, we form a comprehensive metric set.
We systematically benchmark 6 different CF methods on 20 univariate datasets
and 10 multivariate datasets with 3 different classifiers. Results indicate
that the performance of CF methods varies across metrics and among different
models. Finally, we provide case studies and a guideline for practical usage. | 15 pages, 27 figures | cs.LG | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
||
Multilevel Interpretability Of Artificial Neural Networks: Leveraging
Framework And Methods From Neuroscience | http://arxiv.org/abs/2408.12664v2 | http://arxiv.org/abs/2408.12664v2 | http://arxiv.org/pdf/2408.12664v2 | 2024-08-22 | 2024-08-26 | [
"Zhonghao He",
"Jascha Achterberg",
"Katie Collins",
"Kevin Nejad",
"Danyal Akarca",
"Yinzhu Yang",
"Wes Gurnee",
"Ilia Sucholutsky",
"Yuhan Tang",
"Rebeca Ianov",
"George Ogden",
"Chole Li",
"Kai Sandbrink",
"Stephen Casper",
"Anna Ivanova",
"Grace W. Lindsay"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | As deep learning systems are scaled up to many billions of parameters,
relating their internal structure to external behaviors becomes very
challenging. Although daunting, this problem is not new: Neuroscientists and
cognitive scientists have accumulated decades of experience analyzing a
particularly complex system - the brain. In this work, we argue that
interpreting both biological and artificial neural systems requires analyzing
those systems at multiple levels of analysis, with different analytic tools for
each level. We first lay out a joint grand challenge among scientists who study
the brain and who study artificial neural networks: understanding how
distributed neural mechanisms give rise to complex cognition and behavior. We
then present a series of analytical tools that can be used to analyze
biological and artificial neural systems, organizing those tools according to
Marr's three levels of analysis: computation/behavior,
algorithm/representation, and implementation. Overall, the multilevel
interpretability framework provides a principled way to tackle neural system
complexity; links structure, computation, and behavior; clarifies assumptions
and research priorities at each level; and paves the way toward a unified
effort for understanding intelligent systems, be they biological or
artificial. | cs.AI | [
"cs.AI",
"q-bio.NC"
] |
|||
Disentangled Structural and Featural Representation for Task-Agnostic
Graph Valuation | http://arxiv.org/abs/2408.12659v1 | http://arxiv.org/abs/2408.12659v1 | http://arxiv.org/pdf/2408.12659v1 | 2024-08-22 | 2024-08-22 | [
"Ali Falahati",
"Mohammad Mohammadi Amiri"
] | [
"",
""
] | With the emergence of data marketplaces, the demand for methods to assess the
value of data has increased significantly. While numerous techniques have been
proposed for this purpose, none have specifically addressed graphs as the main
data modality. Graphs are widely used across various fields, ranging from
chemical molecules to social networks. In this study, we break down graphs into
two main components: structural and featural, and we focus on evaluating data
without relying on specific task-related metrics, making it applicable in
practical scenarios where validation requirements may be lacking. We introduce
a novel framework called blind message passing, which aligns the seller's and
buyer's graphs using a shared node permutation based on graph matching. This
allows us to utilize the graph Wasserstein distance to quantify the differences
in the structural distribution of graph datasets, called the structural
disparities. We then consider featural aspects of buyers' and sellers' graphs
for data valuation and capture their statistical similarities and differences,
referred to as relevance and diversity, respectively. Our approach ensures that
buyers and sellers remain unaware of each other's datasets. Our experiments on
real datasets demonstrate the effectiveness of our approach in capturing the
relevance, diversity, and structural disparities of seller data for buyers,
particularly in graph-based data valuation scenarios. | cs.LG | [
"cs.LG",
"cs.AI",
"cs.IT",
"math.IT",
"stat.ML"
] |
|||
Hierarchical Generative Modeling of Melodic Vocal Contours in Hindustani
Classical Music | http://arxiv.org/abs/2408.12658v2 | http://arxiv.org/abs/2408.12658v2 | http://arxiv.org/pdf/2408.12658v2 | 2024-08-22 | 2024-08-26 | [
"Nithya Shikarpur",
"Krishna Maneesha Dendukuri",
"Yusong Wu",
"Antoine Caillon",
"Cheng-Zhi Anna Huang"
] | [
"",
"",
"",
"",
""
] | Hindustani music is a performance-driven oral tradition that exhibits the
rendition of rich melodic patterns. In this paper, we focus on generative
modeling of singers' vocal melodies extracted from audio recordings, as the
voice is musically prominent within the tradition. Prior generative work in
Hindustani music models melodies as coarse discrete symbols, which fails to
capture the rich expressive melodic intricacies of singing. Thus, we propose to
use a finely quantized pitch contour as an intermediate representation for
hierarchical audio modeling. We propose GaMaDHaNi, a modular two-level
hierarchy, consisting of a generative model on pitch contours, and a pitch
contour to audio synthesis model. We compare our approach to non-hierarchical
audio models and hierarchical models that use a self-supervised intermediate
representation, through a listening test and qualitative analysis. We also
evaluate the audio model's ability to faithfully represent the pitch contour
input using the Pearson correlation coefficient. By using pitch contours as an
intermediate representation, we show that our model may be better equipped to
listen and respond to musicians in a human-AI collaborative setting by
highlighting two potential interaction use cases (1) primed generation, and (2)
coarse pitch conditioning. | Accepted at International Society for Music Information Retrieval
(ISMIR) 2024 | cs.SD | [
"cs.SD",
"cs.AI",
"cs.LG",
"eess.AS"
] |
||
ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor
Reconstruction | http://arxiv.org/abs/2408.12598v1 | http://arxiv.org/abs/2408.12598v1 | http://arxiv.org/pdf/2408.12598v1 | 2024-08-22 | 2024-08-22 | [
"Ziyu Tang",
"Weicai Ye",
"Yifan Wang",
"Di Huang",
"Hujun Bao",
"Tong He",
"Guofeng Zhang"
] | [
"",
"",
"",
"",
"",
"",
""
] | Neural implicit reconstruction via volume rendering has demonstrated its
effectiveness in recovering dense 3D surfaces. However, it is non-trivial to
simultaneously recover meticulous geometry and preserve smoothness across
regions with differing characteristics. To address this issue, previous methods
typically employ geometric priors, which are often constrained by the
performance of the prior models. In this paper, we propose ND-SDF, which learns
a Normal Deflection field to represent the angular deviation between the scene
normal and the prior normal. Unlike previous methods that uniformly apply
geometric priors on all samples, introducing significant bias in accuracy, our
proposed normal deflection field dynamically learns and adapts the utilization
of samples based on their specific characteristics, thereby improving both the
accuracy and effectiveness of the model. Our method not only obtains smooth
weakly textured regions such as walls and floors but also preserves the
geometric details of complex structures. In addition, we introduce a novel ray
sampling strategy based on the deflection angle to facilitate the unbiased
rendering process, which significantly improves the quality and accuracy of
intricate surfaces, especially on thin structures. Consistent improvements on
various challenging datasets demonstrate the superiority of our method. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
Differentiable Logic Programming for Distant Supervision | http://arxiv.org/abs/2408.12591v2 | http://arxiv.org/abs/2408.12591v2 | http://arxiv.org/pdf/2408.12591v2 | 2024-08-22 | 2024-08-25 | [
"Akihiro Takemura",
"Katsumi Inoue"
] | [
"",
""
] | We introduce a new method for integrating neural networks with logic
programming in Neural-Symbolic AI (NeSy), aimed at learning with distant
supervision, in which direct labels are unavailable. Unlike prior methods, our
approach does not depend on symbolic solvers for reasoning about missing
labels. Instead, it evaluates logical implications and constraints in a
differentiable manner by embedding both neural network outputs and logic
programs into matrices. This method facilitates more efficient learning under
distant supervision. We evaluated our approach against existing methods while
maintaining a constant volume of training data. The findings indicate that our
method not only matches or exceeds the accuracy of other methods across various
tasks but also speeds up the learning process. These results highlight the
potential of our approach to enhance both accuracy and learning efficiency in
NeSy applications. | Updated Figure 1 and fixed the overlapping caption issue. 11 pages
including the appendix. To be published in ECAI 2024 | cs.AI | [
"cs.AI"
] |
||
xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed
Representations | http://arxiv.org/abs/2408.12590v1 | http://arxiv.org/abs/2408.12590v1 | http://arxiv.org/pdf/2408.12590v1 | 2024-08-22 | 2024-08-22 | [
"Can Qin",
"Congying Xia",
"Krithika Ramakrishnan",
"Michael Ryoo",
"Lifu Tu",
"Yihao Feng",
"Manli Shu",
"Honglu Zhou",
"Anas Awadalla",
"Jun Wang",
"Senthil Purushwalkam",
"Le Xue",
"Yingbo Zhou",
"Huan Wang",
"Silvio Savarese",
"Juan Carlos Niebles",
"Zeyuan Chen",
"Ran Xu",
"Caiming Xiong"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | We present xGen-VideoSyn-1, a text-to-video (T2V) generation model capable of
producing realistic scenes from textual descriptions. Building on recent
advancements, such as OpenAI's Sora, we explore the latent diffusion model
(LDM) architecture and introduce a video variational autoencoder (VidVAE).
VidVAE compresses video data both spatially and temporally, significantly
reducing the length of visual tokens and the computational demands associated
with generating long-sequence videos. To further address the computational
costs, we propose a divide-and-merge strategy that maintains temporal
consistency across video segments. Our Diffusion Transformer (DiT) model
incorporates spatial and temporal self-attention layers, enabling robust
generalization across different timeframes and aspect ratios. We have devised a
data processing pipeline from the very beginning and collected over 13M
high-quality video-text pairs. The pipeline includes multiple steps such as
clipping, text detection, motion estimation, aesthetics scoring, and dense
captioning based on our in-house video-LLM model. Training the VidVAE and DiT
models required approximately 40 and 642 H100 days, respectively. Our model
supports over 14-second 720p video generation in an end-to-end way and
demonstrates competitive performance against state-of-the-art T2V models. | Accepted by ECCV24 AI4VA | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
AI-driven Transformer Model for Fault Prediction in Non-Linear Dynamic
Automotive System | http://arxiv.org/abs/2408.12638v1 | http://arxiv.org/abs/2408.12638v1 | http://arxiv.org/pdf/2408.12638v1 | 2024-08-22 | 2024-08-22 | [
"Priyanka Kumar"
] | [
""
] | Fault detection in automotive engine systems is one of the most promising
research areas. Several works have been done in the field of model-based fault
diagnosis. Many researchers have discovered more advanced statistical methods
and algorithms for better fault detection on any automotive dynamic engine
system. Gas turbines and diesel engines produce large volumes of highly
complex, non-linear data. Researchers therefore need an automated system
resilient and robust enough to handle this huge, complex data in highly
non-linear dynamic automotive systems. Here, I present an
AI-based fault classification and prediction model in the diesel engine that
can be applied to any highly non-linear dynamic automotive system. The main
contribution of this paper is the AI-based Transformer fault classification and
prediction model in the diesel engine concerning the Worldwide Harmonised
Light Vehicles Test Procedure (WLTP) driving cycle. This model used 27 input
dimensions, 64 hidden dimensions with 2 layers, and 9 heads to create a
classifier with 12 output heads (one for fault-free data and 11 different fault
types). This model was trained on the UTSA Arc High-Performance Compute (HPC)
cluster with 5 NVIDIA V100 GPUs, 40-core CPUs, and 384GB RAM and achieved
70.01% accuracy on a held-out test set. | cs.LG | [
"cs.LG",
"cs.AI"
] |
|||
Building and better understanding vision-language models: insights and
future directions | http://arxiv.org/abs/2408.12637v1 | http://arxiv.org/abs/2408.12637v1 | http://arxiv.org/pdf/2408.12637v1 | 2024-08-22 | 2024-08-22 | [
"Hugo Laurençon",
"Andrés Marafioti",
"Victor Sanh",
"Léo Tronchon"
] | [
"",
"",
"",
""
] | The field of vision-language models (VLMs), which take images and texts as
inputs and output texts, is rapidly evolving and has yet to reach consensus on
several key aspects of the development pipeline, including data, architecture,
and training methods. This paper can be seen as a tutorial for building a VLM.
We begin by providing a comprehensive overview of the current state-of-the-art
approaches, highlighting the strengths and weaknesses of each, addressing the
major challenges in the field, and suggesting promising research directions for
underexplored areas. We then walk through the practical steps to build
Idefics3-8B, a powerful VLM that significantly outperforms its predecessor
Idefics2-8B, while being trained efficiently, exclusively on open datasets, and
using a straightforward pipeline. These steps include the creation of Docmatix,
a dataset for improving document understanding capabilities, which is 240 times
larger than previously available datasets. We release the model along with the
datasets created for its training. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
Identifying the Best Arm in the Presence of Global Environment Shifts | http://arxiv.org/abs/2408.12581v1 | http://arxiv.org/abs/2408.12581v1 | http://arxiv.org/pdf/2408.12581v1 | 2024-08-22 | 2024-08-22 | [
"Phurinut Srisawad",
"Juergen Branke",
"Long Tran-Thanh"
] | [
"",
"",
""
] | This paper formulates a new Best-Arm Identification problem in the
non-stationary stochastic bandits setting, where the means of all arms are
shifted in the same way due to a global influence of the environment. The aim
is to identify the unique best arm across environmental change given a fixed
total budget. While this setting can be regarded as a special case of
Adversarial Bandits or Corrupted Bandits, we demonstrate that existing
solutions tailored to those settings do not fully utilise the nature of this
global influence, and thus, do not work well in practice (despite their
theoretical guarantees). To overcome this issue, in this paper we develop a
novel selection policy that is consistent and robust in dealing with global
environmental shifts. We then propose an allocation policy, LinLUCB, which
exploits information about global shifts across all arms in each environment.
Empirical tests depict a significant improvement in our policies against other
existing methods. | Extended version of the paper accepted at the 27th European
Conference on Artificial Intelligence (ECAI 2024); Paper ID: M1125 | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
RuleAlign: Making Large Language Models Better Physicians with
Diagnostic Rule Alignment | http://arxiv.org/abs/2408.12579v1 | http://arxiv.org/abs/2408.12579v1 | http://arxiv.org/pdf/2408.12579v1 | 2024-08-22 | 2024-08-22 | [
"Xiaohan Wang",
"Xiaoyan Yang",
"Yuqi Zhu",
"Yue Shen",
"Jian Wang",
"Peng Wei",
"Lei Liang",
"Jinjie Gu",
"Huajun Chen",
"Ningyu Zhang"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Large Language Models (LLMs) like GPT-4, MedPaLM-2, and Med-Gemini achieve
performance competitively with human experts across various medical benchmarks.
However, they still face challenges in making professional diagnoses akin to
physicians, particularly in efficiently gathering patient information and
reasoning the final diagnosis. To this end, we introduce the RuleAlign
framework, designed to align LLMs with specific diagnostic rules. We develop a
medical dialogue dataset comprising rule-based communications between patients
and physicians and design an alignment learning approach through preference
learning. Experimental results demonstrate the effectiveness of the proposed
approach. We hope that our work can serve as an inspiration for exploring the
potential of LLMs as AI physicians. | Ongoing work | cs.CL | [
"cs.CL",
"cs.AI",
"cs.HC",
"cs.IR",
"cs.LG"
] |
||
A Percolation Model of Emergence: Analyzing Transformers Trained on a
Formal Language | http://arxiv.org/abs/2408.12578v1 | http://arxiv.org/abs/2408.12578v1 | http://arxiv.org/pdf/2408.12578v1 | 2024-08-22 | 2024-08-22 | [
"Ekdeep Singh Lubana",
"Kyogo Kawaguchi",
"Robert P. Dick",
"Hidenori Tanaka"
] | [
"",
"",
"",
""
] | An increase in data, size, or compute can lead to sudden learning of specific
capabilities by a neural network -- a phenomenon often called "emergence".
Beyond scientific understanding, establishing the causal factors underlying
such emergent capabilities is crucial to enable risk regulation frameworks for
AI. In this work, we seek inspiration from study of emergent properties in
other fields and propose a phenomenological definition for the concept in the
context of neural networks. Our definition implicates the acquisition of
specific structures underlying the data-generating process as a cause of sudden
performance growth for specific, narrower tasks. We empirically investigate
this definition by proposing an experimental system grounded in a
context-sensitive formal language and find that Transformers trained to perform
tasks on top of strings from this language indeed exhibit emergent
capabilities. Specifically, we show that once the language's underlying grammar
and context-sensitivity inducing structures are learned by the model,
performance on narrower tasks suddenly begins to improve. We then analogize our
network's learning dynamics with the process of percolation on a bipartite
graph, establishing a formal phase transition model that predicts the shift in
the point of emergence observed in experiment when changing the data structure.
Overall, our experimental and theoretical frameworks yield a step towards
better defining, characterizing, and predicting emergence in neural networks. | Preprint | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
Enhanced Parking Perception by Multi-Task Fisheye Cross-view
Transformers | http://arxiv.org/abs/2408.12575v1 | http://arxiv.org/abs/2408.12575v1 | http://arxiv.org/pdf/2408.12575v1 | 2024-08-22 | 2024-08-22 | [
"Antonyo Musabini",
"Ivan Novikov",
"Sana Soula",
"Christel Leonet",
"Lihao Wang",
"Rachid Benmokhtar",
"Fabian Burger",
"Thomas Boulay",
"Xavier Perrotton"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Current parking area perception algorithms primarily focus on detecting
vacant slots within a limited range, relying on error-prone homographic
projection for both labeling and inference. However, recent advancements in
Advanced Driver Assistance Systems (ADAS) require interaction with end-users
through comprehensive and intelligent Human-Machine Interfaces (HMIs). These
interfaces should present a complete perception of the parking area going from
distinguishing vacant slots' entry lines to the orientation of other parked
vehicles. This paper introduces Multi-Task Fisheye Cross View Transformers (MT
F-CVT), which leverages features from a four-camera fisheye Surround-view
Camera System (SVCS) with multihead attentions to create a detailed Bird-Eye
View (BEV) grid feature map. Features are processed by both a segmentation
decoder and a Polygon-Yolo based object detection decoder for parking slots and
vehicles. Trained on data labeled using LiDAR, MT F-CVT positions objects
within 25m x 25m real open-road scenes with an average error of only 20 cm.
Our larger model achieves an F-1 score of 0.89. Moreover, the smaller model
operates at 16 fps on an Nvidia Jetson Orin embedded board, with similar
detection results to the larger one. MT F-CVT demonstrates robust
generalization capability across different vehicles and camera rig
configurations. A demo video from an unseen vehicle and camera rig is available
at: https://streamable.com/jjw54x. | 26th Irish Machine Vision and Image Processing Conference,
Data-Driven Autonomy Workshop (matching camera-ready version) | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
MuMA-ToM: Multi-modal Multi-Agent Theory of Mind | http://arxiv.org/abs/2408.12574v2 | http://arxiv.org/abs/2408.12574v2 | http://arxiv.org/pdf/2408.12574v2 | 2024-08-22 | 2024-08-25 | [
"Haojun Shi",
"Suyu Ye",
"Xinyu Fang",
"Chuanyang Jin",
"Leyla Isik",
"Yen-Ling Kuo",
"Tianmin Shu"
] | [
"",
"",
"",
"",
"",
"",
""
] | Understanding people's social interactions in complex real-world scenarios
often relies on intricate mental reasoning. To truly understand how and why
people interact with one another, we must infer the underlying mental states
that give rise to the social interactions, i.e., Theory of Mind reasoning in
multi-agent interactions. Additionally, social interactions are often
multi-modal -- we can watch people's actions, hear their conversations, and/or
read about their past behaviors. For AI systems to successfully and safely
interact with people in real-world environments, they also need to understand
people's mental states as well as their inferences about each other's mental
states based on multi-modal information about their interactions. For this, we
introduce MuMA-ToM, a Multi-modal Multi-Agent Theory of Mind benchmark.
MuMA-ToM is the first multi-modal Theory of Mind benchmark that evaluates
mental reasoning in embodied multi-agent interactions. In MuMA-ToM, we provide
video and text descriptions of people's multi-modal behavior in realistic
household environments. Based on the context, we then ask questions about
people's goals, beliefs, and beliefs about others' goals. We validated MuMA-ToM
in a human experiment and provided a human baseline. We also proposed a novel
multi-modal, multi-agent ToM model, LIMP (Language model-based Inverse
Multi-agent Planning). Our experimental results show that LIMP significantly
outperforms state-of-the-art methods, including large multi-modal models (e.g.,
GPT-4o, Gemini-1.5 Pro) and a recent multi-modal ToM model, BIP-ALM. | Project website: https://scai.cs.jhu.edu/projects/MuMA-ToM/ Code:
https://github.com/SCAI-JHU/MuMA-ToM | cs.AI | [
"cs.AI",
"cs.CL",
"cs.CV",
"cs.LG"
] |
||
Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune
CNNs and Transformers | http://arxiv.org/abs/2408.12568v1 | http://arxiv.org/abs/2408.12568v1 | http://arxiv.org/pdf/2408.12568v1 | 2024-08-22 | 2024-08-22 | [
"Sayed Mohammad Vakilzadeh Hatefi",
"Maximilian Dreyer",
"Reduan Achtibat",
"Thomas Wiegand",
"Wojciech Samek",
"Sebastian Lapuschkin"
] | [
"",
"",
"",
"",
"",
""
] | To solve ever more complex problems, Deep Neural Networks are scaled to
billions of parameters, leading to huge computational costs. An effective
approach to reduce computational requirements and increase efficiency is to
prune unnecessary components of these often over-parameterized networks.
Previous work has shown that attribution methods from the field of eXplainable
AI serve as effective means to extract and prune the least relevant network
components in a few-shot fashion. We extend the current state by proposing to
explicitly optimize hyperparameters of attribution methods for the task of
pruning, and further include transformer-based networks in our analysis. Our
approach yields higher model compression rates of large transformer- and
convolutional architectures (VGG, ResNet, ViT) compared to previous works,
while still attaining high performance on ImageNet classification tasks. Here,
our experiments indicate that transformers have a higher degree of
over-parameterization compared to convolutional neural networks. Code is
available at
$\href{https://github.com/erfanhatefi/Pruning-by-eXplaining-in-PyTorch}{\text{this
https link}}$. | Accepted as a workshop paper at ECCV 2024. 31 pages (14 pages
manuscript, 4 pages references, 13 pages appendix) | cs.AI | [
"cs.AI",
"cs.CV",
"cs.LG"
] |
||
ssProp: Energy-Efficient Training for Convolutional Neural Networks with
Scheduled Sparse Back Propagation | http://arxiv.org/abs/2408.12561v1 | http://arxiv.org/abs/2408.12561v1 | http://arxiv.org/pdf/2408.12561v1 | 2024-08-22 | 2024-08-22 | [
"Lujia Zhong",
"Shuo Huang",
"Yonggang Shi"
] | [
"",
"",
""
] | Recently, deep learning has made remarkable strides, especially with
generative modeling, such as large language models and probabilistic diffusion
models. However, training these models often involves significant computational
resources, requiring billions of petaFLOPs. This high resource consumption
results in substantial energy usage and a large carbon footprint, raising
critical environmental concerns. Back-propagation (BP) is a major source of
computational expense during training deep learning models. To advance research
on energy-efficient training and allow for sparse learning on any machine and
device, we propose a general, energy-efficient convolution module that can be
seamlessly integrated into any deep learning architecture. Specifically, we
introduce channel-wise sparsity with additional gradient selection schedulers
during the backward pass, based on the assumption that BP is often dense and
inefficient, which can lead to over-fitting and high computational consumption.
Our experiments demonstrate that our approach reduces computations by 40% while
potentially improving model performance, validated on image classification and
generation tasks. This reduction can lead to significant energy savings and a
lower carbon footprint during the research and development phases of
large-scale AI systems. Additionally, our method mitigates over-fitting in a
manner distinct from Dropout, allowing it to be combined with Dropout to
further enhance model performance and reduce computational resource usage.
Extensive experiments validate that our method generalizes to a variety of
datasets and tasks and is compatible with a wide range of deep learning
architectures and modules. Code is publicly available at
https://github.com/lujiazho/ssProp. | Under review | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
Data Quality Antipatterns for Software Analytics | http://arxiv.org/abs/2408.12560v1 | http://arxiv.org/abs/2408.12560v1 | http://arxiv.org/pdf/2408.12560v1 | 2024-08-22 | 2024-08-22 | [
"Aaditya Bhatia",
"Dayi Lin",
"Gopi Krishnan Rajbahadur",
"Bram Adams",
"Ahmed E. Hassan"
] | [
"",
"",
"",
"",
""
] | Background: Data quality is vital in software analytics, particularly for
machine learning (ML) applications like software defect prediction (SDP).
Despite the widespread use of ML in software engineering, the effect of data
quality antipatterns on these models remains underexplored.
Objective: This study develops a taxonomy of ML-specific data quality
antipatterns and assesses their impact on software analytics models'
performance and interpretation.
Methods: We identified eight types and 14 sub-types of ML-specific data
quality antipatterns through a literature review. We conducted experiments to
determine the prevalence of these antipatterns in SDP data (RQ1), assess how
cleaning order affects model performance (RQ2), evaluate the impact of
antipattern removal on performance (RQ3), and examine the consistency of
interpretation from models built with different antipatterns (RQ4).
Results: In our SDP case study, we identified nine antipatterns. Over 90% of
these overlapped at both row and column levels, complicating cleaning
prioritization and risking excessive data removal. The order of cleaning
significantly impacts ML model performance, with neural networks being more
resilient to cleaning order changes than simpler models like logistic
regression. Antipatterns such as Tailed Distributions and Class Overlap show a
statistically significant correlation with performance metrics when other
antipatterns are cleaned. Models built with different antipatterns showed
moderate consistency in interpretation results.
Conclusion: The cleaning order of different antipatterns impacts ML model
performance. Five antipatterns have a statistically significant correlation
with model performance when others are cleaned. Additionally, model
interpretation is moderately affected by different data quality antipatterns. | cs.SE | [
"cs.SE",
"cs.AI"
] |
|||
Modeling Time-Variant Responses of Optical Compressors with Selective
State Space Models | http://arxiv.org/abs/2408.12549v2 | http://arxiv.org/abs/2408.12549v2 | http://arxiv.org/pdf/2408.12549v2 | 2024-08-22 | 2024-08-29 | [
"Riccardo Simionato",
"Stefano Fasciani"
] | [
"",
""
] | This paper presents a method for modeling optical dynamic range compressors
using deep neural networks with Selective State Space models. The proposed
approach surpasses previous methods based on recurrent layers by employing a
Selective State Space block to encode the input audio. It features a refined
technique integrating Feature-wise Linear Modulation and Gated Linear Units to
adjust the network dynamically, conditioning the compression's attack and
release phases according to external parameters. The proposed architecture is
well-suited for low-latency and real-time applications, crucial in live audio
processing. The method has been validated on the analog optical compressors
TubeTech CL 1B and Teletronix LA-2A, which possess distinct characteristics.
Evaluation is performed using quantitative metrics and subjective listening
tests, comparing the proposed method with other state-of-the-art models.
Results show that our black-box modeling methods outperform all others,
achieving accurate emulation of the compression process for both seen and
unseen settings during training. We further show a correlation between this
accuracy and the sampling density of the control parameters in the dataset and
identify settings with fast attack and slow release as the most challenging to
emulate. | Submitted to Journal of the Audio Engineering Society | cs.SD | [
"cs.SD",
"cs.AI",
"eess.AS"
] |
||
Automatic Organ and Pan-cancer Segmentation in Abdomen CT: the FLARE
2023 Challenge | http://arxiv.org/abs/2408.12534v1 | http://arxiv.org/abs/2408.12534v1 | http://arxiv.org/pdf/2408.12534v1 | 2024-08-22 | 2024-08-22 | [
"Jun Ma",
"Yao Zhang",
"Song Gu",
"Cheng Ge",
"Ershuai Wang",
"Qin Zhou",
"Ziyan Huang",
"Pengju Lyu",
"Jian He",
"Bo Wang"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Organ and cancer segmentation in abdomen Computed Tomography (CT) scans is
the prerequisite for precise cancer diagnosis and treatment. Most existing
benchmarks and algorithms are tailored to specific cancer types, limiting their
ability to provide comprehensive cancer analysis. This work presents the first
international competition on abdominal organ and pan-cancer segmentation by
providing a large-scale and diverse dataset, including 4650 CT scans with
various cancer types from over 40 medical centers. The winning team established
a new state-of-the-art with a deep learning-based cascaded framework, achieving
average Dice Similarity Coefficient scores of 92.3% for organs and 64.9% for
lesions on the hidden multi-national testing set. The dataset and code of top
teams are publicly available, offering a benchmark platform to drive further
innovations: https://codalab.lisn.upsaclay.fr/competitions/12239. | MICCAI 2024 FLARE Challenge Summary | eess.IV | [
"eess.IV",
"cs.AI",
"cs.CV"
] |
||
PCGRL+: Scaling, Control and Generalization in Reinforcement Learning
Level Generators | http://arxiv.org/abs/2408.12525v1 | http://arxiv.org/abs/2408.12525v1 | http://arxiv.org/pdf/2408.12525v1 | 2024-08-22 | 2024-08-22 | [
"Sam Earle",
"Zehua Jiang",
"Julian Togelius"
] | [
"",
"",
""
] | Procedural Content Generation via Reinforcement Learning (PCGRL) has been
introduced as a means by which controllable designer agents can be trained
based only on a set of computable metrics acting as a proxy for the level's
quality and key characteristics. While PCGRL offers a unique set of affordances
for game designers, it is constrained by the compute-intensive process of
training RL agents, and has so far been limited to generating relatively small
levels. To address this issue of scale, we implement several PCGRL environments
in Jax so that all aspects of learning and simulation happen in parallel on the
GPU, resulting in faster environment simulation; removing the CPU-GPU transfer
of information bottleneck during RL training; and ultimately resulting in
significantly improved training speed. We replicate several key results from
prior works in this new framework, letting models train for much longer than
previously studied, and evaluating their behavior after 1 billion timesteps.
Aiming for greater control for human designers, we introduce randomized level
sizes and frozen "pinpoints" of pivotal game tiles as further ways of
countering overfitting. To test the generalization ability of learned
generators, we evaluate models on large, out-of-distribution map sizes, and
find that partial observation sizes learn more robust design strategies. | 8 pages, 7 figures, 6 tables. Published at IEEE Conference on Games,
2024 | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
Advanced atom-level representations for protein flexibility prediction
utilizing graph neural networks | http://arxiv.org/abs/2408.12519v1 | http://arxiv.org/abs/2408.12519v1 | http://arxiv.org/pdf/2408.12519v1 | 2024-08-22 | 2024-08-22 | [
"Sina Sarparast",
"Aldo Zaimi",
"Maximilian Ebert",
"Michael-Rock Goldsmith"
] | [
"",
"",
"",
""
] | Protein dynamics play a crucial role in many biological processes and drug
interactions. However, measuring, and simulating protein dynamics is
challenging and time-consuming. While machine learning holds promise in
deciphering the determinants of protein dynamics from structural information,
most existing methods for protein representation learning operate at the
residue level, ignoring the finer details of atomic interactions. In this work,
we propose for the first time to use graph neural networks (GNNs) to learn
protein representations at the atomic level and predict B-factors from protein
3D structures. The B-factor reflects the displacement of atoms in
proteins and can serve as a surrogate for protein flexibility. We compared
different GNN architectures to assess their performance. The Meta-GNN model
achieves a correlation coefficient of 0.71 on a large and diverse test set of
over 4k proteins (17M atoms) from the Protein Data Bank (PDB), outperforming
previous methods by a large margin. Our work demonstrates the potential of
representations learned by GNNs for protein flexibility prediction and other
related tasks. | cs.LG | [
"cs.LG",
"cs.AI"
] |
|||
The Russian-focused embedders' exploration: ruMTEB benchmark and Russian
embedding model design | http://arxiv.org/abs/2408.12503v1 | http://arxiv.org/abs/2408.12503v1 | http://arxiv.org/pdf/2408.12503v1 | 2024-08-22 | 2024-08-22 | [
"Artem Snegirev",
"Maria Tikhonova",
"Anna Maksimova",
"Alena Fenogenova",
"Alexander Abramov"
] | [
"",
"",
"",
"",
""
] | Embedding models play a crucial role in Natural Language Processing (NLP) by
creating text embeddings used in various tasks such as information retrieval
and assessing semantic text similarity. This paper focuses on research related
to embedding models in the Russian language. It introduces a new
Russian-focused embedding model called ru-en-RoSBERTa and the ruMTEB benchmark,
the Russian version extending the Massive Text Embedding Benchmark (MTEB). Our
benchmark includes seven categories of tasks, such as semantic textual
similarity, text classification, reranking, and retrieval. The research also
assesses a representative set of Russian and multilingual models on the
proposed benchmark. The findings indicate that the new model achieves results
that are on par with state-of-the-art models in Russian. We release the model
ru-en-RoSBERTa, and the ruMTEB framework comes with open-source code,
integration into the original framework and a public leaderboard. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
MEDCO: Medical Education Copilots Based on A Multi-Agent Framework | http://arxiv.org/abs/2408.12496v1 | http://arxiv.org/abs/2408.12496v1 | http://arxiv.org/pdf/2408.12496v1 | 2024-08-22 | 2024-08-22 | [
"Hao Wei",
"Jianing Qiu",
"Haibao Yu",
"Wu Yuan"
] | [
"",
"",
"",
""
] | Large language models (LLMs) have had a significant impact on diverse
research domains, including medicine and healthcare. However, the potential of
LLMs as copilots in medical education remains underexplored. Current
AI-assisted educational tools are limited by their solitary learning approach
and inability to simulate the multi-disciplinary and interactive nature of
actual medical training. To address these limitations, we propose MEDCO
(Medical EDucation COpilots), a novel multi-agent-based copilot system
specially developed to emulate real-world medical training environments. MEDCO
incorporates three primary agents: an agentic patient, an expert doctor, and a
radiologist, facilitating a multi-modal and interactive learning environment.
Our framework emphasizes the learning of proficient question-asking skills,
multi-disciplinary collaboration, and peer discussions between students. Our
experiments show that simulated virtual students who underwent training with
MEDCO not only achieved substantial performance enhancements comparable to
those of advanced models, but also demonstrated human-like learning behaviors
and improvements, coupled with an increase in the number of learning samples.
This work contributes to medical education by introducing a copilot that
implements an interactive and collaborative learning approach. It also provides
valuable insights into the effectiveness of AI-integrated training paradigms. | ECCV 2024 Workshop | cs.AI | [
"cs.AI",
"cs.MA"
] |
||
GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender
Bias in Large Language Models | http://arxiv.org/abs/2408.12494v1 | http://arxiv.org/abs/2408.12494v1 | http://arxiv.org/pdf/2408.12494v1 | 2024-08-22 | 2024-08-22 | [
"Kunsheng Tang",
"Wenbo Zhou",
"Jie Zhang",
"Aishan Liu",
"Gelei Deng",
"Shuai Li",
"Peigui Qi",
"Weiming Zhang",
"Tianwei Zhang",
"Nenghai Yu"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Large language models (LLMs) have exhibited remarkable capabilities in
natural language generation, but they have also been observed to magnify
societal biases, particularly those related to gender. In response to this
issue, several benchmarks have been proposed to assess gender bias in LLMs.
However, these benchmarks often lack practical flexibility or inadvertently
introduce biases. To address these shortcomings, we introduce GenderCARE, a
comprehensive framework that encompasses innovative Criteria, bias Assessment,
Reduction techniques, and Evaluation metrics for quantifying and mitigating
gender bias in LLMs. To begin, we establish pioneering criteria for gender
equality benchmarks, spanning dimensions such as inclusivity, diversity,
explainability, objectivity, robustness, and realisticity. Guided by these
criteria, we construct GenderPair, a novel pair-based benchmark designed to
assess gender bias in LLMs comprehensively. Our benchmark provides standardized
and realistic evaluations, including previously overlooked gender groups such
as transgender and non-binary individuals. Furthermore, we develop effective
debiasing techniques that incorporate counterfactual data augmentation and
specialized fine-tuning strategies to reduce gender bias in LLMs without
compromising their overall performance. Extensive experiments demonstrate a
significant reduction in various gender bias benchmarks, with reductions
peaking at over 90% and averaging above 35% across 17 different LLMs.
Importantly, these reductions come with minimal variability in mainstream
language tasks, remaining below 2%. By offering a realistic assessment and
tailored reduction of gender biases, we hope that our GenderCARE can represent
a significant step towards achieving fairness and equity in LLMs. More details
are available at https://github.com/kstanghere/GenderCARE-ccs24. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
AI in radiological imaging of soft-tissue and bone tumours: a systematic
review evaluating against CLAIM and FUTURE-AI guidelines | http://arxiv.org/abs/2408.12491v1 | http://arxiv.org/abs/2408.12491v1 | http://arxiv.org/pdf/2408.12491v1 | 2024-08-22 | 2024-08-22 | [
"Douwe J. Spaanderman",
"Matthew Marzetti",
"Xinyi Wan",
"Andrew F. Scarsbrook",
"Philip Robinson",
"Edwin H. G. Oei",
"Jacob J. Visser",
"Robert Hemke",
"Kirsten van Langevelde",
"David F. Hanff",
"Geert J. L. H. van Leenders",
"Cornelis Verhoef",
"Dirk J. Grünhagen",
"Wiro J. Niessen",
"Stefan Klein",
"Martijn P. A. Starmans"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Soft-tissue and bone tumours (STBT) are rare, diagnostically challenging
lesions with variable clinical behaviours and treatment approaches. This
systematic review provides an overview of Artificial Intelligence (AI) methods
using radiological imaging for diagnosis and prognosis of these tumours,
highlighting challenges in clinical translation, and evaluating study alignment
with the Checklist for AI in Medical Imaging (CLAIM) and the FUTURE-AI
international consensus guidelines for trustworthy and deployable AI to promote
the clinical translation of AI methods. The review covered literature from
several bibliographic databases, including papers published before 17/07/2024.
Original research in peer-reviewed journals focused on radiology-based AI for
diagnosing or prognosing primary STBT was included. Exclusion criteria were
animal, cadaveric, or laboratory studies, and non-English papers. Abstracts
were screened by two of three independent reviewers for eligibility. Eligible
papers were assessed against guidelines by one of three independent reviewers.
The search identified 15,015 abstracts, from which 325 articles were included
for evaluation. Most studies performed moderately on CLAIM, averaging a score
of 28.9$\pm$7.5 out of 53, but poorly on FUTURE-AI, averaging 5.1$\pm$2.1 out
of 30. Imaging-AI tools for STBT remain at the proof-of-concept stage,
indicating significant room for improvement. Future efforts by AI developers
should focus on design (e.g. define unmet clinical need, intended clinical
setting and how AI would be integrated in clinical workflow), development (e.g.
build on previous work, explainability), evaluation (e.g. evaluating and
addressing biases, evaluating AI against best practices), and data
reproducibility and availability (making documented code and data publicly
available). Following these recommendations could improve clinical translation
of AI methods. | 23 pages, 6 figures, 6 supplementary figures | cs.AI | [
"cs.AI",
"cs.LG"
] |
||
Not All Samples Should Be Utilized Equally: Towards Understanding and
Improving Dataset Distillation | http://arxiv.org/abs/2408.12483v1 | http://arxiv.org/abs/2408.12483v1 | http://arxiv.org/pdf/2408.12483v1 | 2024-08-22 | 2024-08-22 | [
"Shaobo Wang",
"Yantai Yang",
"Qilong Wang",
"Kaixin Li",
"Linfeng Zhang",
"Junchi Yan"
] | [
"",
"",
"",
"",
"",
""
] | Dataset Distillation (DD) aims to synthesize a small dataset capable of
performing comparably to the original dataset. Despite the success of numerous
DD methods, theoretical exploration of this area remains unaddressed. In this
paper, we take an initial step towards understanding various matching-based DD
methods from the perspective of sample difficulty. We begin by empirically
examining sample difficulty, measured by gradient norm, and observe that
different matching-based methods roughly correspond to specific difficulty
tendencies. We then extend the neural scaling laws of data pruning to DD to
theoretically explain these matching-based methods. Our findings suggest that
prioritizing the synthesis of easier samples from the original dataset can
enhance the quality of distilled datasets, especially in low IPC
(image-per-class) settings. Based on our empirical observations and theoretical
analysis, we introduce the Sample Difficulty Correction (SDC) approach,
designed to predominantly generate easier samples to achieve higher dataset
quality. Our SDC can be seamlessly integrated into existing methods as a plugin
with minimal code adjustments. Experimental results demonstrate that adding SDC
generates higher-quality distilled datasets across 7 distillation methods and 6
datasets. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
Predicting Solar Energy Generation with Machine Learning based on AQI
and Weather Features | http://arxiv.org/abs/2408.12476v2 | http://arxiv.org/abs/2408.12476v2 | http://arxiv.org/pdf/2408.12476v2 | 2024-08-22 | 2024-08-23 | [
"Arjun Shah",
"Varun Viswanath",
"Kashish Gandhi",
"Dr. Nilesh Madhukar Patil"
] | [
"",
"",
"",
""
] | This paper addresses the pressing need for an accurate solar energy
prediction model, which is crucial for efficient grid integration. We explore
the influence of the Air Quality Index and weather features on solar energy
generation, employing advanced Machine Learning and Deep Learning techniques.
Our methodology uses time series modeling and makes novel use of power
transform normalization and zero-inflated modeling. Various Machine Learning
algorithms and Conv2D Long Short-Term Memory model based Deep Learning models
are applied to these transformations for precise predictions. Results
underscore the effectiveness of our approach, demonstrating enhanced prediction
accuracy with Air Quality Index and weather features. We achieved a 0.9691
$R^2$ Score, 0.18 MAE, 0.10 RMSE with Conv2D Long Short-Term Memory model,
showcasing the power transform technique's innovation in enhancing time series
forecasting for solar energy generation. Such results help our research
contribute valuable insights to the synergy between Air Quality Index, weather
features, and Deep Learning techniques for solar energy prediction. | 10 pages, 11 figures | 10.21203/rs.3.rs-3178713/v1 | cs.LG | [
"cs.LG",
"cs.AI"
] |