instruction | input | output
---|---|---|
What field is the article from? | Title: PBWR: Parametric Building Wireframe Reconstruction from Aerial LiDAR Point Clouds
Abstract: In this paper, we present an end-to-end 3D building wireframe reconstruction
method to regress edges directly from aerial LiDAR point clouds. Our method,
named Parametric Building Wireframe Reconstruction (PBWR), takes aerial LiDAR
point clouds and initial edge entities as input, and fully uses the self-attention
mechanism of transformers to regress edge parameters without any intermediate
steps such as corner prediction. We propose an edge non-maximum suppression
(E-NMS) module based on edge similarity to remove redundant edges. Additionally,
a dedicated edge loss function is utilized to guide PBWR in regressing
edge parameters, since a simple edge distance loss is not suitable. In our
experiments, we demonstrate state-of-the-art results on the Building3D dataset,
achieving an improvement of approximately 36% in entry-level dataset edge
accuracy and around 42% improvement in the Tallinn dataset. | Computer Vision |
What field is the article from? | Title: Removing NSFW Concepts from Vision-and-Language Models for Text-to-Image Retrieval and Generation
Abstract: Vision-and-Language models such as CLIP have demonstrated remarkable
effectiveness across a wide range of tasks. However, these models are typically
trained on web-scale data, which can introduce inappropriate content and lead
to the development of unsafe and biased behavior. This, in turn, hampers their
applicability in sensitive and trustworthy contexts and could raise significant
concern in their adoption. To overcome these limitations, we introduce a
methodology to make Vision-and-Language models safer by removing their
sensitivity to not-safe-for-work concepts. We show how this can be done by
distilling from a large language model which converts between safe and unsafe
sentences and which is fine-tuned starting from just 100 manually-curated
pairs. We conduct extensive experiments on the resulting embedding space for
both retrieval and text-to-image generation, where we show that our model can
also be properly employed with pre-trained image generators. Our source code
and trained models are available at: https://github.com/aimagelab/safe-clip. | Computer Vision |
What field is the article from? | Title: Confidant: Customizing Transformer-based LLMs via Collaborative Edge Training
Abstract: Transformer-based large language models (LLMs) have demonstrated impressive
capabilities in a variety of natural language processing (NLP) tasks.
Nonetheless, it is challenging to deploy and fine-tune LLMs on mobile edge
devices with limited computing, memory, and energy budgets. In this paper, we
propose Confidant, a multi-backend collaborative training framework for
customizing state-of-the-art LLMs on commodity mobile devices like smartphones.
Confidant partitions an LLM into several sub-models so that each fits into a
mobile device's memory. A pipeline parallel training mechanism is further
developed to ensure fast and efficient distributed training. In addition, we
propose a novel backend scheduler to allocate different attention heads to
heterogeneous compute hardware, including mobile CPU and GPUs, to maximize the
compute resource utilization on each edge device. Our preliminary experimental
results show that Confidant achieves up to 45.3% memory reduction and 8.03x
inference speedup in practical settings. | Machine Learning |
What field is the article from? | Title: VERVE: Template-based ReflectiVE Rewriting for MotiVational IntErviewing
Abstract: Reflective listening is a fundamental skill that counselors must acquire to
achieve proficiency in motivational interviewing (MI). It involves responding
in a manner that acknowledges and explores the meaning of what the client has
expressed in the conversation. In this work, we introduce the task of
counseling response rewriting, which transforms non-reflective statements into
reflective responses. We introduce VERVE, a template-based rewriting system
with paraphrase-augmented training and adaptive template updating. VERVE first
creates a template by identifying and filtering out tokens that are not
relevant to reflections and constructs a reflective response using the
template. Paraphrase-augmented training allows the model to learn less-strict
fillings of masked spans, and adaptive template updating helps discover
effective templates for rewriting without significantly removing the original
content. Using both automatic and human evaluations, we compare our method
against text rewriting baselines and show that our framework is effective in
turning non-reflective statements into more reflective responses while
achieving a good content preservation-reflection style trade-off. | Computational Linguistics |
What field is the article from? | Title: Learning Uniform Clusters on Hypersphere for Deep Graph-level Clustering
Abstract: Graph clustering has been popularly studied in recent years. However, most
existing graph clustering methods focus on node-level clustering, i.e.,
grouping nodes in a single graph into clusters. In contrast, graph-level
clustering, i.e., grouping multiple graphs into clusters, remains largely
unexplored. Graph-level clustering is critical in a variety of real-world
applications, such as property prediction of molecules and community
analysis in social networks. However, graph-level clustering is challenging due
to the insufficient discriminability of graph-level representations, which
makes deep clustering more likely to obtain degenerate solutions (cluster
collapse). To address this issue, we propose a novel deep graph-level
clustering method called Uniform Deep Graph Clustering (UDGC). UDGC assigns
instances evenly to different clusters and then scatters those clusters on the
unit hypersphere, leading to a more uniform cluster-level distribution and
milder cluster collapse. Specifically, we first propose
Augmentation-Consensus Optimal Transport (ACOT) for generating uniformly
distributed and reliable pseudo labels for partitioning clusters. We then adopt
contrastive learning to scatter those clusters. In addition, we propose Center
Alignment Optimal Transport (CAOT) to guide the model toward better
parameters, further improving clustering performance. Our empirical study
on eight well-known datasets demonstrates that UDGC significantly outperforms
the state-of-the-art models. | Machine Learning |
What field is the article from? | Title: AnyHome: Open-Vocabulary Generation of Structured and Textured 3D Homes
Abstract: We introduce AnyHome, a framework that translates open-vocabulary
descriptions, ranging from simple labels to elaborate paragraphs, into
well-structured and textured 3D indoor scenes at a house-scale. Inspired by
cognition theories, AnyHome employs an amodal structured representation to
capture 3D spatial cues from textual narratives and then uses egocentric
inpainting to enrich these scenes. To this end, we begin by using specially
designed template prompts for Large Language Models (LLMs), which enable
precise control over the textual input. We then utilize intermediate
representations to maintain the spatial structure's consistency, ensuring that
the 3D scenes align closely with the textual description. Then, we apply a
Score Distillation Sampling process to refine the placement of objects. Lastly,
an egocentric inpainting process is incorporated to enhance the realism and
appearance of the scenes. AnyHome stands out due to its hierarchical structured
representation combined with the versatility of open-vocabulary text
interpretation. This allows for extensive customization of indoor scenes at
various levels of granularity. We demonstrate that AnyHome can reliably
generate a range of diverse indoor scenes, characterized by their detailed
spatial structures and textures, all corresponding to the free-form textual
inputs. | Computer Vision |
What field is the article from? | Title: Towards General-Purpose Speech Abilities for Large Language Models Using Unpaired Data
Abstract: In this work, we extend the instruction-tuned Llama-2 model with end-to-end
general-purpose speech processing and reasoning abilities while maintaining the
wide range of LLM capabilities, without using any carefully curated paired
data. The proposed model can utilize audio prompts as a replacement for text
and sustain a conversation. Such a model also has extended cross-modal
capabilities such as being able to perform speech question answering, speech
translation, and audio summarization amongst many other closed and open-domain
tasks. This is unlike prior approaches in speech, in which LLMs are extended to
handle audio for a limited number of pre-designated tasks. Experiments show
that our end-to-end approach is on par with or outperforms a cascaded system
(speech recognizer + LLM) in terms of modeling the response to a prompt.
Furthermore, unlike a cascade, our approach shows the ability to interchange
text and audio modalities and utilize the prior context in a conversation to
provide better results. | Computational Linguistics |
What field is the article from? | Title: SPLAIN: Augmenting Cybersecurity Warnings with Reasons and Data
Abstract: Effective cyber threat recognition and prevention demand comprehensible
forecasting systems, as prior approaches commonly offer limited and,
ultimately, unconvincing information. We introduce Simplified Plaintext
Language (SPLAIN), a natural language generator that converts warning data into
user-friendly cyber threat explanations. SPLAIN is designed to generate clear,
actionable outputs, incorporating hierarchically organized explanatory details
about input data and system functionality. Given the inputs of individual
sensor-induced forecasting signals and an overall warning from a fusion module,
SPLAIN queries each signal for information on contributing sensors and data
signals. This collected data is processed into a coherent English explanation,
encompassing forecasting, sensing, and data elements for user review. SPLAIN's
template-based approach ensures consistent warning structure and vocabulary.
SPLAIN's hierarchical output structure allows each threat and its components to
be expanded to reveal underlying explanations on demand. Our conclusions
emphasize the need for designers to specify the "how" and "why" behind cyber
warnings, advocate for simple structured templates in generating consistent
explanations, and recognize that direct causal links in Machine Learning
approaches may not always be identifiable, requiring some explanations to focus
on general methodologies, such as model and training data. | Computational Linguistics |
What field is the article from? | Title: Verified Compositional Neuro-Symbolic Control for Stochastic Systems with Temporal Logic Tasks
Abstract: Several methods have been proposed recently to learn neural network (NN)
controllers for autonomous agents, with unknown and stochastic dynamics, tasked
with complex missions captured by Linear Temporal Logic (LTL). Due to the
sample-inefficiency of the majority of these works, compositional learning
methods have been proposed that decompose the LTL specification into smaller
sub-tasks. Separate controllers are then learned and composed to satisfy the
original task. A key challenge within these approaches is that they often lack
safety guarantees or the provided guarantees are impractical. This paper aims
to address this challenge. Particularly, we consider autonomous systems with
unknown and stochastic dynamics and LTL-encoded tasks. We assume that the
system is equipped with a finite set of base skills modeled by trained NN
feedback controllers. Our goal is to check if there exists a temporal
composition of the trained NN controllers - and if so, to compute it - that
will yield a composite system behavior that satisfies the assigned LTL task
with probability one. We propose a new approach that relies on a novel
integration of automata theory and data-driven reachability analysis tools for
NN-controlled stochastic systems. The resulting neuro-symbolic controller
allows the agent to generate safe behaviors for unseen complex temporal logic
tasks in a zero-shot fashion by leveraging its base skills. We show correctness
of the proposed method and we provide conditions under which it is complete. To
the best of our knowledge, this is the first work that designs verified
temporal compositions of NN controllers for unknown and stochastic systems.
Finally, we provide extensive numerical simulations and hardware experiments on
robot navigation tasks to demonstrate the proposed method. | Robotics |
What field is the article from? | Title: Othello is Solved
Abstract: The game of Othello is one of the world's most complex and popular games that
has yet to be computationally solved. Othello has roughly ten octodecillion (10
to the 58th power) possible game records and ten octillion (10 to the 28th
power) possible game positions. The challenge of solving Othello, determining
the outcome of a game with no mistake made by either player, has long been a
grand challenge in computer science. This paper announces a significant
milestone: Othello is now solved. It is computationally proved that perfect
play by both players leads to a draw. Strong Othello software has long been
built using heuristically designed search techniques. Solving a game provides a
solution that enables the software to play the game perfectly. | Artificial Intelligence |
What field is the article from? | Title: A Novel Energy based Model Mechanism for Multi-modal Aspect-Based Sentiment Analysis
Abstract: Multi-modal aspect-based sentiment analysis (MABSA) has recently attracted
increasing attention. The span-based extraction methods, such as FSUIE,
demonstrate strong performance in sentiment analysis due to their joint
modeling of input sequences and target labels. However, previous methods still
have certain limitations: (i) They ignore the difference in the focus of visual
information between different analysis targets (aspect or sentiment). (ii)
Combining features from uni-modal encoders directly may not be sufficient to
eliminate the modal gap and can cause difficulties in capturing the image-text
pairwise relevance. (iii) Existing span-based methods for MABSA ignore the
pairwise relevance of target span boundaries. To tackle these limitations, we
propose a novel framework called DQPSA for multi-modal sentiment analysis.
Specifically, our model contains a Prompt as Dual Query (PDQ) module that uses
the prompt as both a visual query and a language query to extract prompt-aware
visual information and strengthen the pairwise relevance between visual
information and the analysis target. Additionally, we introduce an Energy-based
Pairwise Expert (EPE) module that models the boundary pairing of the analysis
target from the perspective of an Energy-based Model. This expert predicts
aspect or sentiment spans based on pairwise stability. Experiments on three
widely used benchmarks demonstrate that DQPSA outperforms previous approaches
and achieves a new state-of-the-art performance. | Artificial Intelligence |
What field is the article from? | Title: Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval
Abstract: We present Multi-EuP, a new multilingual benchmark dataset comprising
22K multilingual documents collected from the European Parliament, spanning 24
languages. This dataset is designed to investigate fairness in a multilingual
information retrieval (IR) setting by analyzing both language and demographic
bias in a ranking context. It boasts an authentic multilingual corpus,
featuring topics translated into all 24 languages, as well as cross-lingual
relevance judgments. Furthermore, it offers rich demographic information
associated with its documents, facilitating the study of demographic bias. We
report the effectiveness of Multi-EuP for benchmarking both monolingual and
multilingual IR. We also conduct a preliminary experiment on language bias
caused by the choice of tokenization strategy. | Computational Linguistics |
What field is the article from? | Title: Communication Cost Reduction for Subgraph Counting under Local Differential Privacy via Hash Functions
Abstract: We suggest the use of hash functions to cut down the communication costs when
counting subgraphs under edge local differential privacy. While various
algorithms exist for computing graph statistics, including subgraph counts,
under edge local differential privacy, many suffer from high
communication costs, making them less efficient for large graphs. Though data
compression is a typical approach in differential privacy, its application in
local differential privacy requires a form of compression that every node can
reproduce. In our study, we introduce linear congruence hashing. With a
sampling rate of $s$, our method can cut communication costs by a factor of
$s^2$, albeit at the cost of increasing variance in the published graph
statistic by a factor of $s$. The experimental results indicate that, when
matched for communication costs, our method achieves a reduction in the
$\ell_2$-error for triangle counts by up to 1000 times compared to the
performance of leading algorithms. | Cryptography and Security |
What field is the article from? | Title: ACL Anthology Helper: A Tool to Retrieve and Manage Literature from ACL Anthology
Abstract: The ACL Anthology is an online repository that serves as a comprehensive
collection of publications in the field of natural language processing (NLP)
and computational linguistics (CL). This paper presents a tool called "ACL
Anthology Helper". It automates the process of parsing and downloading papers
along with their meta-information, which are then stored in a local MySQL
database. This allows for efficient management of the local papers using a wide
range of operations, including "where," "group," "order," and more. By
providing over 20 operations, this tool significantly enhances the retrieval of
literature based on specific conditions. Notably, this tool has been
successfully utilised in writing a survey paper (Tang et al., 2022a). By
introducing the ACL Anthology Helper, we aim to enhance researchers' ability to
effectively access and organise literature from the ACL Anthology. This tool
offers a convenient solution for researchers seeking to explore the ACL
Anthology's vast collection of publications while allowing for more targeted
and efficient literature retrieval. | Computational Linguistics |
What field is the article from? | Title: Proposal-Contrastive Pretraining for Object Detection from Fewer Data
Abstract: The use of pretrained deep neural networks represents an attractive way to
achieve strong results with few data available. When specialized in dense
problems such as object detection, learning local rather than global
information in images has proven to be more efficient. However, for
unsupervised pretraining, the popular contrastive learning requires a large
batch size and, therefore, a lot of resources. To address this problem, we turn
to transformer-based object detectors, which have recently gained traction in
the community thanks to their good performance and their particular ability to
generate many diverse object proposals.
In this work, we present Proposal Selection Contrast (ProSeCo), a novel
unsupervised overall pretraining approach that leverages this property. ProSeCo
uses the large number of object proposals generated by the detector for
contrastive learning, which allows the use of a smaller batch size, combined
with object-level features to learn local information in the images. To improve
the effectiveness of the contrastive loss, we introduce the object location
information in the selection of positive examples to take into account multiple
overlapping object proposals. When reusing a pretrained backbone, we advocate
for consistency in learning local information between the backbone and the
detection head.
We show that our method outperforms the state of the art in unsupervised
pretraining for object detection on standard and novel benchmarks when learning
with fewer data. | Computer Vision |
What field is the article from? | Title: AI Use in Manuscript Preparation for Academic Journals
Abstract: The emergent abilities of Large Language Models (LLMs), which power tools
like ChatGPT and Bard, have produced both excitement and worry about how AI
will impact academic writing. In response to rising concerns about AI use,
authors of academic publications may decide to voluntarily disclose any AI
tools they use to revise their manuscripts, and journals and conferences could
begin mandating disclosure and/or turn to using detection services, as many
teachers have done with student writing in class settings. Given these looming
possibilities, we investigate whether academics view it as necessary to report
AI use in manuscript preparation and how detectors react to the use of AI in
academic writing. | Computers and Society |
What field is the article from? | Title: How much can change in a year? Revisiting Evaluation in Multi-Agent Reinforcement Learning
Abstract: Establishing sound experimental standards and rigour is important in any
growing field of research. Deep Multi-Agent Reinforcement Learning (MARL) is
one such nascent field. Although exciting progress has been made, MARL has
recently come under scrutiny for replicability issues and a lack of
standardised evaluation methodology, specifically in the cooperative setting.
Although protocols have been proposed to help alleviate the issue, it remains
important to actively monitor the health of the field. In this work, we extend
a previously published database of evaluation methodology containing
meta-data on MARL publications from top-rated conferences, and compare the
findings extracted from this updated database to the trends identified in the
original work. Our analysis shows that many of the worrying trends in performance
reporting remain: the omission of uncertainty quantification, the failure to
report all relevant evaluation details, and a narrowing of the classes of
algorithms being developed. Promisingly, we do observe a trend towards more
difficult scenarios in SMAC-v1, which, if continued into SMAC-v2, will
encourage novel algorithmic development. Our data indicate that replicability needs to be
approached more proactively by the MARL community to ensure trust in the field
as we move towards exciting new frontiers. | Artificial Intelligence |
What field is the article from? | Title: Predictive Chemistry Augmented with Text Retrieval
Abstract: This paper focuses on using natural language descriptions to enhance
predictive models in the chemistry field. Conventionally, chemoinformatics
models are trained with extensive structured data manually extracted from the
literature. In this paper, we introduce TextReact, a novel method that directly
augments predictive chemistry with texts retrieved from the literature.
TextReact retrieves text descriptions relevant for a given chemical reaction,
and then aligns them with the molecular representation of the reaction. This
alignment is enhanced via an auxiliary masked LM objective incorporated in the
predictor training. We empirically validate the framework on two chemistry
tasks: reaction condition recommendation and one-step retrosynthesis. By
leveraging text retrieval, TextReact significantly outperforms state-of-the-art
chemoinformatics models trained solely on molecular data. | Computational Linguistics |
What field is the article from? | Title: Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous
Abstract: Research on deep learning techniques for autonomous spacecraft
relative navigation has grown continuously in recent years.
Adopting these techniques offers enhanced performance. However, such approaches
also raise heightened concerns about the trustworthiness and security of
deep learning methods, owing to their susceptibility to adversarial
attacks. In this work, we propose a novel approach for adversarial attack
detection for deep neural network-based relative pose estimation schemes based
on the explainability concept. For an orbital rendezvous scenario, we develop an
innovative relative pose estimation technique built on our proposed
Convolutional Neural Network (CNN), which takes an image from the chaser's
onboard camera and accurately outputs the target's relative position and
rotation. We seamlessly perturb the input images using adversarial attacks
generated by the Fast Gradient Sign Method (FGSM). The adversarial attack
detector is then built on a Long Short-Term Memory (LSTM) network, which
takes an explainability measure, namely SHapley values, from the CNN-based pose
estimator and flags adversarial attacks when they occur.
Simulation results show that the proposed adversarial attack detector achieves
a detection accuracy of 99.21%. Both the deep relative pose estimator and the
adversarial attack detector are then tested on real data captured from our
laboratory-designed setup. The experimental results demonstrate that the
proposed adversarial attack detector achieves an average detection accuracy of
96.29%. | Computer Vision |
What field is the article from? | Title: Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery
Abstract: This paper explores post-disaster analytics using multimodal deep learning
models trained with curriculum learning method. Studying post-disaster
analytics is important as it plays a crucial role in mitigating the impact of
disasters by providing timely and accurate insights into the extent of damage
and the allocation of resources. We propose a curriculum learning strategy to
enhance the performance of multimodal deep learning models. Curriculum learning
emulates the progressive learning sequence in human education by training deep
learning models on increasingly complex data. Our primary objective is to
develop a curriculum-trained multimodal deep learning model, with a particular
focus on visual question answering (VQA), capable of jointly processing image
and text data, in conjunction with semantic segmentation for disaster analytics
using the FloodNet dataset
(https://github.com/BinaLab/FloodNet-Challenge-EARTHVISION2021).
To achieve this, a U-Net model is used for semantic segmentation and
image encoding. A custom-built text classifier is used for visual question
answering.
answering. Existing curriculum learning methods rely on manually defined
difficulty functions. We introduce a novel curriculum learning approach termed
Dynamic Task and Weight Prioritization (DATWEP), which leverages a
gradient-based method to automatically decide task difficulty during curriculum
learning training, thereby eliminating the need for explicit difficulty
computation. The integration of DATWEP into our multimodal model shows
improvement on VQA performance. Source code is available at
https://github.com/fualsan/DATWEP. | Computer Vision |
What field is the article from? | Title: VT-Former: A Transformer-based Vehicle Trajectory Prediction Approach For Intelligent Highway Transportation Systems
Abstract: Enhancing roadway safety and traffic management has become an essential focus
area for a broad range of modern cyber-physical systems and intelligent
transportation systems. Vehicle Trajectory Prediction is a pivotal element
within numerous applications for highway and road safety. These applications
encompass a wide range of use cases, spanning from traffic management and
accident prevention to enhancing work-zone safety and optimizing energy
conservation. The ability to implement intelligent management in this context
has been greatly advanced by the developments in the field of Artificial
Intelligence (AI), alongside the increasing deployment of surveillance cameras
across road networks. In this paper, we introduce a novel transformer-based
approach for vehicle trajectory prediction for highway safety and surveillance,
denoted as VT-Former. In addition to utilizing transformers to capture
long-range temporal patterns, a new Graph Attentive Tokenization (GAT) module
has been proposed to capture intricate social interactions among vehicles.
Combining these two core components culminates in a precise approach for
vehicle trajectory prediction. Our study on three benchmark datasets with three
different viewpoints demonstrates the State-of-The-Art (SoTA) performance of
VT-Former in vehicle trajectory prediction and its generalizability and
robustness. We also evaluate VT-Former's efficiency on embedded boards and
explore its potential for vehicle anomaly detection as a sample application,
showcasing its broad applicability. | Computer Vision |
What field is the article from? | Title: A Simple and Scalable Representation for Graph Generation
Abstract: Recently, there has been a surge of interest in employing neural networks for
graph generation, a fundamental statistical learning problem with critical
applications like molecule design and community analysis. However, most
approaches encounter significant limitations when generating large-scale
graphs. This is due to their requirement to output the full adjacency matrices
whose size grows quadratically with the number of nodes. In response to this
challenge, we introduce a new, simple, and scalable graph representation named
gap-encoded edge list (GEEL), whose representation size scales with the number
of edges rather than the number of nodes. In addition, GEEL significantly reduces the
vocabulary size by incorporating the gap encoding and bandwidth restriction
schemes. GEEL can be autoregressively generated with the incorporation of node
positional encoding, and we further extend GEEL to deal with attributed graphs
by designing a new grammar. Our findings reveal that the adoption of this
compact representation not only enhances scalability but also bolsters
performance by simplifying the graph generation process. We conduct a
comprehensive evaluation across ten non-attributed and two molecular graph
generation tasks, demonstrating the effectiveness of GEEL. | Machine Learning |
What field is the article from? | Title: Incidental Polysemanticity
Abstract: Polysemantic neurons (neurons that activate for a set of unrelated features)
have been seen as a significant obstacle towards interpretability of
task-optimized deep networks, with implications for AI safety. The classic
origin story of polysemanticity is that the data contains more "features" than
neurons, such that learning to perform a task forces the network to co-allocate
multiple unrelated features to the same neuron, endangering our ability to
understand the network's internal processing. In this work, we present a second
and non-mutually exclusive origin story of polysemanticity. We show that
polysemanticity can arise incidentally, even when there are ample neurons to
represent all features in the data, using a combination of theory and
experiments. This second type of polysemanticity occurs because random
initialization can, by chance alone, initially assign multiple features to the
same neuron, and the training dynamics then strengthen such overlap. Due to its
origin, we term this *incidental polysemanticity*. | Machine Learning |
What field is the article from? | Title: Evaluating Gender Bias in the Translation of Gender-Neutral Languages into English
Abstract: Machine Translation (MT) continues to improve in quality and adoption, yet
the inadvertent perpetuation of gender bias remains a significant concern.
Despite numerous studies into gender bias in translations from gender-neutral
languages such as Turkish into more strongly gendered languages like English,
there are no benchmarks for evaluating this phenomenon or for assessing
mitigation strategies. To address this gap, we introduce GATE X-E, an extension
to the GATE (Rarrick et al., 2023) corpus, that consists of human translations
from Turkish, Hungarian, Finnish, and Persian into English. Each translation is
accompanied by feminine, masculine, and neutral variants for each possible
gender interpretation. The dataset, which contains between 1250 and 1850
instances for each of the four language pairs, features natural sentences with
a wide range of sentence lengths and domains, challenging translation rewriters
on various linguistic phenomena. Additionally, we present an English gender
rewriting solution built on GPT-3.5 Turbo and use GATE X-E to evaluate it. We
open source our contributions to encourage further research on gender
debiasing. | Computational Linguistics |
What field is the article from? | Title: ProTIP: Progressive Tool Retrieval Improves Planning
Abstract: Large language models (LLMs) are increasingly employed for complex multi-step
planning tasks, where the tool retrieval (TR) step is crucial for achieving
successful outcomes. Two prevalent approaches for TR are single-step retrieval,
which utilizes the complete query, and sequential retrieval using task
decomposition (TD), where a full query is segmented into discrete atomic
subtasks. While single-step retrieval lacks the flexibility to handle
"inter-tool dependency," the TD approach necessitates maintaining "subtask-tool
atomicity alignment," as the toolbox can evolve dynamically. To address these
limitations, we introduce the Progressive Tool retrieval to Improve Planning
(ProTIP) framework. ProTIP is a lightweight, contrastive learning-based
framework that implicitly performs TD without the explicit requirement of
subtask labels, while simultaneously maintaining subtask-tool atomicity. On the
ToolBench dataset, ProTIP outperforms the ChatGPT task decomposition-based
approach by a remarkable margin, achieving a 24% improvement in Recall@K=10 for
TR and a 41% enhancement in tool accuracy for plan generation. | Information Retrieval |
What field is the article from? | Title: PROFL: A Privacy-Preserving Federated Learning Method with Stringent Defense Against Poisoning Attacks
Abstract: Federated Learning (FL) faces two major issues: privacy leakage and poisoning
attacks, which may seriously undermine the reliability and security of the
system. Overcoming them simultaneously poses a great challenge. This is because
privacy protection policies prohibit access to users' local gradients to avoid
privacy leakage, while Byzantine-robust methods necessitate access to these
gradients to defend against poisoning attacks. To address these problems, we
propose a novel privacy-preserving Byzantine-robust FL framework PROFL. PROFL
is based on the two-trapdoor additional homomorphic encryption algorithm and
blinding techniques to ensure the data privacy of the entire FL process. During
the defense process, PROFL first utilizes the secure Multi-Krum algorithm to remove
malicious gradients at the user level. Then, according to the Pauta criterion,
we innovatively propose a statistic-based privacy-preserving defense algorithm
to eliminate outlier interference at the feature level and resist impersonation
poisoning attacks with stronger concealment. Detailed theoretical analysis
proves the security and efficiency of the proposed method. We conducted
extensive experiments on two benchmark datasets, and PROFL improved accuracy by
39% to 75% across different attack settings compared to similar
privacy-preserving robust methods, demonstrating its significant advantage in
robustness. | Cryptography and Security |
What field is the article from? | Title: LLM-State: Expandable State Representation for Long-horizon Task Planning in the Open World
Abstract: This work addresses the problem of long-horizon task planning with the Large
Language Model (LLM) in an open-world household environment. Existing works
fail to explicitly track key objects and attributes, leading to erroneous
decisions in long-horizon tasks, or rely on highly engineered state features
and feedback, which is not generalizable. We propose a novel, expandable state
representation that provides continuous expansion and updating of object
attributes from the LLM's inherent capabilities for context understanding and
historical action reasoning. Our proposed representation maintains a
comprehensive record of an object's attributes and changes, enabling robust
retrospective summary of the sequence of actions leading to the current state.
This allows enhanced context understanding for decision-making in task
planning. We validate our model through experiments across simulated and
real-world task planning scenarios, demonstrating significant improvements over
baseline methods in a variety of tasks requiring long-horizon state tracking
and reasoning. | Robotics |
What field is the article from? | Title: PETA: Evaluating the Impact of Protein Transfer Learning with Sub-word Tokenization on Downstream Applications
Abstract: Large protein language models are adept at capturing the underlying
evolutionary information in primary structures, offering significant practical
value for protein engineering. Compared to natural language models, protein
amino acid sequences have a smaller data volume and a limited combinatorial
space. Choosing an appropriate vocabulary size to optimize the pre-trained
model is a pivotal issue. Moreover, despite the wealth of benchmarks and
studies in the natural language community, there remains a lack of a
comprehensive benchmark for systematically evaluating protein language model
quality. Given these challenges, PETA trained language models with 14 different
vocabulary sizes under three tokenization methods. It conducted thousands of
tests on 33 diverse downstream datasets to assess the models' transfer learning
capabilities, incorporating two classification heads and three random seeds to
mitigate potential biases. Extensive experiments indicate that vocabulary sizes
between 50 and 200 optimize the model, whereas sizes exceeding 800
detrimentally affect the model's representational performance. Our code, model
weights and datasets are available at
https://github.com/ginnm/ProteinPretraining. | Computational Linguistics |
What field is the article from? | Title: Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching
Abstract: Mechanistic interpretability aims to understand model behaviors in terms of
specific, interpretable features, often hypothesized to manifest as
low-dimensional subspaces of activations. Specifically, recent studies have
explored subspace interventions (such as activation patching) as a way to
simultaneously manipulate model behavior and attribute the features behind it
to given subspaces.
In this work, we demonstrate that these two aims diverge, potentially leading
to an illusory sense of interpretability. Counterintuitively, even if a
subspace intervention makes the model's output behave as if the value of a
feature was changed, this effect may be achieved by activating a dormant
parallel pathway leveraging another subspace that is causally disconnected from
model outputs. We demonstrate this phenomenon in a distilled mathematical
example, in two real-world domains (the indirect object identification task and
factual recall), and present evidence for its prevalence in practice. In the
context of factual recall, we further show a link to rank-1 fact editing,
providing a mechanistic explanation for previous work observing an
inconsistency between fact editing performance and fact localization.
However, this does not imply that activation patching of subspaces is
intrinsically unfit for interpretability. To contextualize our findings, we
also show what a success case looks like in a task (indirect object
identification) where prior manual circuit analysis informs an understanding of
the location of a feature. We explore the additional evidence needed to argue
that a patched subspace is faithful. | Machine Learning |
What field is the article from? | Title: Diffusion-TTA: Test-time Adaptation of Discriminative Models via Generative Feedback
Abstract: The advancements in generative modeling, particularly the advent of diffusion
models, have sparked a fundamental question: how can these models be
effectively used for discriminative tasks? In this work, we find that
generative models can be great test-time adapters for discriminative models.
Our method, Diffusion-TTA, adapts pre-trained discriminative models such as
image classifiers, segmenters and depth predictors, to each unlabelled example
in the test set using generative feedback from a diffusion model. We achieve
this by modulating the conditioning of the diffusion model using the output of
the discriminative model. We then maximize the image likelihood objective by
backpropagating the gradients to the discriminative model's parameters. We show
Diffusion-TTA significantly enhances the accuracy of various large-scale
pre-trained discriminative models, such as, ImageNet classifiers, CLIP models,
image pixel labellers and image depth predictors. Diffusion-TTA outperforms
existing test-time adaptation methods, including TTT-MAE and TENT, and
particularly shines in online adaptation setups, where the discriminative model
is continually adapted to each example in the test set. We provide access to
code, results, and visualizations on our website:
https://diffusion-tta.github.io/. | Computer Vision |
What field is the article from? | Title: SeRO: Self-Supervised Reinforcement Learning for Recovery from Out-of-Distribution Situations
Abstract: Robotic agents trained using reinforcement learning have the problem of
taking unreliable actions in an out-of-distribution (OOD) state. Agents can
easily become OOD in real-world environments because it is almost impossible
for them to visit and learn the entire state space during training.
Unfortunately, unreliable actions do not ensure that agents perform their
original tasks successfully. Therefore, agents should be able to recognize
whether they are in OOD states and learn how to return to the learned state
distribution rather than continue to take unreliable actions. In this study, we
propose a novel method for retraining agents to recover from OOD situations in
a self-supervised manner when they fall into OOD states. Our in-depth
experimental results demonstrate that our method substantially improves the
agent's ability to recover from OOD situations in terms of sample efficiency
and restoration of the performance for the original tasks. Moreover, we show
that our method can retrain the agent to recover from OOD situations even when
in-distribution states are difficult to visit through exploration. | Machine Learning |
What field is the article from? | Title: FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts
Abstract: Ensuring the safety of artificial intelligence-generated content (AIGC) is a
longstanding topic in the artificial intelligence (AI) community, and the
safety concerns associated with Large Language Models (LLMs) have been widely
investigated. Recently, large vision-language models (VLMs) represent an
unprecedented revolution, as they are built upon LLMs but can incorporate
additional modalities (e.g., images). However, the safety of VLMs lacks
systematic evaluation, and there may be an overconfidence in the safety
guarantees provided by their underlying LLMs. In this paper, to demonstrate
that introducing additional modality modules leads to unforeseen AI safety
issues, we propose FigStep, a straightforward yet effective jailbreaking
algorithm against VLMs. Instead of feeding textual harmful instructions
directly, FigStep converts the harmful content into images through typography
to bypass the safety alignment within the textual module of the VLMs, inducing
VLMs to output unsafe responses that violate common AI safety policies. In our
evaluation, we manually review 46,500 model responses generated by 3 families
of the promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total
of 6 VLMs). The experimental results show that FigStep can achieve an average
attack success rate of 82.50% on 500 harmful queries in 10 topics. Moreover, we
demonstrate that the methodology of FigStep can even jailbreak GPT-4V, which
already leverages an OCR detector to filter harmful queries. Above all, our
work reveals that VLMs are vulnerable to jailbreaking attacks, which highlights
the necessity of novel safety alignments between visual and textual modalities. | Cryptography and Security |
What field is the article from? | Title: LLMaAA: Making Large Language Models as Active Annotators
Abstract: Prevalent supervised learning methods in natural language processing (NLP)
are notoriously data-hungry, which demand large amounts of high-quality
annotated data. In practice, acquiring such data is a costly endeavor.
Recently, the superior few-shot performance of large language models (LLMs) has
propelled the development of dataset generation, where the training data are
solely synthesized from LLMs. However, such an approach usually suffers from
low-quality issues, and requires orders of magnitude more labeled data to
achieve satisfactory performance. To fully exploit the potential of LLMs and
make use of massive unlabeled data, we propose LLMaAA, which takes LLMs as
annotators and puts them into an active learning loop to determine what to
annotate efficiently. To learn robustly with pseudo labels, we optimize both
the annotation and training processes: (1) we draw k-NN examples from a small
demonstration pool as in-context examples, and (2) we adopt the example
reweighting technique to assign training samples with learnable weights.
Compared with previous approaches, LLMaAA features both efficiency and
reliability. We conduct experiments and analysis on two classic NLP tasks,
named entity recognition and relation extraction. With LLMaAA, task-specific
models trained from LLM-generated labels can outperform the teacher within only
hundreds of annotated examples, which is much more cost-effective than other
baselines. | Computational Linguistics |
What field is the article from? | Title: Teaching Robots to Build Simulations of Themselves
Abstract: Simulation enables robots to plan and estimate the outcomes of prospective
actions without the need to physically execute them. We introduce a
self-supervised learning framework that enables robots to model and predict their
morphology, kinematics and motor control using only brief raw video data,
eliminating the need for extensive real-world data collection and kinematic
priors. By observing their own movements, akin to humans watching their
reflection in a mirror, robots learn an ability to simulate themselves and
predict their spatial motion for various tasks. Our results demonstrate that
this self-learned simulation not only enables accurate motion planning but also
allows the robot to detect abnormalities and recover from damage. | Robotics |
What field is the article from? | Title: Can We Utilize Pre-trained Language Models within Causal Discovery Algorithms?
Abstract: Scaling laws have brought Pre-trained Language Models (PLMs) into the field
of causal reasoning. Causal reasoning in PLMs relies solely on text-based
descriptions, in contrast to causal discovery which aims to determine the
causal relationships between variables utilizing data. Recent research has
explored a method that mimics causal discovery by aggregating the outcomes of
repeated causal reasoning elicited through specifically designed prompts. It
highlights the usefulness of PLMs in discovering cause and
effect, which is often limited by a lack of data, especially when dealing with
multiple variables. Conversely, PLMs do not analyze data and are highly
dependent on prompt design, which poses a crucial limitation to using PLMs
directly for causal discovery. Accordingly, PLM-based causal reasoning depends
heavily on prompt design and carries the risk of overconfidence and false
predictions in determining causal
relationships. In this paper, we empirically demonstrate the aforementioned
limitations of PLM-based causal reasoning through experiments on
physics-inspired synthetic data. Then, we propose a new framework that
integrates prior knowledge obtained from PLM with a causal discovery algorithm.
This is accomplished by initializing an adjacency matrix for causal discovery
and incorporating regularization using prior knowledge. Our proposed framework
not only demonstrates improved performance through the integration of PLM and
causal discovery but also suggests how to leverage PLM-extracted prior
knowledge with existing causal discovery algorithms. | Artificial Intelligence |
What field is the article from? | Title: A Safer Vision-based Autonomous Planning System for Quadrotor UAVs with Dynamic Obstacle Trajectory Prediction and Its Application with LLMs
Abstract: For intelligent quadcopter UAVs, a robust and reliable autonomous planning
system is crucial. Most current trajectory planning methods for UAVs are
suitable for static environments but struggle to handle dynamic obstacles,
which can pose challenges and even dangers to flight. To address this issue,
this paper proposes a vision-based planning system that combines tracking and
trajectory prediction of dynamic obstacles to achieve efficient and reliable
autonomous flight. We use a lightweight object detection algorithm to identify
dynamic obstacles and then use Kalman Filtering to track and estimate their
motion states. During the planning phase, we not only consider static obstacles
but also account for the potential movements of dynamic obstacles. For
trajectory generation, we use a B-spline-based trajectory search algorithm,
which is further optimized with various constraints to enhance safety and
alignment with the UAV's motion characteristics. We conduct experiments in both
simulation and real-world environments, and the results indicate that our
approach can successfully detect and avoid obstacles in dynamic environments in
real-time, offering greater reliability compared to existing approaches.
Furthermore, with the advancements in Natural Language Processing (NLP)
technology demonstrating exceptional zero-shot generalization capabilities,
more user-friendly human-machine interactions have become feasible, and this
study also explores the integration of autonomous planning systems with Large
Language Models (LLMs). | Robotics |
What field is the article from? | Title: Bespoke Solvers for Generative Flow Models
Abstract: Diffusion or flow-based models are powerful generative paradigms that are
notoriously hard to sample as samples are defined as solutions to
high-dimensional Ordinary or Stochastic Differential Equations (ODEs/SDEs)
which require a large Number of Function Evaluations (NFE) to approximate well.
Existing methods to alleviate the costly sampling process include model
distillation and designing dedicated ODE solvers. However, distillation is
costly to train and sometimes can deteriorate quality, while dedicated solvers
still require relatively large NFE to produce high quality samples. In this
paper we introduce "Bespoke solvers", a novel framework for constructing custom
ODE solvers tailored to the ODE of a given pre-trained flow model. Our approach
optimizes an order consistent and parameter-efficient solver (e.g., with 80
learnable parameters), is trained for roughly 1% of the GPU time required for
training the pre-trained model, and significantly improves approximation and
generation quality compared to dedicated solvers. For example, a Bespoke solver
for a CIFAR10 model produces samples with Fr\'echet Inception Distance (FID) of
2.73 with 10 NFE, and gets to 1% of the Ground Truth (GT) FID (2.59) for this
model with only 20 NFE. On the more challenging ImageNet-64$\times$64, Bespoke
samples at 2.2 FID with 10 NFE, and gets within 2% of GT FID (1.71) with 20
NFE. | Machine Learning |
What field is the article from? | Title: Large Language Models and Explainable Law: a Hybrid Methodology
Abstract: The paper advocates for LLMs to enhance the accessibility, usage and
explainability of rule-based legal systems, contributing to a democratic and
stakeholder-oriented view of legal technology. A methodology is developed to
explore the potential use of LLMs for translating the explanations produced by
rule-based systems, from high-level programming languages to natural language,
allowing all users a fast, clear, and accessible interaction with such
technologies. The study continues by building upon these explanations to
empower laypeople with the ability to execute complex juridical tasks on their
own, using a Chain of Prompts for the autonomous legal comparison of different
rule-based inferences, applied to the same factual case. | Artificial Intelligence |
What field is the article from? | Title: Towards Exploratory Reformulation of Constraint Models
Abstract: It is well established that formulating an effective constraint model of a
problem of interest is crucial to the efficiency with which it can subsequently
be solved. Following from the observation that it is difficult, if not
impossible, to know a priori which of a set of candidate models will perform
best in practice, we envisage a system that explores the space of models
through a process of reformulation from an initial model, guided by performance
on a set of training instances from the problem class under consideration. We
plan to situate this system in a refinement-based approach, where a user writes
a constraint specification describing a problem above the level of abstraction
at which many modelling decisions are made. In this position paper we set out
our plan for an exploratory reformulation system, and discuss progress made so
far. | Artificial Intelligence |
What field is the article from? | Title: The Internet of Responsibilities-Connecting Human Responsibilities using Big Data and Blockchain
Abstract: Accountability in the workplace is critically important and remains a
challenging problem, especially with respect to workplace safety management. In
this paper, we introduce a novel notion, the Internet of Responsibilities, for
accountability management. Our method sorts through the list of
responsibilities with respect to hazardous positions. The positions are
interconnected using directed acyclic graphs (DAGs) indicating the hierarchy of
responsibilities in the organization. In addition, the system detects and
collects responsibilities, and represents risk areas in terms of the positions
of the responsibility nodes. Finally, an automatic reminder and assignment
system is used to enforce a strict responsibility control without human
intervention. Using blockchain technology, we further extend our system with
the capability to store, recover and encrypt responsibility data. We show that
through the application of the Internet of Responsibility network model driven
by Big Data, enterprise and government agencies can attain a highly secured and
safe workplace. Therefore, our model offers a combination of interconnected
responsibilities, accountability, monitoring, and safety which is crucial for
the protection of employees and the success of organizations. | Computers and Society |
What field is the article from? | Title: Colour versus Shape Goal Misgeneralization in Reinforcement Learning: A Case Study
Abstract: We explore colour versus shape goal misgeneralization originally demonstrated
by Di Langosco et al. (2022) in the Procgen Maze environment, where, given an
ambiguous choice, the agents seem to prefer generalization based on colour
rather than shape. After training over 1,000 agents in a simplified version of
the environment and evaluating them on over 10 million episodes, we conclude
that the behaviour can be attributed to the agents learning to detect the goal
object through a specific colour channel. This choice is arbitrary.
Additionally, we show how, due to underspecification, the preferences can
change when retraining the agents using exactly the same procedure except for
using a different random seed for the training run. Finally, we demonstrate the
existence of outliers in out-of-distribution behaviour based on training random
seed alone. | Machine Learning |
What field is the article from? | Title: Singular Value Penalization and Semantic Data Augmentation for Fully Test-Time Adaptation
Abstract: Fully test-time adaptation (FTTA) adapts a model that is trained on a source
domain to a target domain during the testing phase, where the two domains
follow different distributions and source data is unavailable during the
training phase. Existing methods usually adopt entropy minimization to reduce
the uncertainty of target prediction results, and improve the FTTA performance
accordingly. However, they fail to ensure the diversity in target prediction
results. Recent domain adaptation study has shown that maximizing the sum of
singular values of prediction results can simultaneously enhance their
confidence (discriminability) and diversity. However, during the training
phase, larger singular values usually take up a dominant position in loss
maximization. This results in the model being more inclined to enhance
discriminability for easily distinguishable classes, and the improvement in
diversity is insufficiently effective. Furthermore, the adaptation and
prediction in FTTA only use data from the current batch, which may lead to the
risk of overfitting. To address the aforementioned issues, we propose
maximizing the sum of singular values while minimizing their variance. This
enables the model's focus toward the smaller singular values, enhancing
discriminability between more challenging classes and effectively increasing
the diversity of prediction results. Moreover, we incorporate data from the
previous batch to realize semantic data augmentation for the current batch,
reducing the risk of overfitting. Extensive experiments on benchmark datasets
show our proposed approach outperforms some compared state-of-the-art FTTA
methods. | Artificial Intelligence |
What field is the article from? | Title: Building the Future of Responsible AI: A Reference Architecture for Designing Large Language Model based Agents
Abstract: Large language models (LLMs) have been widely recognised as transformative
artificial generative intelligence (AGI) technologies due to their capabilities
to understand and generate content, including plans with reasoning
capabilities. Foundation model based agents derive their autonomy from the
capabilities of foundation models, which enable them to autonomously break down
a given goal into a set of manageable tasks and orchestrate task execution to
meet the goal. Despite the huge efforts put into building foundation model
based autonomous agents, the architecture design of the agents has not yet been
systematically explored. Also, while there are significant benefits of using
autonomous agents for planning and execution, there are serious considerations
regarding responsible AI related software quality attributes, such as security
and accountability. Therefore, this paper presents a pattern-oriented reference
architecture that serves as architecture design guidance and enables
responsible-AI-by-design when designing foundation model based autonomous
agents. We evaluate the completeness and utility of the proposed reference
architecture by mapping it to the architecture of two real-world agents. | Artificial Intelligence |
What field is the article from? | Title: AI-enhanced Auto-correction of Programming Exercises: How Effective is GPT-3.5?
Abstract: Timely formative feedback is considered as one of the most important drivers
for effective learning. Delivering timely and individualized feedback is
particularly challenging in large classes in higher education. Recently Large
Language Models such as GPT-3 became available to the public that showed
promising results on various tasks such as code generation and code
explanation. This paper investigates the potential of AI in providing
personalized code correction and generating feedback. Based on existing student
submissions of two different real-world assignments, the correctness of the
AI-aided e-assessment as well as the characteristics such as fault
localization, correctness of hints, and code style suggestions of the generated
feedback are investigated. The results show that 73 % of the submissions were
correctly identified as either correct or incorrect. In 59 % of these cases,
GPT-3.5 also successfully generated effective and high-quality feedback.
Additionally, GPT-3.5 exhibited weaknesses in its evaluation, including
localization of errors that were not the actual errors, or even hallucinated
errors. Implications and potential new usage scenarios are discussed. | Computers and Society |
What field is the article from? | Title: Multi-dimensional data refining strategy for effective fine-tuning LLMs
Abstract: Data is a cornerstone for fine-tuning large language models, yet acquiring
suitable data remains challenging. These challenges encompass data scarcity,
linguistic diversity, and domain-specific content. This paper presents lessons
learned while crawling and refining data tailored for fine-tuning Vietnamese
language models. Crafting such a dataset, while accounting for linguistic
intricacies and striking a balance between inclusivity and accuracy, demands
meticulous planning. Our paper presents a multidimensional strategy including
leveraging existing datasets in the English language and developing customized
data-crawling scripts with the assistance of generative AI tools. A fine-tuned
LLM for the Vietnamese language, which was produced using the resultant
datasets, demonstrated good performance while generating Vietnamese news
articles from prompts. The study offers practical solutions and guidance for
future fine-tuning models in languages like Vietnamese. | Computational Linguistics |
What field is the article from? | Title: An energy-based comparative analysis of common approaches to text classification in the Legal domain
Abstract: Most Machine Learning research evaluates the best solutions in terms of
performance. However, in the race for the best performing model, many important
aspects are often overlooked when, on the contrary, they should be carefully
considered. In fact, sometimes the gaps in performance between different
approaches are neglectable, whereas factors such as production costs, energy
consumption, and carbon footprint must be taken into consideration. Large Language
Models (LLMs) are extensively adopted to address NLP problems in academia and
industry. In this work, we present a detailed quantitative comparison of LLM
and traditional approaches (e.g. SVM) on the LexGLUE benchmark, which takes
into account both performance (standard indices) and alternative metrics such
as timing, power consumption and cost, in a word: the carbon-footprint. In our
analysis, we considered the prototyping phase (model selection by
training-validation-test iterations) and in-production phases separately, since
they follow different implementation procedures and also require different
resources. The results indicate that very often, the simplest algorithms
achieve performance very close to that of large LLMs but with very low power
consumption and lower resource demands. These results could encourage
companies to include additional evaluations in the choice of Machine Learning
(ML) solutions. | Computational Linguistics |
What field is the article from? | Title: PortfolioMentor: Multimodal Generative AI Companion for Learning and Crafting Interactive Digital Art Portfolios
Abstract: Digital art portfolios serve as impactful mediums for artists to convey their
visions, weaving together visuals, audio, interactions, and narratives.
However, without technical backgrounds, design students often find it
challenging to translate creative ideas into tangible codes and designs, given
the lack of tailored resources for the non-technical, academic support in art
schools, and a comprehensive guiding tool throughout the mentally demanding
process. Recognizing the role of companionship in code learning and leveraging
generative AI models' capabilities in supporting creative tasks, we present
PortfolioMentor, a coding companion chatbot for IDEs. This tool guides and
collaborates with students through proactive suggestions and responsible Q&As
for learning, inspiration, and support. In detail, the system starts with the
understanding of the task and artist's visions, follows the co-creation of
visual illustrations, audio or music suggestions and files, click-scroll
effects for interactions, and creative vision conceptualization, and finally
synthesizes these facets into a polished interactive digital portfolio. | Human-Computer Interaction |
What field is the article from? | Title: CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation
Abstract: We present CoDi-2, a versatile and interactive Multimodal Large Language
Model (MLLM) that can follow complex multimodal interleaved instructions,
conduct in-context learning (ICL), reason, chat, edit, etc., in an any-to-any
input-output modality paradigm. By aligning modalities with language for both
encoding and generation, CoDi-2 empowers Large Language Models (LLMs) to not
only understand complex modality-interleaved instructions and in-context
examples, but also autoregressively generate grounded and coherent multimodal
outputs in the continuous feature space. To train CoDi-2, we build a
large-scale generation dataset encompassing in-context multimodal instructions
across text, vision, and audio. CoDi-2 demonstrates a wide range of zero-shot
capabilities for multimodal generation, such as in-context learning, reasoning,
and compositionality of any-to-any modality generation through multi-round
interactive conversation. CoDi-2 surpasses previous domain-specific models on
tasks such as subject-driven image generation, vision transformation, and audio
editing. CoDi-2 signifies a substantial breakthrough in developing a
comprehensive multimodal foundation model adept at interpreting in-context
language-vision-audio interleaved instructions and producing multimodal
outputs. | Computer Vision |
What field is the article from? | Title: Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games
Abstract: In this study, we explore the application of Large Language Models (LLMs) in
"Jubensha" (Chinese murder mystery role-playing games), a novel area in
AI-driven gaming. We introduce the first Chinese dataset specifically for
Jubensha, including character scripts and game rules, to foster AI agent
development in this complex narrative environment. Our work also presents a
unique multi-agent interaction framework using LLMs, allowing AI agents to
autonomously engage in the game, enhancing the dynamics of Jubensha gameplay.
To evaluate these AI agents, we developed specialized methods targeting their
mastery of case information and reasoning skills. Furthermore, we incorporated
the latest advancements in in-context learning to improve the agents'
performance in critical aspects like information gathering, murderer detection,
and logical reasoning. The experimental results validate the effectiveness of
our proposed methods. This work aims to offer a fresh perspective on
understanding LLM capabilities and establish a new benchmark for evaluating
large language model-based agents to researchers in the field. | Artificial Intelligence |
What field is the article from? | Title: ExFake: Towards an Explainable Fake News Detection Based on Content and Social Context Information
Abstract: ExFake is an explainable fake news detection system based on content and
context-level information. It is concerned with the veracity analysis of online
posts based on their content, social context (i.e., online users' credibility
and historical behaviour), and data coming from trusted entities such as
fact-checking websites and named entities. Unlike state-of-the-art systems, an
Explainable AI (XAI) assistant is also adopted to help online social networks
(OSN) users develop good reflexes when faced with any doubted information that
spreads on social networks. The trustworthiness of OSN users is also addressed
by assigning a credibility score to OSN users, as OSN users are one of the main
culprits for spreading fake news. Experimental analysis on a real-world dataset
demonstrates that ExFake significantly outperforms other baseline methods for
fake news detection. | Computational Linguistics |
What field is the article from? | Title: Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review
Abstract: ChatGPT and other Generative Artificial Intelligence (GAI) models tend to
inherit and even amplify prevailing societal biases as they are trained on
large amounts of existing data. Given the increasing usage of ChatGPT and other
GAI by students, faculty members, and staff in higher education institutions
(HEIs), there is an urgent need to examine the ethical issues involved such as
its potential biases. In this scoping review, we clarify the ways in which
biases related to GAI in higher education settings have been discussed in
recent academic publications and identify what type of potential biases are
commonly reported in this body of literature. We searched for academic articles
written in English, Chinese, and Japanese across four main databases concerned
with GAI usage in higher education and bias. Our findings show that while there
is an awareness of potential biases around large language models (LLMs) and
GAI, the majority of articles touch on ``bias'' at a relatively superficial
level. Few identify what types of bias may occur under what circumstances.
Neither do they discuss the possible implications for higher education,
staff, faculty members, or students. There is a notable lack of empirical work
at this point, and we call for higher education researchers and AI experts to
conduct more research in this area. | Computers and Society |
What field is the article from? | Title: ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis
Abstract: In this work, we propose a method to address the challenge of rendering a 3D
human from a single image in a free-view manner. Some existing approaches could
achieve this by using generalizable pixel-aligned implicit fields to
reconstruct a textured mesh of a human or by employing a 2D diffusion model as
guidance with the Score Distillation Sampling (SDS) method, to lift the 2D
image into 3D space. However, a generalizable implicit field often results in
an over-smooth texture field, while the SDS method tends to lead to a
texture-inconsistent novel view with the input image. In this paper, we
introduce a texture-consistent back view synthesis module that could transfer
the reference image content to the back view through depth and text-guided
attention injection. Moreover, to alleviate the color distortion that occurs in
the side region, we propose a visibility-aware patch consistency regularization
for texture mapping and refinement combined with the synthesized back view
texture. With the above techniques, we could achieve high-fidelity and
texture-consistent human rendering from a single image. Experiments conducted
on both real and synthetic data demonstrate the effectiveness of our method and
show that our approach outperforms previous baseline methods. | Computer Vision |
What field is the article from? | Title: Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection
Abstract: Machine Learning (ML) models become vulnerable to Model Stealing Attacks
(MSA) when they are deployed as a service. In such attacks, the deployed model
is queried repeatedly to build a labelled dataset. This dataset allows the
attacker to train a thief model that mimics the original model. To maximize
query efficiency, the attacker has to select the most informative subset of
data points from the pool of available data. Existing attack strategies utilize
approaches like Active Learning and Semi-Supervised learning to minimize costs.
However, in the black-box setting, these approaches may select sub-optimal
samples as they train only one thief model. Depending on the thief model's
capacity and the data it was pretrained on, the model might even select noisy
samples that harm the learning process. In this work, we explore the usage of
an ensemble of deep learning models as our thief model. We call our attack Army
of Thieves (AOT) as we train multiple models with varying complexities to
leverage the crowd's wisdom. Based on the ensemble's collective decision,
uncertain samples are selected for querying, while the most confident samples
are directly included in the training data. Our approach is the first one to
utilize an ensemble of thief models to perform model extraction. We outperform
the base approaches of existing state-of-the-art methods by at least 3% and
achieve a 21% higher adversarial sample transferability than previous work for
models trained on the CIFAR-10 dataset. | Machine Learning |
What field is the article from? | Title: Fast Sampling generative model for Ultrasound image reconstruction
Abstract: Image reconstruction from radio-frequency data is pivotal in ultrafast plane
wave ultrasound imaging. Unlike the conventional delay-and-sum (DAS) technique,
which relies on somewhat imprecise assumptions, deep learning-based methods
perform image reconstruction by training on paired data, leading to a notable
enhancement in image quality. Nevertheless, these strategies often exhibit
limited generalization capabilities. Recently, denoising diffusion models have
become the preferred paradigm for image reconstruction tasks. However, their
reliance on an iterative sampling procedure results in prolonged generation
time. In this paper, we propose a novel sampling framework that concurrently
enforces data consistency of ultrasound signals and data-driven priors. By
leveraging the advanced diffusion model, the generation of high-quality images
is substantially expedited. Experimental evaluations on an in-vivo dataset
indicate that our approach with a single plane wave surpasses DAS with spatial
coherent compounding of 75 plane waves. | Computer Vision |
What field is the article from? | Title: Deep-Dispatch: A Deep Reinforcement Learning-Based Vehicle Dispatch Algorithm for Advanced Air Mobility
Abstract: Near future air taxi operations with electric vertical take-off and landing
(eVTOL) aircraft will be constrained by the need for frequent recharging of
eVTOLs, limited takeoff and landing pads in vertiports, and subject to
time-varying demand and electricity prices, making the eVTOL dispatch problem
unique and particularly challenging to solve. Previously, we have developed
optimization models to address this problem. Such optimization models however
suffer from prohibitively high computational run times when the scale of the
problem increases, making them less practical for real world implementation. To
overcome this issue, we have developed two deep reinforcement learning-based
eVTOL dispatch algorithms, namely single-agent and multi-agent deep Q-learning
eVTOL dispatch algorithms, where the objective is to maximize operating profit.
An eVTOL-based passenger transportation simulation environment was built to
assess the performance of our algorithms across $36$ numerical cases with
varying number of eVTOLs, vertiports, and demand. The results indicate that the
multi-agent eVTOL dispatch algorithm can closely approximate the optimal
dispatch policy with significantly less computational expenses compared to the
benchmark optimization model. The multi-agent algorithm was found to outperform
the single-agent counterpart with respect to both profits generated and
training time. | Artificial Intelligence |
What field is the article from? | Title: Portuguese FAQ for Financial Services
Abstract: Scarcity of domain-specific data in the Portuguese financial domain has
disfavored the development of Natural Language Processing (NLP) applications.
To address this limitation, the present study advocates for the utilization of
synthetic data generated through data augmentation techniques. The
investigation focuses on the augmentation of a dataset sourced from the Central
Bank of Brazil FAQ, employing techniques that vary in semantic similarity.
Supervised and unsupervised tasks are conducted to evaluate the impact of
augmented data on both low and high semantic similarity scenarios.
Additionally, the resultant dataset will be publicly disseminated on the
Hugging Face Datasets platform, thereby enhancing accessibility and fostering
broader engagement within the NLP research community. | Computational Linguistics |
What field is the article from? | Title: Emergent Communication for Rules Reasoning
Abstract: Research on emergent communication between deep-learning-based agents has
received extensive attention due to its inspiration for linguistics and
artificial intelligence. However, previous attempts have centered on emergent
communication under perception-oriented environmental settings, which force
agents to describe low-level perceptual features within image or symbol
contexts. In this work, inspired by the classic human reasoning test (namely
Raven's Progressive Matrix), we propose the Reasoning Game, a
cognition-oriented environment that encourages agents to reason and communicate
high-level rules, rather than perceived low-level contexts. Moreover, we
propose 1) an unbiased dataset (namely rule-RAVEN) as a benchmark to avoid
overfitting, 2) and a two-stage curriculum agent training method as a baseline
for more stable convergence in the Reasoning Game, where contexts and semantics
are bilaterally drifting. Experimental results show that, in the Reasoning
Game, a semantically stable and compositional language emerges to solve
reasoning problems. The emerged language helps agents apply the extracted rules
to the generalization of unseen context attributes, and to the transfer between
different context attributes or even tasks. | Artificial Intelligence |
What field is the article from? | Title: Nepotistically Trained Generative-AI Models Collapse
Abstract: Trained on massive amounts of human-generated content, AI (artificial
intelligence) image synthesis is capable of reproducing semantically coherent
images that match the visual appearance of its training data. We show that when
retrained on even small amounts of their own creation, these generative-AI
models produce highly distorted images. We also show that this distortion
extends beyond the text prompts used in retraining, and that once poisoned, the
models struggle to fully heal even after retraining on only real images. | Artificial Intelligence |
What field is the article from? | Title: Successor Heads: Recurring, Interpretable Attention Heads In The Wild
Abstract: In this work we present successor heads: attention heads that increment
tokens with a natural ordering, such as numbers, months, and days. For example,
successor heads increment 'Monday' into 'Tuesday'. We explain the successor
head behavior with an approach rooted in mechanistic interpretability, the
field that aims to explain how models complete tasks in human-understandable
terms. Existing research in this area has found interpretable language model
components in small toy models. However, results in toy models have not yet led
to insights that explain the internals of frontier models and little is
currently understood about the internal operations of large language models. In
this paper, we analyze the behavior of successor heads in large language models
(LLMs) and find that they implement abstract representations that are common to
different architectures. They form in LLMs with as few as 31 million
parameters, and at least as many as 12 billion parameters, such as GPT-2,
Pythia, and Llama-2. We find a set of 'mod-10 features' that underlie how
successor heads increment in LLMs across different architectures and sizes. We
perform vector arithmetic with these features to edit head behavior and provide
insights into numeric representations within LLMs. Additionally, we study the
behavior of successor heads on natural language data, identifying interpretable
polysemanticity in a Pythia successor head. | Machine Learning |
What field is the article from? | Title: AV-Lip-Sync+: Leveraging AV-HuBERT to Exploit Multimodal Inconsistency for Video Deepfake Detection
Abstract: Multimodal manipulations (also known as audio-visual deepfakes) make it
difficult for unimodal deepfake detectors to detect forgeries in multimedia
content. To avoid the spread of false propaganda and fake news, timely
detection is crucial. The damage to either modality (i.e., visual or audio) can
only be discovered through multi-modal models that can exploit both pieces of
information simultaneously. Previous methods mainly adopt uni-modal video
forensics and use supervised pre-training for forgery detection. This study
proposes a new method based on a multi-modal self-supervised-learning (SSL)
feature extractor to exploit inconsistency between audio and visual modalities
for multi-modal video forgery detection. We use the transformer-based SSL
pre-trained Audio-Visual HuBERT (AV-HuBERT) model as a visual and acoustic
feature extractor and a multi-scale temporal convolutional neural network to
capture the temporal correlation between the audio and visual modalities. Since
AV-HuBERT only extracts visual features from the lip region, we also adopt
another transformer-based video model to exploit facial features and capture
spatial and temporal artifacts caused during the deepfake generation process.
Experimental results show that our model outperforms all existing models and
achieves new state-of-the-art performance on the FakeAVCeleb and DeepfakeTIMIT
datasets. | Computer Vision |
What field is the article from? | Title: Three Dogmas, a Puzzle and its Solution
Abstract: Modern Logics, as formulated notably by Frege, Russell and Tarski involved
basic assumptions about Natural Languages in general and Indo-European
Languages in particular, which are contested by Linguists. Based upon those
assumptions, formal Languages were designed to overcome what Logicians claimed
to be 'defects' of Natural Language. In this paper we show that those
assumptions contradict basic principles of Arabic. More specifically: The
Logicians' ideas, that within Natural Language words refer to objects,
'ToBe'-constructions represent identity statements, Indefinite Descriptions
must be replaced by existential quantifiers to form meaningful Sentences and
Symbols can have no interpretation-independent meanings, are all falsified
using undisputed principles of Arabic. The here presented falsification serves
two purposes. First, it is used as a factual basis for the rejection of
approaches adopting Semantic axioms of Mathematical Logics as Models for
meaning of Arabic Syntax. Second, it shows a way to approach the important
computational problem: Satisfiability (SAT). The described way is based upon
the realization that parsing Arabic utilizes the existence of
'meaning-particles' within Syntax to efficiently recognize words, phrases and
Sentences. Similar meaning-particles are shown to exist in 3CNF formulas,
which, when properly handled within the machinery of 3SAT-Solvers, enable
structural conditions to be imposed on formulas, sufficient alone to guarantee
the efficient production of non-exponentially sized Free Binary Decision
Diagrams (FBDDs). We show, why known exponential Lower Bounds on sizes of FBDDs
do not contradict our results and reveal practical evidence, obtained for
multiplication circuits, supporting our claims. | Computational Linguistics |
What field is the article from? | Title: Communication-Efficient Heterogeneous Federated Learning with Generalized Heavy-Ball Momentum
Abstract: Federated Learning (FL) is the state-of-the-art approach for learning from
decentralized data in privacy-constrained scenarios. As the current literature
reports, the main problems associated with FL refer to system and statistical
challenges: the former ones demand for efficient learning from edge devices,
including lowering communication bandwidth and frequency, while the latter
require algorithms robust to non-iidness. State-of-the-art approaches either
guarantee convergence at increased communication cost or are not sufficiently
robust to handle extreme heterogeneous local distributions. In this work we
propose a novel generalization of the heavy-ball momentum, and present FedHBM
to effectively address statistical heterogeneity in FL without introducing any
communication overhead. We conduct extensive experimentation on common FL
vision and NLP datasets, showing that our FedHBM algorithm empirically yields
better model quality and higher convergence speed w.r.t. the state of the art,
especially in pathological non-iid scenarios. While being designed for
cross-silo settings, we show how FedHBM is applicable in moderate-to-high
cross-device scenarios, and how good model initializations (e.g. pre-training)
can be exploited for prompt acceleration. Extended experimentation on
large-scale real-world federated datasets further corroborates the
effectiveness of our approach for real-world FL applications. | Machine Learning |
What field is the article from? | Title: RGB-X Object Detection via Scene-Specific Fusion Modules
Abstract: Multimodal deep sensor fusion has the potential to enable autonomous vehicles
to visually understand their surrounding environments in all weather
conditions. However, existing deep sensor fusion methods usually employ
convoluted architectures with intermingled multimodal features, requiring large
coregistered multimodal datasets for training. In this work, we present an
efficient and modular RGB-X fusion network that can leverage and fuse
pretrained single-modal models via scene-specific fusion modules, thereby
enabling joint input-adaptive network architectures to be created using small,
coregistered multimodal datasets. Our experiments demonstrate the superiority
of our method compared to existing works on RGB-thermal and RGB-gated datasets,
performing fusion using only a small amount of additional parameters. Our code
is available at https://github.com/dsriaditya999/RGBXFusion. | Computer Vision |
What field is the article from? | Title: HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts
Abstract: By routing input tokens to only a few split experts, Sparse
Mixture-of-Experts has enabled efficient training of large language models.
Recent findings suggest that fixing the routers can achieve competitive
performance by alleviating the collapsing problem, where all experts eventually
learn similar representations. However, this strategy has two key limitations:
(i) the policy derived from random routers might be sub-optimal, and (ii) it
requires extensive resources during training and evaluation, leading to limited
efficiency gains. This work introduces \HyperRouter, which dynamically generates
the router's parameters through a fixed hypernetwork and trainable embeddings
to achieve a balance between training the routers and freezing them to learn an
improved routing policy. Extensive experiments across a wide range of tasks
demonstrate the superior performance and efficiency gains of \HyperRouter
compared to existing routing methods. Our implementation is publicly available
at {\url{{https://github.com/giangdip2410/HyperRouter}}}. | Machine Learning |
What field is the article from? | Title: Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs
Abstract: Large language models (LLMs) encapsulate a vast amount of factual information
within their pre-trained weights, as evidenced by their ability to answer
diverse questions across different domains. However, this knowledge is
inherently limited, relying heavily on the characteristics of the training
data. Consequently, using external datasets to incorporate new information or
refine the capabilities of LLMs on previously seen information poses a
significant challenge. In this study, we compare two common approaches:
fine-tuning and retrieval-augmented generation (RAG). We evaluate both
approaches on a variety of knowledge-intensive tasks across different topics.
Our findings reveal that while fine-tuning offers some improvement, RAG
consistently outperforms it, both for existing knowledge encountered during
training and entirely new knowledge. Moreover, we find that LLMs struggle to
learn new factual information through fine-tuning, and that exposing them to
numerous variations of the same fact during training could alleviate this
problem. | Artificial Intelligence |
What field is the article from? | Title: TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4
Abstract: Prompt-based learning has been widely applied in many low-resource NLP tasks
such as few-shot scenarios. However, this paradigm has been shown to be
vulnerable to backdoor attacks. Most of the existing attack methods focus on
inserting manually predefined templates as triggers in the pre-training phase
to train the victim model and utilize the same triggers in the downstream task
to perform inference, which tends to ignore the transferability and
stealthiness of the templates. In this work, we propose a novel approach of
TARGET (Template-trAnsfeRable backdoor attack aGainst prompt-basEd NLP models
via GPT4), which is a data-independent attack method. Specifically, we first
utilize GPT4 to reformulate manual templates to generate tone-strong and normal
templates, and the former are injected into the model as a backdoor trigger in
the pre-training phase. Then, we not only directly employ the above templates
in the downstream task, but also use GPT4 to generate templates with similar
tone to the above templates to carry out transferable attacks. Finally we have
conducted extensive experiments on five NLP datasets and three BERT series
models, with experimental results justifying that our TARGET method has better
attack performance and stealthiness compared to the two external baseline
methods on direct attacks, and in addition achieves satisfactory attack
capability in the unseen tone-similar templates. | Computational Linguistics |
What field is the article from? | Title: Large-Scale Multi-Robot Coverage Path Planning via Local Search
Abstract: We study graph-based Multi-Robot Coverage Path Planning (MCPP) that aims to
compute coverage paths for multiple robots to cover all vertices of a given 2D
grid terrain graph $G$. Existing graph-based MCPP algorithms first compute a
tree cover on $G$ -- a forest of multiple trees that cover all vertices -- and
then employ the Spanning Tree Coverage (STC) paradigm to generate coverage
paths on the decomposed graph $D$ of the terrain graph $G$ by circumnavigating
the edges of the computed trees, aiming to optimize the makespan (i.e., the
maximum coverage path cost among all robots). In this paper, we take a
different approach by exploring how to systematically search for good coverage
paths directly on $D$. We introduce a new algorithmic framework, called
LS-MCPP, which leverages a local search to operate directly on $D$. We propose
a novel standalone paradigm, Extended-STC (ESTC), that extends STC to achieve
complete coverage for MCPP on any decomposed graphs, even those resulting from
incomplete terrain graphs. Furthermore, we demonstrate how to integrate ESTC
with three novel types of neighborhood operators into our framework to
effectively guide its search process. Our extensive experiments demonstrate the
effectiveness of LS-MCPP, consistently improving the initial solution returned
by two state-of-the-art baseline algorithms that compute suboptimal tree covers
on $G$, with a notable reduction in makespan by up to 35.7\% and 30.3\%,
respectively. Moreover, LS-MCPP consistently matches or surpasses the results
of optimal tree cover computation, achieving these outcomes with orders of
magnitude faster runtime, thereby showcasing its significant benefits for
large-scale real-world coverage tasks. | Robotics |
What field is the article from? | Title: Goal-conditioned Offline Planning from Curious Exploration
Abstract: Curiosity has established itself as a powerful exploration strategy in deep
reinforcement learning. Notably, leveraging expected future novelty as
intrinsic motivation has been shown to efficiently generate exploratory
trajectories, as well as a robust dynamics model. We consider the challenge of
extracting goal-conditioned behavior from the products of such unsupervised
exploration techniques, without any additional environment interaction. We find
that conventional goal-conditioned reinforcement learning approaches for
extracting a value function and policy fall short in this difficult offline
setting. By analyzing the geometry of optimal goal-conditioned value functions,
we relate this issue to a specific class of estimation artifacts in learned
values. In order to mitigate their occurrence, we propose to combine
model-based planning over learned value landscapes with a graph-based value
aggregation scheme. We show how this combination can correct both local and
global artifacts, obtaining significant improvements in zero-shot goal-reaching
performance across diverse simulated environments. | Machine Learning |
What field is the article from? | Title: A Multi-In-Single-Out Network for Video Frame Interpolation without Optical Flow
Abstract: In general, deep learning-based video frame interpolation (VFI) methods have
predominantly focused on estimating motion vectors between two input frames and
warping them to the target time. While this approach has shown impressive
performance for linear motion between two input frames, it exhibits limitations
when dealing with occlusions and nonlinear movements. Recently, generative
models have been applied to VFI to address these issues. However, as VFI is not
a task focused on generating plausible images, but rather on predicting
accurate intermediate frames between two given frames, performance limitations
still persist. In this paper, we propose a multi-in-single-out (MISO) based VFI
method that does not rely on motion vector estimation, allowing it to
effectively model occlusions and nonlinear motion. Additionally, we introduce a
novel motion perceptual loss that enables MISO-VFI to better capture the
spatio-temporal correlations within the video frames. Our MISO-VFI method
achieves state-of-the-art results on VFI benchmarks Vimeo90K, Middlebury, and
UCF101, with a significant performance gap compared to existing approaches. | Computer Vision |
What field is the article from? | Title: Learning Fair Division from Bandit Feedback
Abstract: This work addresses learning online fair division under uncertainty, where a
central planner sequentially allocates items without precise knowledge of
agents' values or utilities. Departing from conventional online algorithms, the
planner here relies on noisy, estimated values obtained after allocating items.
We introduce wrapper algorithms utilizing \textit{dual averaging}, enabling
gradual learning of both the type distribution of arriving items and agents'
values through bandit feedback. This approach enables the algorithms to
asymptotically achieve optimal Nash social welfare in linear Fisher markets
with agents having additive utilities. We establish regret bounds in Nash
social welfare and empirically validate the superior performance of our
proposed algorithms across synthetic and empirical datasets. | Machine Learning |
What field is the article from? | Title: The unreasonable effectiveness of AI CADe polyp detectors to generalize to new countries
Abstract: $\textbf{Background and aims}$: Artificial Intelligence (AI) Computer-Aided
Detection (CADe) is commonly used for polyp detection, but data seen in
clinical settings can differ from model training. Few studies evaluate how well
CADe detectors perform on colonoscopies from countries not seen during
training, and none are able to evaluate performance without collecting
expensive and time-intensive labels.
$\textbf{Methods}$: We trained a CADe polyp detector on Israeli colonoscopy
videos (5004 videos, 1106 hours) and evaluated on Japanese videos (354 videos,
128 hours) by measuring the True Positive Rate (TPR) versus false alarms per
minute (FAPM). We introduce a colonoscopy dissimilarity measure called "MAsked
mediCal Embedding Distance" (MACE) to quantify differences between
colonoscopies, without labels. We evaluated CADe on all Japan videos and on
those with the highest MACE.
$\textbf{Results}$: MACE correctly quantifies that narrow-band imaging (NBI)
and chromoendoscopy (CE) frames are less similar to Israel data than Japan
whitelight (bootstrapped z-test, |z| > 690, p < $10^{-8}$ for both). Despite
differences in the data, CADe performance on Japan colonoscopies was
non-inferior to Israel ones without additional training (TPR at 0.5 FAPM: 0.957
and 0.972 for Israel and Japan; TPR at 1.0 FAPM: 0.972 and 0.989 for Israel and
Japan; superiority test t > 45.2, p < $10^{-8}$). Despite not being trained on
NBI or CE, TPR on those subsets were non-inferior to Japan overall
(non-inferiority test t > 47.3, p < $10^{-8}$, $\delta$ = 1.5% for both).
$\textbf{Conclusion}$: Differences that prevent CADe detectors from
performing well in non-medical settings do not degrade the performance of our
AI CADe polyp detector when applied to data from a new country. MACE can help
medical AI models internationalize by identifying the most "dissimilar" data on
which to evaluate models. | Machine Learning |
What field is the article from? | Title: Stable Diffusion For Aerial Object Detection
Abstract: Aerial object detection is a challenging task, in which one major obstacle
lies in the limitations of large-scale data collection and the long-tail
distribution of certain classes. Synthetic data offers a promising solution,
especially with recent advances in diffusion-based methods like stable
diffusion (SD). However, the direct application of diffusion methods to aerial
domains poses unique challenges: stable diffusion's optimization for rich
ground-level semantics doesn't align with the sparse nature of aerial objects,
and the extraction of post-synthesis object coordinates remains problematic. To
address these challenges, we introduce a synthetic data augmentation framework
tailored for aerial images. It encompasses sparse-to-dense region of interest
(ROI) extraction to bridge the semantic gap, fine-tuning the diffusion model
with low-rank adaptation (LORA) to circumvent exhaustive retraining, and
finally, a Copy-Paste method to compose synthesized objects with backgrounds,
providing a nuanced approach to aerial object detection through synthetic data. | Computer Vision |
What field is the article from? | Title: Step by Step to Fairness: Attributing Societal Bias in Task-oriented Dialogue Systems
Abstract: Recent works have shown considerable improvements in task-oriented dialogue
(TOD) systems by utilizing pretrained large language models (LLMs) in an
end-to-end manner. However, the biased behavior of each component in a TOD
system and the error propagation issue in the end-to-end framework can lead to
seriously biased TOD responses. Existing works of fairness only focus on the
total bias of a system. In this paper, we propose a diagnosis method to
attribute bias to each component of a TOD system. With the proposed attribution
method, we can gain a deeper understanding of the sources of bias.
Additionally, researchers can mitigate biased model behavior at a more granular
level. We conduct experiments to attribute the TOD system's bias toward three
demographic axes: gender, age, and race. Experimental results show that the
bias of a TOD system usually comes from the response generation model. | Computational Linguistics |
What field is the article from? | Title: Continual Learning of Unsupervised Monocular Depth from Videos
Abstract: Spatial scene understanding, including monocular depth estimation, is an
important problem in various applications, such as robotics and autonomous
driving. While improvements in unsupervised monocular depth estimation have
potentially allowed models to be trained on diverse crowdsourced videos, this
remains underexplored as most methods utilize the standard training protocol,
wherein the models are trained from scratch on all data after new data is
collected. Instead, continual training of models on sequentially collected data
would significantly reduce computational and memory costs. Nevertheless, naive
continual training leads to catastrophic forgetting, where the model
performance deteriorates on older domains as it learns on newer domains,
highlighting the trade-off between model stability and plasticity. While
several techniques have been proposed to address this issue in image
classification, the high-dimensional and spatiotemporally correlated outputs of
depth estimation make it a distinct challenge. To the best of our knowledge, no
framework or method currently exists focusing on the problem of continual
learning in depth estimation. Thus, we introduce a framework that captures the
challenges of continual unsupervised depth estimation (CUDE), and define the
necessary metrics to evaluate model performance. We propose a rehearsal-based
dual-memory method, MonoDepthCL, which utilizes spatiotemporal consistency for
continual learning in depth estimation, even when the camera intrinsics are
unknown. | Computer Vision |
What field is the article from? | Title: Benchmarking Continual Learning from Cognitive Perspectives
Abstract: Continual learning addresses the problem of continuously acquiring and
transferring knowledge without catastrophic forgetting of old concepts. While
humans achieve continual learning via diverse neurocognitive mechanisms, there
is a mismatch between cognitive properties and evaluation methods of continual
learning models. First, the measurement of continual learning models mostly
relies on evaluation metrics at a micro-level, which cannot characterize
cognitive capacities of the model. Second, the measurement is method-specific,
emphasizing model strengths in one aspect while obscuring potential weaknesses
in other respects. To address these issues, we propose to integrate model
cognitive capacities and evaluation metrics into a unified evaluation paradigm.
We first characterize model capacities via desiderata derived from cognitive
properties supporting human continual learning. The desiderata concern (1)
adaptability in varying lengths of task sequence; (2) sensitivity to dynamic
task variations; and (3) efficiency in memory usage and training time
consumption. Then we design evaluation protocols for each desideratum to assess
cognitive capacities of recent continual learning models. Experimental results
show that no method we consider has satisfied all the desiderata and is still
far away from realizing truly continual learning. Although some methods exhibit
some degree of adaptability and efficiency, no method is able to identify task
relationships when encountering dynamic task variations, or achieve a trade-off
in learning similarities and differences between tasks. Inspired by these
results, we discuss possible factors that influence model performance in these
desiderata and provide guidance for the improvement of continual learning
models. | Machine Learning |
What field is the article from? | Title: Academic competitions
Abstract: Academic challenges comprise effective means for (i) advancing the state of
the art, (ii) putting in the spotlight of a scientific community specific
topics and problems, as well as (iii) closing the gap for underrepresented
communities in terms of accessing and participating in the shaping of research
fields. Competitions can be traced back for centuries and their achievements
have had great influence in our modern world. Recently, they (re)gained
popularity, with the overwhelming amounts of data that is being generated in
different domains, as well as the need of pushing the barriers of existing
methods, and available tools to handle such data. This chapter provides a
survey of academic challenges in the context of machine learning and related
fields. We review the most influential competitions in the last few years and
analyze challenges per area of knowledge. The aims of scientific challenges,
their goals, major achievements and expectations for the next few years are
reviewed. | Machine Learning |
What field is the article from? | Title: Beyond MLE: Convex Learning for Text Generation
Abstract: Maximum likelihood estimation (MLE) is a statistical method used to estimate
the parameters of a probability distribution that best explain the observed
data. In the context of text generation, MLE is often used to train generative
language models, which can then be used to generate new text. However, we argue
that MLE is not always necessary and optimal, especially for closed-ended text
generation tasks like machine translation. In these tasks, the goal of the model is
to generate the most appropriate response, which does not necessarily require
it to estimate the entire data distribution with MLE. To this end, we propose a
novel class of training objectives based on convex functions, which enables
text generation models to focus on highly probable outputs without having to
estimate the entire data distribution. We investigate the theoretical
properties of the optimal predicted distribution when applying convex functions
to the loss, demonstrating that convex functions can sharpen the optimal
distribution, thereby enabling the model to better capture outputs with high
probabilities. Experiments on various text generation tasks and models show the
effectiveness of our approach. It enables autoregressive models to bridge the
gap between greedy and beam search, and facilitates the learning of
non-autoregressive models with a maximum improvement of 9+ BLEU points.
Moreover, our approach also exhibits significant impact on large language
models (LLMs), substantially enhancing their generative capability on various
tasks. Source code is available at
\url{https://github.com/ictnlp/Convex-Learning}. | Computational Linguistics |
What field is the article from? | Title: From Learning Management System to Affective Tutoring system: a preliminary study
Abstract: In this study, we investigate the combination of indicators, including
performance, behavioral engagement, and emotional engagement, to identify
students experiencing difficulties. We analyzed data from two primary sources:
digital traces extracted from the Learning Management System (LMS) and images
captured by students' webcams. The digital traces provided insights into
students' interactions with the educational content, while the images were
utilized to analyze their emotional expressions during learning activities. By
utilizing real data collected from students at a French engineering school,
recorded during the 2022-2023 academic year, we observed a correlation between
positive emotional states and improved academic outcomes. These preliminary
findings support the notion that emotions play a crucial role in
differentiating between high achieving and low achieving students. | Computers and Society |
What field is the article from? | Title: tmn at #SMM4H 2023: Comparing Text Preprocessing Techniques for Detecting Tweets Self-reporting a COVID-19 Diagnosis
Abstract: The paper describes a system developed for Task 1 at SMM4H 2023. The goal of
the task is to automatically distinguish tweets that self-report a COVID-19
diagnosis (for example, a positive test, clinical diagnosis, or
hospitalization) from those that do not. We investigate the use of different
techniques for preprocessing tweets using four transformer-based models. The
ensemble of fine-tuned language models obtained an F1-score of 84.5%, which is
4.1% higher than the average value. | Computational Linguistics |
What field is the article from? | Title: Green Edge AI: A Contemporary Survey
Abstract: Artificial intelligence (AI) technologies have emerged as pivotal enablers
across a multitude of industries, including consumer electronics, healthcare,
and manufacturing, largely due to their resurgence over the past decade. The
transformative power of AI is primarily derived from the utilization of deep
neural networks (DNNs), which require extensive data for training and
substantial computational resources for processing. Consequently, DNN models
are typically trained and deployed on resource-rich cloud servers. However, due
to potential latency issues associated with cloud communications, deep learning
(DL) workflows are increasingly being transitioned to wireless edge networks
near end-user devices (EUDs). This shift is designed to support
latency-sensitive applications and has given rise to a new paradigm of edge AI,
which will play a critical role in upcoming 6G networks to support ubiquitous
AI applications. Despite its potential, edge AI faces substantial challenges,
mostly due to the dichotomy between the resource limitations of wireless edge
networks and the resource-intensive nature of DL. Specifically, the acquisition
of large-scale data, as well as the training and inference processes of DNNs,
can rapidly deplete the battery energy of EUDs. This necessitates an
energy-conscious approach to edge AI to ensure both optimal and sustainable
performance. In this paper, we present a contemporary survey on green edge AI.
We commence by analyzing the principal energy consumption components of edge AI
systems to identify the fundamental design principles of green edge AI. Guided
by these principles, we then explore energy-efficient design methodologies for
the three critical tasks in edge AI systems, including training data
acquisition, edge training, and edge inference. Finally, we underscore
potential future research directions to further enhance the energy efficiency
of edge AI. | Artificial Intelligence |
What field is the article from? | Title: CDR-Adapter: Learning Adapters to Dig Out More Transferring Ability for Cross-Domain Recommendation Models
Abstract: Data sparsity and cold-start problems are persistent challenges in
recommendation systems. Cross-domain recommendation (CDR) is a promising
solution that utilizes knowledge from the source domain to improve the
recommendation performance in the target domain. Previous CDR approaches have
mainly followed the Embedding and Mapping (EMCDR) framework, which involves
learning a mapping function to facilitate knowledge transfer. However, these
approaches necessitate re-engineering and re-training the network structure to
incorporate transferrable knowledge, which can be computationally expensive and
may result in catastrophic forgetting of the original knowledge. In this paper,
we present a scalable and efficient paradigm to address data sparsity and
cold-start issues in CDR, named CDR-Adapter, by decoupling the original
recommendation model from the mapping function, without requiring
re-engineering the network structure. Specifically, CDR-Adapter is a novel
plug-and-play module that employs adapter modules to align feature
representations, allowing for flexible knowledge transfer across different
domains and efficient fine-tuning with minimal training costs. We conducted
extensive experiments on the benchmark dataset, which demonstrated the
effectiveness of our approach over several state-of-the-art CDR approaches. | Information Retrieval |
What field is the article from? | Title: A Virtual Reality Training System for Automotive Engines Assembly and Disassembly
Abstract: Automotive engine assembly and disassembly are common and crucial programs in
the automotive industry. Traditional education trains students to learn
automotive engine assembly and disassembly in lecture courses and then to
operate with physical engines, which are generally low effectiveness and high
cost. In this work, we developed a multi-layer structured Virtual Reality (VR)
system to provide students with training in automotive engine (Buick Verano)
assembly and disassembly. The VR training system is designed to have several major features, including
replaceable engine parts and reusable tools, friendly user interfaces and
guidance, and bottom-up designed multi-layer architecture, which can be
extended to various engine models. The VR system is evaluated with controlled
experiments of two groups of students. The results demonstrate that our VR
training system provides remarkable usability in terms of effectiveness and
efficiency. Currently, our VR system has been demonstrated and employed in the
courses of Chinese colleges to train students in automotive engine assembly and
disassembly. A free-to-use executable file (Microsoft Windows) and open-source
code are available at https://github.com/LadissonLai/SUSTech_VREngine for
facilitating the development of VR systems in the automotive industry. Finally,
a video describing the operations in our VR training system is available at
https://www.youtube.com/watch?v=yZe4YTwwAC4 | Human-Computer Interaction |
What field is the article from? | Title: Facial Emotion Recognition Under Mask Coverage Using a Data Augmentation Technique
Abstract: Identifying human emotions using AI-based computer vision systems, when
individuals wear face masks, presents a new challenge in the current Covid-19
pandemic. In this study, we propose a facial emotion recognition system capable
of recognizing emotions from individuals wearing different face masks. A novel
data augmentation technique was utilized to improve the performance of our
model using four mask types for each face image. We evaluated the effectiveness
of four convolutional neural networks, Alexnet, Squeezenet, Resnet50 and
VGGFace2 that were trained using transfer learning. The experimental findings
revealed that our model works effectively in multi-mask mode compared to
single-mask mode. The VGGFace2 network achieved the highest accuracy rate, with
97.82% for the person-dependent mode and 74.21% for the person-independent mode
using the JAFFE dataset. However, we evaluated our proposed model using the
UIBVFED dataset. The Resnet50 has demonstrated superior performance, with
accuracies of 73.68% for the person-dependent mode and 59.57% for the
person-independent mode. Moreover, we employed metrics such as precision,
sensitivity, specificity, AUC, F1 score, and confusion matrix to measure our
system's efficiency in detail. Additionally, the LIME algorithm was used to
visualize CNN's decision-making strategy. | Computer Vision |
What field is the article from? | Title: Extending Machine Learning-Based Early Sepsis Detection to Different Demographics
Abstract: Sepsis requires urgent diagnosis, but research is predominantly focused on
Western datasets. In this study, we perform a comparative analysis of two
ensemble learning methods, LightGBM and XGBoost, using the public eICU-CRD
dataset and a private South Korean St. Mary's Hospital's dataset. Our analysis
reveals the effectiveness of these methods in addressing healthcare data
imbalance and enhancing sepsis detection. Specifically, LightGBM shows a slight
edge in computational efficiency and scalability. The study paves the way for
the broader application of machine learning in critical care, thereby expanding
the reach of predictive analytics in healthcare globally. | Machine Learning |
What field is the article from? | Title: DeepArt: A Benchmark to Advance Fidelity Research in AI-Generated Content
Abstract: This paper explores the image synthesis capabilities of GPT-4, a leading
multi-modal large language model. We establish a benchmark for evaluating the
fidelity of texture features in images generated by GPT-4, comprising manually
painted pictures and their AI-generated counterparts. The contributions of this
study are threefold: First, we provide an in-depth analysis of the fidelity of
image synthesis features based on GPT-4, marking the first such study on this
state-of-the-art model. Second, the quantitative and qualitative experiments
fully reveal the limitations of the GPT-4 model in image synthesis. Third, we
have compiled a unique benchmark of manual drawings and corresponding
GPT-4-generated images, introducing a new task to advance fidelity research in
AI-generated content (AIGC). The dataset will be available after being
accepted: \url{https://github.com/rickwang28574/DeepArt}. We hope this study
will fuel knowledge, scholarship, and innovation, inspiring uses that transform
how we discover and understand the world of art and promote the development of
AIGC while retaining respect for art. | Computer Vision |
What field is the article from? | Title: FedTherapist: Mental Health Monitoring with User-Generated Linguistic Expressions on Smartphones via Federated Learning
Abstract: Psychiatrists diagnose mental disorders via the linguistic use of patients.
Still, due to data privacy, existing passive mental health monitoring systems
use alternative features such as activity, app usage, and location via mobile
devices. We propose FedTherapist, a mobile mental health monitoring system that
utilizes continuous speech and keyboard input in a privacy-preserving way via
federated learning. We explore multiple model designs by comparing their
performance and overhead for FedTherapist to overcome the complex nature of
on-device language model training on smartphones. We further propose a
Context-Aware Language Learning (CALL) methodology to effectively utilize
smartphones' large and noisy text for mental health signal sensing. Our
IRB-approved evaluation of the prediction of self-reported depression, stress,
anxiety, and mood from 46 participants shows higher accuracy of FedTherapist
compared with the performance with non-language features, achieving 0.15 AUROC
improvement and 8.21% MAE reduction. | Computational Linguistics |
What field is the article from? | Title: Honesty Is the Best Policy: Defining and Mitigating AI Deception
Abstract: Deceptive agents are a challenge for the safety, trustworthiness, and
cooperation of AI systems. We focus on the problem that agents might deceive in
order to achieve their goals (for instance, in our experiments with language
models, the goal of being evaluated as truthful). There are a number of
existing definitions of deception in the literature on game theory and symbolic
AI, but there is no overarching theory of deception for learning agents in
games. We introduce a formal definition of deception in structural causal
games, grounded in the philosophy literature, and applicable to real-world
machine learning systems. Several examples and results illustrate that our
formal definition aligns with the philosophical and commonsense meaning of
deception. Our main technical result is to provide graphical criteria for
deception. We show, experimentally, that these results can be used to mitigate
deception in reinforcement learning agents and language models. | Artificial Intelligence |
What field is the article from? | Title: Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation
Abstract: This paper studies the problem of weakly open-vocabulary semantic
segmentation (WOVSS), which learns to segment objects of arbitrary classes
using mere image-text pairs. Existing works turn to enhance the vanilla vision
transformer by introducing explicit grouping recognition, i.e., employing
several group tokens/centroids to cluster the image tokens and perform the
group-text alignment. Nevertheless, these methods suffer from a granularity
inconsistency regarding the usage of group tokens, which are aligned in the
all-to-one vs. one-to-one manners during the training and inference phases,
respectively. We argue that this discrepancy arises from the lack of elaborate
supervision for each group token. To bridge this granularity gap, this paper
explores explicit supervision for the group tokens from the prototypical
knowledge. To this end, this paper proposes the non-learnable prototypical
regularization (NPR) where non-learnable prototypes are estimated from source
features to serve as supervision and enable contrastive matching of the group
tokens. This regularization encourages the group tokens to segment objects with
less redundancy and capture more comprehensive semantic regions, leading to
increased compactness and richness. Based on NPR, we propose the prototypical
guidance segmentation network (PGSeg) that incorporates multi-modal
regularization by leveraging prototypical sources from both images and texts at
different levels, progressively enhancing the segmentation capability with
diverse prototypical patterns. Experimental results show that our proposed
method achieves state-of-the-art performance on several benchmark datasets. The
source code is available at https://github.com/Ferenas/PGSeg. | Computer Vision |
What field is the article from? | Title: Towards Accurate Differential Diagnosis with Large Language Models
Abstract: An accurate differential diagnosis (DDx) is a cornerstone of medical care,
often reached through an iterative process of interpretation that combines
clinical history, physical examination, investigations and procedures.
Interactive interfaces powered by Large Language Models (LLMs) present new
opportunities to both assist and automate aspects of this process. In this
study, we introduce an LLM optimized for diagnostic reasoning, and evaluate its
ability to generate a DDx alone or as an aid to clinicians. 20 clinicians
evaluated 302 challenging, real-world medical cases sourced from the New
England Journal of Medicine (NEJM) case reports. Each case report was read by
two clinicians, who were randomized to one of two assistive conditions: either
assistance from search engines and standard medical resources, or LLM
assistance in addition to these tools. All clinicians provided a baseline,
unassisted DDx prior to using the respective assistive tools. Our LLM for DDx
exhibited standalone performance that exceeded that of unassisted clinicians
(top-10 accuracy 59.1% vs 33.6%, [p = 0.04]). Comparing the two assisted study
arms, the DDx quality score was higher for clinicians assisted by our LLM
(top-10 accuracy 51.7%) compared to clinicians without its assistance (36.1%)
(McNemar's Test: 45.7, p < 0.01) and clinicians with search (44.4%) (4.75, p =
0.03). Further, clinicians assisted by our LLM arrived at more comprehensive
differential lists than those without its assistance. Our study suggests that
our LLM for DDx has potential to improve clinicians' diagnostic reasoning and
accuracy in challenging cases, meriting further real-world evaluation for its
ability to empower physicians and widen patients' access to specialist-level
expertise. | Computers and Society |
What field is the article from? | Title: Introducing SSBD+ Dataset with a Convolutional Pipeline for detecting Self-Stimulatory Behaviours in Children using raw videos
Abstract: Conventionally, evaluation for the diagnosis of Autism spectrum disorder is
done by a trained specialist through questionnaire-based formal assessments and
by observation of behavioral cues under various settings to capture the early
warning signs of autism. These evaluation techniques are highly subjective and
their accuracy relies on the experience of the specialist. In this regard,
machine learning-based methods for automated capturing of early signs of autism
from the recorded videos of the children is a promising alternative. In this
paper, the authors propose a novel pipelined deep learning architecture to
detect certain self-stimulatory behaviors that help in the diagnosis of autism
spectrum disorder (ASD). The authors also supplement their tool with an
augmented version of the Self Stimulatory Behavior Dataset (SSBD) and also
propose a new label for SSBD action detection: no-class. The deep learning model
with the new dataset is made freely available for easy adoption to the
researchers and developers community. An overall accuracy of around 81% was
achieved from the proposed pipeline model that is targeted for real-time and
hands-free automated diagnosis. All of the source code, data, licenses of use,
and other relevant material is made freely available in
https://github.com/sarl-iiitb/ | Computer Vision |
What field is the article from? | Title: RT-Trajectory: Robotic Task Generalization via Hindsight Trajectory Sketches
Abstract: Generalization remains one of the most important desiderata for robust robot
learning systems. While recently proposed approaches show promise in
generalization to novel objects, semantic concepts, or visual distribution
shifts, generalization to new tasks remains challenging. For example, a
language-conditioned policy trained on pick-and-place tasks will not be able to
generalize to a folding task, even if the arm trajectory of folding is similar
to pick-and-place. Our key insight is that this kind of generalization becomes
feasible if we represent the task through rough trajectory sketches. We propose
a policy conditioning method using such rough trajectory sketches, which we
call RT-Trajectory, that is practical, easy to specify, and allows the policy
to effectively perform new tasks that would otherwise be challenging to
perform. We find that trajectory sketches strike a balance between being
detailed enough to express low-level motion-centric guidance while being coarse
enough to allow the learned policy to interpret the trajectory sketch in the
context of situational visual observations. In addition, we show how trajectory
sketches can provide a useful interface to communicate with robotic policies:
they can be specified through simple human inputs like drawings or videos, or
through automated methods such as modern image-generating or
waypoint-generating methods. We evaluate RT-Trajectory at scale on a variety of
real-world robotic tasks, and find that RT-Trajectory is able to perform a
wider range of tasks compared to language-conditioned and goal-conditioned
policies, when provided the same training data. | Robotics |
What field is the article from? | Title: The Transient Nature of Emergent In-Context Learning in Transformers
Abstract: Transformer neural networks can exhibit a surprising capacity for in-context
learning (ICL) despite not being explicitly trained for it. Prior work has
provided a deeper understanding of how ICL emerges in transformers, e.g.
through the lens of mechanistic interpretability, Bayesian inference, or by
examining the distributional properties of training data. However, in each of
these cases, ICL is treated largely as a persistent phenomenon; namely, once
ICL emerges, it is assumed to persist asymptotically. Here, we show that the
emergence of ICL during transformer training is, in fact, often transient. We
train transformers on synthetic data designed so that both ICL and in-weights
learning (IWL) strategies can lead to correct predictions. We find that ICL
first emerges, then disappears and gives way to IWL, all while the training
loss decreases, indicating an asymptotic preference for IWL. The transient
nature of ICL is observed in transformers across a range of model sizes and
datasets, raising the question of how much to "overtrain" transformers when
seeking compact, cheaper-to-run models. We find that L2 regularization may
offer a path to more persistent ICL that removes the need for early stopping
based on ICL-style validation tasks. Finally, we present initial evidence that
ICL transience may be caused by competition between ICL and IWL circuits. | Machine Learning |
What field is the article from? | Title: Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements
Abstract: The present study explored managerial and instructor perceptions of their
freshly employed cybersecurity workers' or students' preparedness to work
effectively in a changing cybersecurity environment that includes AI tools.
Specifically, we related perceptions of technical preparedness to ethical,
systems thinking, and communication skills. We found that managers and
professors perceive preparedness to use AI tools in cybersecurity to be
significantly associated with all three non-technical skill sets. Most
important, ethics is a clear leader in the network of relationships. Contrary
to expectations that ethical concerns are left behind in the rush to adopt the
most advanced AI tools in security, both higher education instructors and
managers appreciate their role and see them closely associated with technical
prowess. Another significant finding is that professors overestimate students'
preparedness for ethical, system thinking, and communication abilities compared
to IT managers' perceptions of their newly employed IT workers. | Computers and Society |
What field is the article from? | Title: DoGE: Domain Reweighting with Generalization Estimation
Abstract: The coverage and composition of the pretraining data corpus significantly
impacts the generalization ability of large language models. Conventionally,
the pretraining corpus is composed of various source domains (e.g. CommonCrawl,
Wikipedia, Github etc.) according to certain sampling probabilities (domain
weights). However, current methods lack a principled way to optimize domain
weights for the ultimate goal of generalization. We propose DOmain reweighting
with Generalization Estimation (DoGE), where we reweigh the sampling
probability from each domain based on its contribution to the final
generalization objective assessed by a gradient-based generalization estimation
function. First, we train a small-scale proxy model with a min-max optimization
to obtain the reweighted domain weights. At each step, the domain weights are
updated to maximize the overall generalization gain by mirror descent. Finally,
we use the obtained domain weights to train a larger scale full-size language
model. On SlimPajama-6B dataset, with universal generalization objective, DoGE
achieves better average perplexity and zero-shot reasoning accuracy. On
out-of-domain generalization tasks, DoGE reduces perplexity on the target
domain by a large margin. We further apply a parameter-selection scheme which
improves the efficiency of generalization estimation. | Machine Learning |
What field is the article from? | Title: Probing LLMs for Joint Encoding of Linguistic Categories
Abstract: Large Language Models (LLMs) exhibit impressive performance on a range of NLP
tasks, due to the general-purpose linguistic knowledge acquired during
pretraining. Existing model interpretability research (Tenney et al., 2019)
suggests that a linguistic hierarchy emerges in the LLM layers, with lower
layers better suited to solving syntactic tasks and higher layers employed for
semantic processing. Yet, little is known about how encodings of different
linguistic phenomena interact within the models and to what extent processing
of linguistically-related categories relies on the same, shared model
representations. In this paper, we propose a framework for testing the joint
encoding of linguistic categories in LLMs. Focusing on syntax, we find evidence
of joint encoding both at the same (related part-of-speech (POS) classes) and
different (POS classes and related syntactic dependency relations) levels of
linguistic hierarchy. Our cross-lingual experiments show that the same patterns
hold across languages in multilingual LLMs. | Computational Linguistics |
What field is the article from? | Title: Formulating Discrete Probability Flow Through Optimal Transport
Abstract: Continuous diffusion models are commonly acknowledged to display a
deterministic probability flow, whereas discrete diffusion models do not. In
this paper, we aim to establish the fundamental theory for the probability flow
of discrete diffusion models. Specifically, we first prove that the continuous
probability flow is the Monge optimal transport map under certain conditions,
and also present an equivalent evidence for discrete cases. In view of these
findings, we are then able to define the discrete probability flow in line with
the principles of optimal transport. Finally, drawing upon our newly
established definitions, we propose a novel sampling method that surpasses
previous discrete diffusion models in its ability to generate more certain
outcomes. Extensive experiments on the synthetic toy dataset and the CIFAR-10
dataset have validated the effectiveness of our proposed discrete probability
flow. Code is released at:
https://github.com/PangzeCheung/Discrete-Probability-Flow. | Machine Learning |
What field is the article from? | Title: Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator
Abstract: Large Language Models (LLMs) excel in understanding human instructions,
driving the development of Multimodal LLMs (MLLMs) with instruction tuning.
However, acquiring high-quality multimodal instruction tuning data poses a
significant challenge. Previous approaches relying on GPT-4 for data generation
proved expensive and exhibited unsatisfactory performance for certain tasks. To
solve this, we present Genixer, an innovative data generation pipeline
producing high-quality multimodal instruction tuning data for various tasks.
Genixer collects datasets for ten prevalent multimodal tasks and designs
instruction templates to transform these datasets into instruction-tuning data.
It then trains pretrained MLLMs to generate task-specific instruction data and
proposes an effective data filtering strategy to ensure high quality. To
evaluate Genixer, a base MLLM model, Kakapo, is built and achieves SoTA
performance in image captioning and visual question answering (VQA) tasks
across multiple datasets. Experimental results show that filtered data from
Genixer continually improves Kakapo for image captioning and VQA tasks. For the
SoTA Shikra MLLM model on the image-region-related tasks, e.g., region caption
and detection, Genixer also successfully generates corresponding data and
improves its performance. Genixer opens avenues for generating high-quality
multimodal instruction data for diverse tasks, enabling innovative applications
across domains. The code and models will be released soon. | Computer Vision |
What field is the article from? | Title: Clinical Decision Support System for Unani Medicine Practitioners
Abstract: Like other fields of Traditional Medicines, Unani Medicines have been found
as an effective medical practice for ages. It is still widely used in the
subcontinent, particularly in Pakistan and India. However, Unani Medicines
Practitioners are lacking modern IT applications in their everyday clinical
practices. An Online Clinical Decision Support System may address this
challenge to assist apprentice Unani Medicines practitioners in their
diagnostic processes. The proposed system provides a web-based interface to
enter the patient's symptoms, which are then automatically analyzed by our
system to generate a list of probable diseases. The system allows practitioners
to choose the most likely disease and inform patients about the associated
treatment options remotely. The system consists of three modules: an Online
Clinical Decision Support System, an Artificial Intelligence Inference Engine,
and a comprehensive Unani Medicines Database. The system employs advanced AI
techniques such as Decision Trees, Deep Learning, and Natural Language
Processing. For system development, the project team used a technology stack
that includes React, FastAPI, and MySQL. Data and functionality of the
application are exposed using APIs for integration and extension with similar
domain applications. The novelty of the project is that it addresses the
challenge of diagnosing diseases accurately and efficiently in the context of
Unani Medicines principles. By leveraging the power of technology, the proposed
Clinical Decision Support System has the potential to ease access to healthcare
services and information, reduce cost, boost practitioner and patient
satisfaction, improve speed and accuracy of the diagnostic process, and provide
effective treatments remotely. The application will be useful for Unani
Medicines Practitioners, Patients, Government Drug Regulators, Software
Developers, and Medical Researchers. | Artificial Intelligence |
What field is the article from? | Title: Separate-and-Enhance: Compositional Finetuning for Text2Image Diffusion Models
Abstract: Despite recent significant strides achieved by diffusion-based Text-to-Image
(T2I) models, current systems are still less capable of ensuring decent
compositional generation aligned with text prompts, particularly for the
multi-object generation. This work illuminates the fundamental reasons for such
misalignment, pinpointing issues related to low attention activation scores and
mask overlaps. While previous research efforts have individually tackled these
issues, we assert that a holistic approach is paramount. Thus, we propose two
novel objectives, the Separate loss and the Enhance loss, that reduce object
mask overlaps and maximize attention scores, respectively. Our method diverges
from conventional test-time-adaptation techniques, focusing on finetuning
critical parameters, which enhances scalability and generalizability.
Comprehensive evaluations demonstrate the superior performance of our model in
terms of image realism, text-image alignment, and adaptability, notably
outperforming prominent baselines. Ultimately, this research paves the way for
T2I diffusion models with enhanced compositional capacities and broader
applicability. The project webpage is available at
https://zpbao.github.io/projects/SepEn/. | Computer Vision |
What field is the article from? | Title: Image Clustering Conditioned on Text Criteria
Abstract: Classical clustering methods do not provide users with direct control of the
clustering results, and the clustering results may not be consistent with the
relevant criterion that a user has in mind. In this work, we present a new
methodology for performing image clustering based on user-specified text
criteria by leveraging modern vision-language models and large language models.
We call our method Image Clustering Conditioned on Text Criteria (IC|TC), and
it represents a different paradigm of image clustering. IC|TC requires a
minimal and practical degree of human intervention and grants the user
significant control over the clustering results in return. Our experiments show
that IC|TC can effectively cluster images with various criteria, such as human
action, physical location, or the person's mood, while significantly
outperforming baselines. | Computer Vision |