arxiv_id (string, length 10) | published (string, length 20) | titles (string, 9-243 chars) | authors (sequence, 1-389 items) | abstract (string, 96-3.09k chars) | categories (sequence, 1-10 items) | selected (bool, 2 classes)
---|---|---|---|---|---|---
2305.05216 | 2023-05-09T07:24:26Z | Dataset of a parameterized U-bend flow for Deep Learning Applications | [
"Jens Decke",
"Olaf Wünsch",
"Bernhard Sick"
] | This dataset contains 10,000 fluid flow and heat transfer simulations in
U-bend shapes. Each of them is described by 28 design parameters, which are
processed with the help of Computational Fluid Dynamics methods. The dataset
provides a comprehensive benchmark for investigating various problems and
methods from the field of design optimization. For these investigations
supervised, semi-supervised and unsupervised deep learning approaches can be
employed. One unique feature of this dataset is that each shape can be
represented by three distinct data types: design parameter and objective
combinations; 2D images of the geometry and of the solution variables of the
numerical simulation at five different resolutions; and a representation using
the cell values of the numerical mesh. This third representation makes it
possible for deep learning approaches to account for the specific data
structure of numerical simulations. The source code and the container
used to generate the data are published as part of this work. | [
"physics.flu-dyn",
"cs.LG"
] | false |
2305.05239 | 2023-05-09T08:00:23Z | Learnable Behavior Control: Breaking Atari Human World Records via
Sample-Efficient Behavior Selection | [
"Jiajun Fan",
"Yuzheng Zhuang",
"Yuecheng Liu",
"Jianye Hao",
"Bin Wang",
"Jiangcheng Zhu",
"Hao Wang",
"Shu-Tao Xia"
] | The exploration problem is one of the main challenges in deep reinforcement
learning (RL). Recent promising works have tried to handle the problem with
population-based methods, which collect samples with diverse behaviors derived
from a population of different exploratory policies. Adaptive policy selection
has been adopted for behavior control. However, the behavior selection space is
largely limited by the predefined policy population, which further limits
behavior diversity. In this paper, we propose a general framework called
Learnable Behavioral Control (LBC) to address the limitation, which a) enables
a significantly enlarged behavior selection space via formulating a hybrid
behavior mapping from all policies; b) constructs a unified learnable process
for behavior selection. We introduce LBC into distributed off-policy
actor-critic methods and achieve behavior control via optimizing the selection
of the behavior mappings with bandit-based meta-controllers. Our agents have
achieved a 10077.52% mean human-normalized score and surpassed 24 human world
records within 1B training frames in the Arcade Learning Environment, which
demonstrates significant state-of-the-art (SOTA) performance without
degrading sample efficiency. | [
"cs.LG",
"cs.AI"
] | false |
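The row above describes behavior selection via bandit-based meta-controllers. As a rough illustration of that idea only (not the authors' implementation; the UCB1 rule, the arm granularity, and every name below are our assumptions), a meta-controller can score candidate behavior mappings by their observed episodic returns:

```python
import math
import random

class UCBBehaviorSelector:
    """UCB1 bandit over a set of candidate behavior mappings.

    Illustrative sketch only: LBC's hybrid behavior mapping and
    meta-controller are more elaborate than this.
    """

    def __init__(self, num_mappings: int, c: float = 2.0):
        self.counts = [0] * num_mappings    # times each mapping was selected
        self.values = [0.0] * num_mappings  # running mean episodic return
        self.c = c

    def select(self) -> int:
        # Try every arm once before applying the UCB rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        total = sum(self.counts)
        ucb = [v + math.sqrt(self.c * math.log(total) / n)
               for v, n in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm: int, episodic_return: float) -> None:
        self.counts[arm] += 1
        self.values[arm] += (episodic_return - self.values[arm]) / self.counts[arm]

# Usage: pick a behavior mapping per actor rollout, then feed back the return.
selector = UCBBehaviorSelector(num_mappings=8)
arm = selector.select()
selector.update(arm, episodic_return=random.gauss(1.0, 0.1))
```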
2305.05355 | 2023-05-09T11:43:31Z | Turning Privacy-preserving Mechanisms against Federated Learning | [
"Marco Arazzi",
"Mauro Conti",
"Antonino Nocera",
"Stjepan Picek"
] | Recently, researchers have successfully employed Graph Neural Networks (GNNs)
to build enhanced recommender systems due to their capability to learn patterns
from the interaction between involved entities. In addition, previous studies
have investigated federated learning as the main solution to enable a native
privacy-preserving mechanism for the construction of global GNN models without
collecting sensitive data into a single computation unit. Still, privacy issues
may arise as the analysis of local model updates produced by the federated
clients can return information related to sensitive local data. For this
reason, experts proposed solutions that combine federated learning with
Differential Privacy strategies and community-driven approaches, which involve
combining data from neighbor clients to make the individual local updates less
dependent on local sensitive data. In this paper, we identify a crucial
security flaw in such a configuration, and we design an attack capable of
deceiving state-of-the-art defenses for federated learning. The proposed attack
includes two operating modes, the first one focusing on convergence inhibition
(Adversarial Mode), and the second one aiming at building a deceptive rating
injection on the global federated model (Backdoor Mode). The experimental
results show the effectiveness of our attack in both its modes, causing on
average a 60% performance detriment across all the Adversarial Mode tests and
fully effective backdoors in 93% of cases for the tests performed in Backdoor
Mode. | [
"cs.LG",
"cs.CR"
] | false |
2305.05469 | 2023-05-09T14:15:55Z | Graph Neural Networks for Airfoil Design | [
"Florent Bonnet"
] | The study of partial differential equations (PDE) through the framework of
deep learning emerged a few years ago, leading to impressive approximations of
simple dynamics. Graph neural networks (GNN) have turned out to be very useful
in those tasks by allowing the treatment of the unstructured data often
encountered in the numerical resolution of PDE. However, the resolution of
harder PDE such as the Navier-Stokes equations remains a challenging task, and
most of the work done on the latter concentrates either on simulating the flow
around simple geometries or on qualitative results that look physical for
design purposes. In this study, we try to leverage the work done on deep
learning for PDE and GNN by proposing an adaptation of a known architecture in
order to tackle the task of approximating the solution of the two-dimensional
steady-state incompressible Navier-Stokes equations over different airfoil
geometries. In addition, we test our model not only on its performance over
the volume but also on its ability to approximate surface quantities such as
the wall shear stress or the isostatic pressure, leading to the inference of
global coefficients such as the lift and the drag of our airfoil in order to
allow design exploration. This work is part of a longer-term project that aims
to approximate three-dimensional steady-state solutions over
industrial geometries. | [
"cs.LG",
"physics.flu-dyn"
] | false |
2305.05562 | 2023-05-09T15:48:34Z | SkelEx and BoundEx: Natural Visualization of ReLU Neural Networks | [
"Pawel Pukowski",
"Haiping Lu"
] | Despite their limited interpretability, weights and biases are still the most
popular encoding of the functions learned by ReLU Neural Networks (ReLU NNs).
That is why we introduce SkelEx, an algorithm to extract a skeleton of the
membership functions learned by ReLU NNs, making those functions easier to
interpret and analyze. To the best of our knowledge, this is the first work
that considers linear regions from the perspective of critical points. As a
natural follow-up, we also introduce BoundEx, which is the first analytical
method known to us to extract the decision boundary from the realization of a
ReLU NN. Both of these methods introduce a very natural visualization tool for
ReLU NNs trained on low-dimensional data. | [
"cs.LG",
"cs.AI"
] | false |
2305.05642 | 2023-05-09T17:41:50Z | A duality framework for generalization analysis of random feature models
and two-layer neural networks | [
"Hongrui Chen",
"Jihao Long",
"Lei Wu"
] | We consider the problem of learning functions in the $\mathcal{F}_{p,\pi}$
and Barron spaces, which are natural function spaces that arise in the
high-dimensional analysis of random feature models (RFMs) and two-layer neural
networks. Through a duality analysis, we reveal that the approximation and
estimation of these spaces can be considered equivalent in a certain sense.
This enables us to focus on the easier problem of approximation and estimation
when studying the generalization of both models. The dual equivalence is
established by defining an information-based complexity that can effectively
control estimation errors. Additionally, we demonstrate the flexibility of our
duality framework through comprehensive analyses of two concrete applications.
The first application is to study learning functions in $\mathcal{F}_{p,\pi}$
with RFMs. We prove that the learning does not suffer from the curse of
dimensionality as long as $p>1$, implying RFMs can work beyond the kernel
regime. Our analysis extends existing results [CMM21] to the noisy case and
removes the requirement of overparameterization.
The second application is to investigate the learnability of reproducing
kernel Hilbert space (RKHS) under the $L^\infty$ metric. We derive both lower
and upper bounds of the minimax estimation error by using the spectrum of the
associated kernel. We then apply these bounds to dot-product kernels and
analyze how they scale with the input dimension. Our results suggest that
learning with ReLU (random) features is generally intractable in terms of
reaching high uniform accuracy. | [
"stat.ML",
"cs.LG"
] | false |
2305.05708 | 2023-05-09T18:35:38Z | Language models can generate molecules, materials, and protein binding
sites directly in three dimensions as XYZ, CIF, and PDB files | [
"Daniel Flam-Shepherd",
"Alán Aspuru-Guzik"
] | Language models are powerful tools for molecular design. Currently, the
dominant paradigm is to parse molecular graphs into linear string
representations that can easily be trained on. This approach has been very
successful, however, it is limited to chemical structures that can be
completely represented by a graph -- like organic molecules -- while materials
and biomolecular structures like protein binding sites require a more complete
representation that includes the relative positioning of their atoms in space.
In this work, we show how language models -- without any architectural
modifications, trained using next-token prediction -- can generate novel and
valid structures in three dimensions from various substantially different
distributions of chemical structures. In particular, we demonstrate that
language models trained on sequences derived directly from chemical
file formats like XYZ files, Crystallographic Information files (CIFs), or
Protein Data Bank files (PDBs) can directly generate molecules, crystals, and
protein binding sites in three dimensions. Furthermore, despite being trained
on chemical file sequences, language models still achieve performance
comparable to state-of-the-art models that use graph and graph-derived string
representations, as well as other domain-specific 3D generative models. In
doing so, we demonstrate that it is not necessary to use simplified molecular
representations to train chemical language models -- that they are powerful
generative models capable of directly exploring chemical space in three
dimensions for very different structures. | [
"cs.LG",
"q-bio.QM"
] | false |
2305.05722 | 2023-05-09T19:14:01Z | Enhancing Clinical Predictive Modeling through Model Complexity-Driven
Class Proportion Tuning for Class Imbalanced Data: An Empirical Study on
Opioid Overdose Prediction | [
"Yinan Liu",
"Xinyu Dong",
"Weimin Lyu",
"Richard N. Rosenthal",
"Rachel Wong",
"Tengfei Ma",
"Fusheng Wang"
] | Class imbalance problems are widespread in the medical field and heavily
deteriorate the performance of clinical predictive models. Most techniques
that alleviate the problem rebalance class proportions, and they predominantly
assume that the rebalanced proportions should be a function of the original
data alone, oblivious to the model one uses. This work challenges this
prevailing assumption and proposes a framework that links the optimal class
proportions to the model complexity, thereby tuning the class proportions per
model. Our experiments on
the opioid overdose prediction problem highlight the performance gain of tuning
class proportions. Rigorous regression analysis also confirms the advantages of
the theoretical framework proposed and the statistically significant
correlation between the hyperparameters controlling the model complexity and
the optimal class proportions. | [
"cs.LG",
"stat.AP"
] | false |
2305.05740 | 2023-05-09T19:33:52Z | Message Passing Neural Networks for Traffic Forecasting | [
"Arian Prabowo",
"Hao Xue",
"Wei Shao",
"Piotr Koniusz",
"Flora D. Salim"
] | A road network, in the context of traffic forecasting, is typically modeled
as a graph where the nodes are sensors that measure traffic metrics (such as
speed) at that location. Traffic forecasting is interesting because it is
complex: the future speed of a road depends on a number of different factors.
Therefore, to properly forecast traffic, we need a model that is capable of
capturing all these different factors. A factor that is missing from the
existing works is the node interactions factor. Existing works fail to capture
the inter-node interactions because none use the message-passing flavor of
GNN, which is the one best suited to capture node interactions.
This paper presents a plausible scenario in road traffic where node
interactions are important and argues that the most appropriate GNN flavor to
capture node interactions is message-passing. Results from real-world data show
the superiority of the message-passing flavor for traffic forecasting. An
additional experiment using synthetic data shows that the message-passing
flavor can capture inter-node interaction better than other flavors. | [
"cs.LG",
"cs.SI"
] | false |
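To make the "message-passing flavor" referenced above concrete, here is a minimal message-passing layer over a sensor graph (a generic PyTorch sketch, not the paper's model; the GRU update, sum aggregation, and tensor layout are assumptions):

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of message passing on a road-sensor graph.

    Each node aggregates messages computed from (source, destination) state
    pairs, which is what lets the layer model explicit inter-node interactions.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)  # message function
        self.upd = nn.GRUCell(dim, dim)     # node update function

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: [num_nodes, dim]; edge_index: [2, num_edges] as (src, dst) rows.
        src, dst = edge_index
        messages = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, messages)  # sum per node
        return self.upd(agg, h)

# Toy usage: 4 sensors, 3 directed road links.
h = torch.randn(4, 16)
edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
h = MessagePassingLayer(16)(h, edges)
```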
2305.05778 | 2023-05-09T21:48:44Z | Multi-Object Self-Supervised Depth Denoising | [
"Claudius Kienle",
"David Petri"
] | Depth cameras are frequently used in robotic manipulation, e.g. for visual
servoing. The quality of small and compact depth cameras is, though, often not
sufficient for the depth reconstruction that is required for precise tracking
in, and perception of, the robot's working space. Based on the work of
Shabanov et al. (2021), in this work we present a self-supervised multi-object
depth denoising pipeline that uses depth maps of higher-quality sensors as
close-to-ground-truth supervisory signals to denoise depth maps coming from a
lower-quality sensor. We present a computationally efficient way to align sets
of two frame pairs in space and retrieve a frame-based multi-object mask, in
order to obtain a clean labeled dataset on which to train a denoising neural
network. The implementation of our presented work can be found at
https://github.com/alr-internship/self-supervised-depth-denoising. | [
"cs.LG",
"cs.RO",
"68T07",
"I.2.10"
] | false |
2305.05779 | 2023-05-09T21:57:15Z | Learning to Parallelize with OpenMP by Augmented Heterogeneous AST
Representation | [
"Le Chen",
"Quazi Ishtiaque Mahmud",
"Hung Phan",
"Nesreen K. Ahmed",
"Ali Jannesari"
] | Detecting parallelizable code regions is a challenging task, even for
experienced developers. Numerous recent studies have explored the use of
machine learning for code analysis and program synthesis, including
parallelization, in light of the success of machine learning in natural
language processing. However, applying machine learning techniques to
parallelism detection presents several challenges, such as the lack of an
adequate dataset for training, an effective code representation with rich
information, and a suitable machine learning model to learn the latent features
of code for diverse analyses. To address these challenges, we propose a novel
graph-based learning approach called Graph2Par that utilizes a heterogeneous
augmented abstract syntax tree (Augmented-AST) representation for code. The
proposed approach primarily focuses on loop-level parallelization with OpenMP.
Moreover, we create an OMP\_Serial dataset with 18598 parallelizable and 13972
non-parallelizable loops to train the machine learning models. Our results show
that our proposed approach detects parallelizable code regions with 85\%
accuracy and outperforms the state-of-the-art token-based
machine learning approach. These results indicate that our approach is
competitive with state-of-the-art tools and capable of handling loops with
complex structures that other tools may overlook. | [
"cs.LG",
"cs.SE"
] | false |
2305.05792 | 2023-05-09T22:49:55Z | Testing for Overfitting | [
"James Schmidt"
] | High-complexity models are notorious in machine learning for overfitting, a
phenomenon in which models represent data well but fail to generalize to the
underlying data-generating process. A typical procedure for circumventing
overfitting computes empirical risk on a holdout set and halts once (or flags
that/when) it begins to increase. Such practice often helps in outputting a
well-generalizing model, but justification for why it works is primarily
heuristic.
We discuss the overfitting problem and explain why standard asymptotic and
concentration results do not hold for evaluation with training data. We then
proceed to introduce and argue for a hypothesis test by means of which both
model performance may be evaluated using training data, and overfitting
quantitatively defined and detected. We rely on said concentration bounds which
guarantee that empirical means should, with high probability, approximate their
true mean to conclude that they should approximate each other. We stipulate
conditions under which this test is valid, describe how the test may be used
for identifying overfitting, articulate a further nuance according to which
distributional shift may be flagged, and highlight an alternative notion of
learning which usefully captures generalization in the absence of uniform PAC
guarantees. | [
"stat.ML",
"cs.LG"
] | false |
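To convey the flavor of such a test, the sketch below (our own illustration under a bounded-loss assumption, not the paper's exact statistic or validity conditions) uses a Hoeffding bound to flag when the train/holdout risk gap exceeds what concentration alone would permit:

```python
import math

def overfit_test(train_losses, holdout_losses, loss_bound=1.0, alpha=0.05):
    """Flag overfitting when the train/holdout risk gap is too large.

    Hoeffding: with probability >= 1 - alpha/2, the empirical mean of n
    losses in [0, loss_bound] lies within eps(n) of its true mean. If both
    empirical means estimate the same true risk, their gap should stay
    below eps(n_train) + eps(n_holdout); a larger gap is evidence of
    overfitting. Illustrative sketch only.
    """
    def eps(n):
        return loss_bound * math.sqrt(math.log(4 / alpha) / (2 * n))

    gap = abs(sum(train_losses) / len(train_losses)
              - sum(holdout_losses) / len(holdout_losses))
    return gap > eps(len(train_losses)) + eps(len(holdout_losses))
```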
2305.05150 | 2023-05-09T03:30:06Z | Physics-informed neural network for seismic wave inversion in layered
semi-infinite domain | [
"Pu Ren",
"Chengping Rao",
"Hao Sun",
"Yang Liu"
] | Estimating the material distribution of Earth's subsurface is a challenging
task in seismology and earthquake engineering. The recent development of
physics-informed neural network (PINN) has shed new light on seismic inversion.
In this paper, we present a PINN framework for seismic wave inversion in a
layered (1D) semi-infinite domain. The absorbing boundary condition is
incorporated into the network as a soft regularizer to avoid excessive
computation. Specifically, we design a lightweight network to learn the unknown
material distribution and a deep neural network to approximate solution
variables. The entire network is end-to-end and constrained by both sparse
measurement data and the underlying physical laws (i.e., governing equations
and initial/boundary conditions). Various experiments have been conducted to
validate the effectiveness of our proposed approach for inverse modeling of
seismic wave propagation in a 1D semi-infinite domain. | [
"physics.geo-ph",
"cs.LG",
"cs.NA",
"math.NA"
] | false |
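For readers unfamiliar with how the physics constraint enters a PINN, the pattern is a data-misfit loss plus a PDE-residual loss evaluated by automatic differentiation. Below is a generic sketch for a 1D wave equation in PyTorch; the networks `u_net` and `c_net`, the residual form, and the omission of the paper's absorbing-boundary regularizer are all simplifying assumptions:

```python
import torch

def pinn_loss(u_net, c_net, x_data, t_data, u_data, x_col, t_col):
    """Generic PINN loss: data misfit + residual of u_tt = c(x)^2 * u_xx.

    u_net(x, t) approximates the wavefield and c_net(x) the unknown material
    (wavespeed) distribution. Sketch only; the paper additionally imposes
    initial/boundary conditions and an absorbing-boundary regularizer.
    """
    # Data misfit at the sparse measurement points.
    data_loss = torch.mean((u_net(x_data, t_data) - u_data) ** 2)

    # PDE residual at collocation points via automatic differentiation.
    x = x_col.clone().requires_grad_(True)
    t = t_col.clone().requires_grad_(True)
    u = u_net(x, t)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_tt = torch.autograd.grad(u_t.sum(), t, create_graph=True)[0]
    residual = u_tt - c_net(x) ** 2 * u_xx
    return data_loss + torch.mean(residual ** 2)
```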
2305.05159 | 2023-05-09T04:03:40Z | Latent Interactive A2C for Improved RL in Open Many-Agent Systems | [
"Keyang He",
"Prashant Doshi",
"Bikramjit Banerjee"
] | There is a prevalence of multiagent reinforcement learning (MARL) methods
that engage in centralized training. However, these methods involve obtaining
various types of information from the other agents, which may not be feasible
in competitive or adversarial settings. A recent method, the interactive
advantage actor critic (IA2C), engages in decentralized training coupled with
decentralized execution, aiming to predict the other agents' actions from
possibly noisy observations. In this paper, we present the latent IA2C that
utilizes an encoder-decoder architecture to learn a latent representation of
the hidden state and other agents' actions. Our experiments in two domains --
each populated by many agents -- reveal that the latent IA2C significantly
improves sample efficiency by reducing variance and converging faster.
Additionally, we introduce open versions of these domains where the agent
population may change over time, and evaluate on these instances as well. | [
"cs.LG",
"cs.AI",
"cs.MA"
] | false |
2305.05163 | 2023-05-09T04:19:10Z | Cooperating Graph Neural Networks with Deep Reinforcement Learning for
Vaccine Prioritization | [
"Lu Ling",
"Washim Uddin Mondal",
"Satish V",
"Ukkusuri"
] | This study explores the vaccine prioritization strategy to reduce the overall
burden of the pandemic when the supply is limited. Existing methods conduct
macro-level or simplified micro-level vaccine distribution by assuming
homogeneous behavior within subgroup populations and lack mobility dynamics
integration. Directly applying these models to micro-level vaccine allocation
leads to sub-optimal solutions due to the lack of behavioral detail.
To address the issue, we first incorporate the mobility heterogeneity in
disease dynamics modeling and mimic the disease evolution process using a
Trans-vaccine-SEIR model. Then we develop a novel deep reinforcement learning
framework to seek the optimal vaccine allocation strategy for the high-degree
spatial-temporal disease evolution system. The graph neural network is used to
effectively capture the structural properties of the mobility contact network
and extract the dynamic disease features. In our evaluation, the proposed
framework reduces infections and deaths by 7% - 10% compared with the baseline
strategies. Extensive evaluation shows that the proposed framework is robust in
seeking the optimal vaccine allocation under diverse mobility patterns in the
micro-level disease evolution system. In particular, we find the optimal
vaccine allocation strategy in the transit usage restriction scenario is
significantly more effective than restricting cross-zone mobility for the top
10% age-based and income-based zones. These results provide valuable insights
for areas with limited vaccines and low logistic efficacy. | [
"q-bio.PE",
"cs.AI",
"cs.LG"
] | false |
2305.05172 | 2023-05-09T04:53:57Z | Logic for Explainable AI | [
"Adnan Darwiche"
] | A central quest in explainable AI relates to understanding the decisions made
by (learned) classifiers. There are three dimensions of this understanding that
have been receiving significant attention in recent years. The first dimension
relates to characterizing conditions on instances that are necessary and
sufficient for decisions, therefore providing abstractions of instances that
can be viewed as the "reasons behind decisions." The next dimension relates to
characterizing minimal conditions that are sufficient for a decision, therefore
identifying maximal aspects of the instance that are irrelevant to the
decision. The last dimension relates to characterizing minimal conditions that
are necessary for a decision, therefore identifying minimal perturbations to
the instance that yield alternate decisions. We discuss in this tutorial a
comprehensive, semantical and computational theory of explainability along
these dimensions which is based on some recent developments in symbolic logic.
The tutorial will also discuss how this theory is particularly applicable to
non-symbolic classifiers such as those based on Bayesian networks, decision
trees, random forests and some types of neural networks. | [
"cs.AI",
"cs.LG",
"cs.LO"
] | false |
2305.05238 | 2023-05-09T08:00:10Z | Architectural Vision for Quantum Computing in the Edge-Cloud Continuum | [
"Alireza Furutanpey",
"Johanna Barzen",
"Marvin Bechtold",
"Schahram Dustdar",
"Frank Leymann",
"Philipp Raith",
"Felix Truger"
] | Quantum processing units (QPUs) are currently exclusively available from
cloud vendors. However, with recent advancements, hosting QPUs will soon be possible
everywhere. Existing work has yet to draw from research in edge computing to
explore systems exploiting mobile QPUs, or how hybrid applications can benefit
from distributed heterogeneous resources. Hence, this work presents an
architecture for Quantum Computing in the edge-cloud continuum. We discuss the
necessity, challenges, and solution approaches for extending existing work on
classical edge computing to integrate QPUs. We describe how warm-starting
allows defining workflows that exploit the hierarchical resources spread across
the continuum. Then, we introduce a distributed inference engine with hybrid
classical-quantum neural networks (QNNs) to aid system designers in
accommodating applications with complex requirements that incur the highest
degree of heterogeneity. We propose solutions focusing on classical layer
partitioning and quantum circuit cutting to demonstrate the potential of
utilizing classical and quantum computation across the continuum. To evaluate
the importance and feasibility of our vision, we provide a proof of concept
that exemplifies how extending a classical partition method to integrate
quantum circuits can improve the solution quality. Specifically, we implement a
split neural network with optional hybrid QNN predictors. Our results show that
extending classical methods with QNNs is viable and promising for future work. | [
"quant-ph",
"cs.DC",
"cs.LG"
] | false |
2305.05247 | 2023-05-09T08:12:44Z | Leveraging Generative AI Models for Synthetic Data Generation in
Healthcare: Balancing Research and Privacy | [
"Aryan Jadon",
"Shashank Kumar"
] | The widespread adoption of electronic health records and digital healthcare
data has created a demand for data-driven insights to enhance patient outcomes,
diagnostics, and treatments. However, using real patient data presents privacy
and regulatory challenges, including compliance with HIPAA and GDPR. Synthetic
data generation using generative AI models like GANs and VAEs offers a
promising solution to balance valuable data access and patient privacy
protection. In this paper, we examine generative AI models for creating
realistic, anonymized patient data for research and training, explore synthetic
data applications in healthcare, and discuss its benefits, challenges, and
future research directions. Synthetic data has the potential to revolutionize
healthcare by providing anonymized patient data while preserving privacy and
enabling versatile applications. | [
"cs.LG",
"cs.AI",
"cs.CR"
] | false |
2305.05566 | 2023-05-09T15:55:19Z | SMAClite: A Lightweight Environment for Multi-Agent Reinforcement
Learning | [
"Adam Michalski",
"Filippos Christianos",
"Stefano V. Albrecht"
] | There is a lack of standard benchmarks for Multi-Agent Reinforcement Learning
(MARL) algorithms. The StarCraft Multi-Agent Challenge (SMAC) has been widely
used in MARL research, but is built on top of a heavy, closed-source computer
game, StarCraft II. Thus, SMAC is computationally expensive and requires
knowledge and the use of proprietary tools specific to the game for any
meaningful alteration or contribution to the environment. We introduce SMAClite
-- a challenge based on SMAC that is both decoupled from StarCraft II and
open-source, along with a framework which makes it possible to create new
content for SMAClite without any special knowledge. We conduct experiments to
show that SMAClite is equivalent to SMAC, by training MARL algorithms on
SMAClite and reproducing SMAC results. We then show that SMAClite outperforms
SMAC in both runtime speed and memory. | [
"cs.LG",
"cs.AI",
"cs.MA"
] | false |
2305.05601 | 2023-05-09T16:50:36Z | Deep Learning and Geometric Deep Learning: an introduction for
mathematicians and physicists | [
"R. Fioresi",
"F. Zanchetta"
] | In this expository paper we want to give a brief introduction, with a few
key references for further reading, to the inner functioning of the new and
successful algorithms of Deep Learning and Geometric Deep Learning, with a
focus on Graph Neural Networks. We go over the key ingredients of these
algorithms: the score and loss functions, and we explain the main steps for the
training of a model. We do not aim to give a complete and exhaustive treatment,
but we isolate a few concepts to give a fast introduction to the subject. We
provide some appendices to complement our treatment discussing Kullback-Leibler
divergence, regression, Multi-layer Perceptrons and the Universal Approximation
Theorem. | [
"cs.LG",
"math-ph",
"math.MP"
] | false |
2305.05611 | 2023-05-09T17:04:50Z | Metric Space Magnitude and Generalisation in Neural Networks | [
"Rayna Andreeva",
"Katharina Limbeck",
"Bastian Rieck",
"Rik Sarkar"
] | Deep learning models have seen significant successes in numerous
applications, but their inner workings remain elusive. The purpose of this work
is to quantify the learning process of deep neural networks through the lens of
a novel topological invariant called magnitude. Magnitude is an isometry
invariant; its properties are an active area of research as it encodes many
known invariants of a metric space. We use magnitude to study the internal
representations of neural networks and propose a new method for determining
their generalisation capabilities. Moreover, we theoretically connect magnitude
dimension and the generalisation error, and demonstrate experimentally that the
proposed framework can be a good indicator of the latter. | [
"cs.LG",
"math.GT",
"stat.ML"
] | false |
2305.05675 | 2023-05-09T13:07:03Z | UAdam: Unified Adam-Type Algorithmic Framework for Non-Convex Stochastic
Optimization | [
"Yiming Jiang",
"Jinlan Liu",
"Dongpo Xu",
"Danilo P. Mandic"
] | Adam-type algorithms have become a preferred choice for optimisation in the
deep learning setting; however, despite their success, their convergence is
still not well understood. To this end, we introduce a unified framework for Adam-type
algorithms (called UAdam). This is equipped with a general form of the
second-order moment, which makes it possible to include Adam and its variants
as special cases, such as NAdam, AMSGrad, AdaBound, AdaFom, and Adan. This is
supported by a rigorous convergence analysis of UAdam in the non-convex
stochastic setting, showing that UAdam converges to the neighborhood of
stationary points with the rate of $\mathcal{O}(1/T)$. Furthermore, the size of
the neighborhood decreases as $\beta$ increases. Importantly, our analysis only
requires the first-order momentum factor to be close enough to 1, without any
restrictions on the second-order momentum factor. Theoretical results also show
that vanilla Adam can converge by selecting appropriate hyperparameters, which
provides a theoretical guarantee for the analysis, applications, and further
developments of the whole class of Adam-type algorithms. | [
"cs.LG",
"cs.NA",
"math.NA",
"math.OC"
] | false |
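The unifying idea, a single update rule with a pluggable second-order moment, can be sketched as follows (our reading of the abstract, not the authors' code; the `psi` plug-in forms are assumed simplifications and bias correction is omitted):

```python
import numpy as np

def uadam_step(theta, grad, m, v, psi, lr=1e-3, beta1=0.9, eps=1e-8):
    """One step of a unified Adam-type update.

    m is the first-order momentum; v is a generic second-order moment whose
    recursion `psi` is the plug-in point: choosing the Adam, AMSGrad, etc.
    recursion recovers those variants as special cases. Sketch only.
    """
    m = beta1 * m + (1 - beta1) * grad  # first moment, shared by all variants
    v = psi(v, grad)                    # variant-specific second moment
    theta = theta - lr * m / (np.sqrt(v) + eps)
    return theta, m, v

# Example plug-ins (assumed simplified forms; true AMSGrad tracks the raw
# second moment and its running maximum separately):
adam_psi = lambda v, g, b2=0.999: b2 * v + (1 - b2) * g**2
amsgrad_psi = lambda v, g, b2=0.999: np.maximum(v, b2 * v + (1 - b2) * g**2)
```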
2305.05750 | 2023-05-09T20:08:30Z | A Systematic Literature Review on Hardware Reliability Assessment
Methods for Deep Neural Networks | [
"Mohammad Hasan Ahmadilivani",
"Mahdi Taheri",
"Jaan Raik",
"Masoud Daneshtalab",
"Maksim Jenihhin"
] | Artificial Intelligence (AI) and, in particular, Machine Learning (ML) have
emerged to be utilized in various applications due to their capability to learn
how to solve complex problems. Over the last decade, rapid advances in ML have
presented Deep Neural Networks (DNNs) consisting of a large number of neurons
and layers. DNN Hardware Accelerators (DHAs) are leveraged to deploy DNNs in
the target applications. Safety-critical applications, where hardware
faults/errors would result in catastrophic consequences, also benefit from
DHAs. Therefore, the reliability of DNNs is an essential subject of research.
In recent years, several studies have been published accordingly to assess the
reliability of DNNs. In this regard, various reliability assessment methods
have been proposed on a variety of platforms and applications. Hence, there is
a need to summarize the state of the art to identify the gaps in the study of
the reliability of DNNs. In this work, we conduct a Systematic Literature
Review (SLR) on the reliability assessment methods of DNNs to collect as many
relevant research works as possible, present a categorization of them, and
address the open challenges. Through this SLR, three kinds of methods for
reliability assessment of DNNs are identified, including Fault Injection (FI),
Analytical, and Hybrid methods. Since the majority of works assess DNN
reliability by FI, we characterize the different approaches and platforms of
the FI method comprehensively. Moreover, Analytical and Hybrid methods are
also presented. Thus, the different reliability assessment methods for DNNs
are elaborated together with the DNN platforms on which they were conducted
and the reliability evaluation metrics used. Finally, we
highlight the advantages and disadvantages of the identified methods and
address the open challenges in the research area. | [
"cs.LG",
"cs.AI",
"cs.AR"
] | false |
2305.05780 | 2023-05-09T21:58:54Z | Enhancing Gappy Speech Audio Signals with Generative Adversarial
Networks | [
"Deniss Strods",
"Alan F. Smeaton"
] | Gaps, dropouts and short clips of corrupted audio are a common problem and
particularly annoying when they occur in speech. This paper uses machine
learning to regenerate gaps of up to 320ms in an audio speech signal. Audio
regeneration is translated into image regeneration by transforming audio into a
Mel-spectrogram and using image in-painting to regenerate the gaps. The full
Mel-spectrogram is then transferred back to audio using the Parallel-WaveGAN
vocoder and integrated into the audio stream. Using a sample of 1300 spoken
audio clips of between 1 and 10 seconds taken from the publicly available
LJSpeech dataset, our results show regeneration of audio gaps in close to real
time using GANs on a GPU-equipped system. As expected, the smaller the gap in
the audio, the better the quality of the filled gaps. On a gap of 240ms the
average mean opinion score (MOS) for the best performing models was 3.737, on a
scale of 1 (worst) to 5 (best) which is sufficient for a human to perceive as
close to uninterrupted human speech. | [
"cs.SD",
"cs.LG",
"eess.AS"
] | false |
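The pipeline described above (audio to Mel-spectrogram, image in-painting over the gap, vocoder back to audio) can be outlined structurally as follows; this is a sketch with placeholder models, not the paper's code, and the `librosa` parameters are assumptions:

```python
import numpy as np
import librosa

def fill_audio_gap(wav, sr, gap_start_s, gap_len_s, inpaint_model, vocoder):
    """Regenerate a speech gap by in-painting the Mel-spectrogram.

    `inpaint_model(mel, mask)` and `vocoder(mel)` are placeholders for a
    trained GAN in-painter and a Parallel-WaveGAN-style vocoder; n_mels and
    hop_length are assumed values, not the paper's settings.
    """
    hop = 256
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=80, hop_length=hop)
    # Binary mask marking the gap's frames (1 = to be regenerated).
    f0 = int(gap_start_s * sr / hop)
    f1 = int((gap_start_s + gap_len_s) * sr / hop)
    mask = np.zeros_like(mel)
    mask[:, f0:f1] = 1.0
    mel_filled = inpaint_model(mel, mask)  # image in-painting step
    return vocoder(mel_filled)             # back to a waveform
```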
2305.06158 | 2023-05-09T09:14:28Z | EdgeNet : Encoder-decoder generative Network for Auction Design in
E-commerce Online Advertising | [
"Guangyuan Shen",
"Shengjie Sun",
"Dehong Gao",
"Libin Yang",
"Yongping Shi",
"Wei Ning"
] | We present a new encoder-decoder generative network dubbed EdgeNet, which
introduces a novel encoder-decoder framework for data-driven auction design in
online e-commerce advertising. We break the neural auction paradigm of
Generalized Second Price (GSP) and improve the utilization efficiency of data
while ensuring the economic characteristics of the auction mechanism.
Specifically, EdgeNet introduces a transformer-based encoder to better capture
the mutual influence among different candidate advertisements. In contrast to
GSP-based neural auction models, we design an autoregressive decoder to better
utilize the rich context information in online advertising auctions. EdgeNet is
conceptually simple and easy to extend to the existing end-to-end neural
auction framework. We validate the efficiency of EdgeNet on a wide range of
e-commerce advertising auctions, demonstrating its potential in improving user
experience and platform revenue. | [
"cs.IR",
"cs.AI",
"cs.LG"
] | false |
2305.08740 | 2023-05-09T11:17:46Z | Temporal and Heterogeneous Graph Neural Network for Financial Time
Series Prediction | [
"Sheng Xiang",
"Dawei Cheng",
"Chencheng Shang",
"Ying Zhang",
"Yuqi Liang"
] | Price movement prediction in the stock market has been a classical yet
challenging problem, attracting the attention of both economists and computer
scientists. In recent years, graph neural networks have significantly improved
the prediction performance by employing deep learning on company relations.
However, existing relation graphs are usually constructed by handcrafted human
labeling or natural language processing, which suffer from heavy resource
requirements and low accuracy. Besides, they cannot effectively respond to
dynamic changes in the relation graphs. Therefore, in this paper, we propose a
temporal and heterogeneous graph neural network-based (THGNN) approach to learn
the dynamic relations among price movements in financial time series. In
particular, we first generate the company relation graph for each trading day
according to their historic price. Then we leverage a transformer encoder to
encode the price movement information into temporal representations. Afterward,
we propose a heterogeneous graph attention network to jointly optimize the
embeddings of the financial time series data by transformer encoder and infer
the probability of target movements. Finally, we conduct extensive experiments
on the stock market in the United States and China. The results demonstrate the
effectiveness and superior performance of our proposed methods compared with
state-of-the-art baselines. Moreover, we also deploy the proposed THGNN in a
real-world quantitative algorithmic trading system; the accumulated portfolio
return obtained by our method significantly outperforms other baselines. | [
"q-fin.ST",
"cs.LG",
"q-fin.PM"
] | false |
2305.08778 | 2023-05-09T08:19:08Z | Copula Variational LSTM for High-dimensional Cross-market Multivariate
Dependence Modeling | [
"Jia Xu",
"Longbing Cao"
] | We address an important yet challenging problem - modeling high-dimensional
dependencies across multivariates such as financial indicators in heterogeneous
markets. In reality, a market couples and influences others over time, and the
financial variables of a market are also coupled. We make the first attempt to
integrate variational sequential neural learning with copula-based dependence
modeling to characterize both temporal observable and latent variable-based
dependence degrees and structures across non-normal multivariates. Our
variational neural network WPVC-VLSTM models variational sequential dependence
degrees and structures across multivariate time series by variational long
short-term memory networks and regular vine copula. The regular vine copula
models non-normal and long-range distributional couplings across multiple
dynamic variables. WPVC-VLSTM is verified in terms of both technical
significance and portfolio forecasting performance. It outperforms benchmarks
including linear models, stochastic volatility models, deep neural networks,
and variational recurrent networks in cross-market portfolio forecasting. | [
"q-fin.ST",
"cs.AI",
"cs.LG"
] | false |
2305.06879 | 2023-05-09T16:11:17Z | Convex Quaternion Optimization for Signal Processing: Theory and
Applications | [
"Shuning Sun",
"Qiankun Diao",
"Dongpo Xu",
"Pauline Bourigault",
"Danilo P. Mandic"
] | Convex optimization methods have been extensively used in the fields of
communications and signal processing. However, the theory of quaternion
optimization is currently not as fully developed and systematic as that of
complex and real optimization. To this end, we establish an essential theory of
convex quaternion optimization for signal processing based on the generalized
Hamilton-real (GHR) calculus. This is achieved in a way which conforms with
traditional complex and real optimization theory. For rigor, we present five
discriminant theorems for convex quaternion functions, and four discriminant
criteria for strongly convex quaternion functions. Furthermore, we provide a
fundamental theorem for the optimality of convex quaternion optimization
problems, and demonstrate its utility through three applications in quaternion
signal processing. These results provide a solid theoretical foundation for
convex quaternion optimization and open avenues for further developments in
signal processing applications. | [
"math.OC",
"cs.LG",
"cs.NA",
"eess.SP",
"math.NA"
] | false |
2305.05808 | 2023-05-09T23:45:16Z | On the Information Capacity of Nearest Neighbor Representations | [
"Kordag Mehmet Kilic",
"Jin Sima",
"Jehoshua Bruck"
] | The $\textit{von Neumann Computer Architecture}$ has a distinction between
computation and memory. In contrast, the brain has an integrated architecture
where computation and memory are indistinguishable. Motivated by the
architecture of the brain, we propose a model of $\textit{associative
computation}$ where memory is defined by a set of vectors in $\mathbb{R}^n$
(that we call $\textit{anchors}$), computation is performed by convergence from
an input vector to a nearest neighbor anchor, and the output is a label
associated with an anchor. Specifically, in this paper, we study the
representation of Boolean functions in the associative computation model, where
the inputs are binary vectors and the corresponding outputs are the labels ($0$
or $1$) of the nearest neighbor anchors. The information capacity of a Boolean
function in this model is associated with two quantities: $\textit{(i)}$ the
number of anchors (called $\textit{Nearest Neighbor (NN) Complexity}$) and
$\textit{(ii)}$ the maximal number of bits representing entries of anchors
(called $\textit{Resolution}$). We study symmetric Boolean functions and
present constructions that have optimal NN complexity and resolution. | [
"cs.CC",
"cs.DM",
"cs.IT",
"cs.LG",
"cs.NE",
"math.IT"
] | false |
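Operationally, the associative computation model described above is nearest-anchor lookup. A minimal sketch (our illustration; the majority function below is one symmetric Boolean function for which two anchors happen to suffice):

```python
import numpy as np

def nn_compute(x, anchors, labels):
    """Associative computation: return the label of the nearest anchor.

    anchors: [m, n] real vectors; labels: [m] values in {0, 1}; x: [n]
    binary input. The number of anchors m is the NN complexity; the bits
    per anchor entry is the resolution.
    """
    d = np.linalg.norm(anchors - x, axis=1)
    return labels[int(np.argmin(d))]

# Majority on n bits with two anchors: all-zeros labeled 0, all-ones labeled 1.
# x is closer to the all-ones anchor exactly when more than half its bits are 1.
n = 5
anchors = np.array([[0.0] * n, [1.0] * n])
labels = np.array([0, 1])
print(nn_compute(np.array([1, 1, 0, 1, 0]), anchors, labels))  # -> 1
```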
2305.05839 | 2023-05-10T02:08:22Z | Low-Light Image Enhancement via Structure Modeling and Guidance | [
"Xiaogang Xu",
"Ruixing Wang",
"Jiangbo Lu"
] | This paper proposes a new framework for low-light image enhancement by
simultaneously conducting the appearance as well as structure modeling. It
employs the structural feature to guide the appearance enhancement, leading to
sharp and realistic results. The structure modeling in our framework is
implemented as the edge detection in low-light images. It is achieved with a
modified generative model by designing a structure-aware feature extractor and
generator. The detected edge maps can accurately emphasize the essential
structural information, and the edge prediction is robust towards the noises in
dark areas. Moreover, to improve the appearance modeling, which is implemented
with a simple U-Net, a novel structure-guided enhancement module is proposed
with structure-guided feature synthesis layers. The appearance modeling, edge
detector, and enhancement module can be trained end-to-end. The experiments are
conducted on representative datasets (sRGB and RAW domains), showing that our
model consistently achieves SOTA performance on all datasets with the same
architecture. | [
"cs.CV"
] | false |
2305.05841 | 2023-05-10T02:16:12Z | A Self-Training Framework Based on Multi-Scale Attention Fusion for
Weakly Supervised Semantic Segmentation | [
"Guoqing Yang",
"Chuang Zhu",
"Yu Zhang"
] | Weakly supervised semantic segmentation (WSSS) based on image-level labels is
challenging since it is hard to obtain complete semantic regions. To address
this issue, we propose a self-training method that utilizes fused multi-scale
class-aware attention maps. Our observation is that attention maps of different
scales contain rich complementary information, especially for large and small
objects. Therefore, we collect information from attention maps of different
scales and obtain multi-scale attention maps. We then apply denoising and
reactivation strategies to enhance the potential regions and reduce noisy
areas. Finally, we use the refined attention maps to retrain the network.
Experiments show that our method enables the model to extract rich semantic
information from multi-scale images and achieves 72.4% mIoU scores on both the
PASCAL VOC 2012 validation and test sets. The code is available at
https://bupt-ai-cz.github.io/SMAF. | [
"cs.CV"
] | false |
2305.05842 | 2023-05-10T02:19:00Z | D-Net: Learning for Distinctive Point Clouds by Self-Attentive Point
Searching and Learnable Feature Fusion | [
"Xinhai Liu",
"Zhizhong Han",
"Sanghuk Lee",
"Yan-Pei Cao",
"Yu-Shen Liu"
] | Learning and selecting important points on a point cloud is crucial for point
cloud understanding in various applications. Most early methods selected the
important points on 3D shapes by analyzing the intrinsic geometric properties
of every single shape, which fails to capture the importance of points that
distinguishes a shape from objects of other classes, i.e., the distinction of
points. To address this problem, we propose D-Net (Distinctive Network) to
learn for distinctive point clouds based on a self-attentive point searching
and a learnable feature fusion. Specifically, in the self-attentive point
searching, we first learn the distinction score for each point to reveal the
distinction distribution of the point cloud. After ranking the learned
distinction scores, we group a point cloud into a high distinctive point set
and a low distinctive one to enrich the fine-grained point cloud structure. To
generate a compact feature representation for each distinctive point set, a
stacked self-gated convolution is proposed to extract the distinctive features.
Finally, we further introduce a learnable feature fusion mechanism to aggregate
multiple distinctive features into a global point cloud representation in a
channel-wise aggregation manner. The results also show that the learned
distinction distribution of a point cloud is highly consistent across objects
of the same class and different from that of objects of other classes. Extensive
experiments on public datasets, including ModelNet and ShapeNet part dataset,
demonstrate the ability to learn for distinctive point clouds, which helps to
achieve the state-of-the-art performance in some shape understanding
applications. | [
"cs.CV"
] | false |
2305.05871 | 2023-05-10T03:39:24Z | Medical supervised masked autoencoders: Crafting a better masking
strategy and efficient fine-tuning schedule for medical image classification | [
"Jiawei Mao",
"Shujian Guo",
"Yuanqi Chang",
"Xuesong Yin",
"Binling Nie"
] | Masked autoencoders (MAEs) have displayed significant potential in the
classification and semantic segmentation of medical images in the last year.
Due to the high similarity of human tissues, even slight changes in medical
images may represent diseased tissues, necessitating fine-grained inspection to
pinpoint diseased tissues. The random masking strategy of MAEs is likely to
result in areas of lesions being overlooked by the model. At the same time,
inconsistencies between the pre-training and fine-tuning phases impede the
performance and efficiency of MAE in medical image classification. To address
these issues, we propose a medical supervised masked autoencoder (MSMAE) in
this paper. In the pre-training phase, MSMAE precisely masks medical images via
the attention maps obtained from supervised training, contributing to the
representation learning of human tissue in the lesion area. During the
fine-tuning phase, MSMAE is also driven by attention to the accurate masking of
medical images. This improves the computational efficiency of the MSMAE while
increasing the difficulty of fine-tuning, which indirectly improves the quality
of MSMAE medical diagnosis. Extensive experiments demonstrate that MSMAE
achieves state-of-the-art performance on three official medical datasets for
various diseases. Meanwhile, transfer learning for MSMAE also
demonstrates the great potential of our approach for medical semantic
segmentation tasks. Moreover, the MSMAE accelerates the inference time in the
fine-tuning phase by 11.2% and reduces the number of floating-point operations
(FLOPs) by 74.08% compared to a traditional MAE. | [
"cs.CV"
] | false |
2305.05873 | 2023-05-10T03:40:25Z | SHS-Net: Learning Signed Hyper Surfaces for Oriented Normal Estimation
of Point Clouds | [
"Qing Li",
"Huifang Feng",
"Kanle Shi",
"Yue Gao",
"Yi Fang",
"Yu-Shen Liu",
"Zhizhong Han"
] | We propose a novel method called SHS-Net for oriented normal estimation of
point clouds by learning signed hyper surfaces, which can accurately predict
normals with globally consistent orientation from various point clouds. Almost
all existing methods estimate oriented normals through a two-stage pipeline,
i.e., unoriented normal estimation and normal orientation, and each step is
implemented by a separate algorithm. However, previous methods are sensitive to
parameter settings, resulting in poor results from point clouds with noise,
density variations and complex geometries. In this work, we introduce signed
hyper surfaces (SHS), which are parameterized by multi-layer perceptron (MLP)
layers, to learn to estimate oriented normals from point clouds in an
end-to-end manner. The signed hyper surfaces are implicitly learned in a
high-dimensional feature space where the local and global information is
aggregated. Specifically, we introduce a patch encoding module and a shape
encoding module to encode a 3D point cloud into a local latent code and a
global latent code, respectively. Then, an attention-weighted normal prediction
module is proposed as a decoder, which takes the local and global latent codes
as input to predict oriented normals. Experimental results show that our
SHS-Net outperforms the state-of-the-art methods in both unoriented and
oriented normal estimation on the widely used benchmarks. The code, data and
pretrained models are publicly available. | [
"cs.CV"
] | false |
2305.05883 | 2023-05-10T04:03:59Z | Level-line Guided Edge Drawing for Robust Line Segment Detection | [
"Xinyu Lin",
"Yingjie Zhou",
"Yipeng Liu",
"Ce Zhu"
] | Line segment detection plays a cornerstone role in computer vision tasks.
Among numerous detection methods that have been recently proposed, the ones
based on edge drawing attract increasing attention owing to their excellent
detection efficiency. However, the existing methods are not robust enough due
to the inadequate usage of image gradients for edge drawing and line segment
fitting. Based on the observation that line segments should be located on the
edge points with both consistent coordinates and level-line information, i.e.,
the unit vector perpendicular to the gradient orientation, this paper proposes
a level-line guided edge drawing for robust line segment detection (GEDRLSD).
The level-line information provides potential directions for edge tracking,
which can serve as a guideline for accurate edge drawing. Additionally,
the level-line information is fused in line segment fitting to improve the
robustness. Numerical experiments show the superiority of the proposed GEDRLSD
algorithm compared with state-of-the-art methods. | [
"cs.CV"
] | false |
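Since the method keys on level-lines, i.e., unit vectors perpendicular to the gradient orientation, here is a small sketch of computing them per pixel (generic NumPy/SciPy, not the GEDRLSD implementation; the Sobel-based gradient is an assumption):

```python
import numpy as np
from scipy.ndimage import sobel

def level_lines(img):
    """Per-pixel level-line field: unit vectors perpendicular to the gradient.

    Rotating the gradient (gx, gy) by 90 degrees gives (-gy, gx), which points
    along the isophote, i.e., the direction edge tracking should follow.
    Returns the unit direction field and the gradient magnitude.
    """
    gx = sobel(img.astype(float), axis=1)  # horizontal gradient
    gy = sobel(img.astype(float), axis=0)  # vertical gradient
    mag = np.hypot(gx, gy)
    safe = mag + 1e-12
    return np.stack([-gy / safe, gx / safe], axis=-1), mag
```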
2305.05887 | 2023-05-10T04:18:45Z | Weakly-supervised ROI extraction method based on contrastive learning
for remote sensing images | [
"Lingfeng He",
"Mengze Xu",
"Jie Ma"
] | ROI extraction is an active but challenging task in remote sensing because of
the complicated landform, the complex boundaries and the requirement of
annotations. Weakly supervised learning (WSL) aims at learning a mapping from
input image to pixel-wise prediction under image-wise labels, which can
dramatically decrease the labor cost. However, due to the imprecision of
labels, the accuracy and time consumption of WSL methods are relatively
unsatisfactory. In this paper, we propose a two-step ROI extraction method
based on contrastive learning. Firstly, we propose to integrate multiscale
Grad-CAM to obtain pseudo pixel-wise annotations with accurate boundaries.
Then, to reduce the impact of misjudgments in the pseudo annotations, we
construct a contrastive learning strategy that encourages the features inside
the ROI to be as close as possible while separating background features from
foreground features. Comprehensive
experiments demonstrate the superiority of our proposal. Code is available at
https://github.com/HE-Lingfeng/ROI-Extraction | [
"cs.CV"
] | false |
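The contrastive step described above (pull features inside the pseudo-ROI together, push background features away) can be written down generically; the prototype-based loss below is our own formulation for illustration, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def roi_contrastive_loss(feats, pseudo_mask, tau=0.1):
    """Pull pixel features inside the pseudo-ROI toward the ROI prototype
    and push background features away from it. Generic sketch only.

    feats: [C, H, W] pixel embeddings; pseudo_mask: [H, W] in {0, 1}.
    """
    f = F.normalize(feats.flatten(1), dim=0)         # [C, H*W], unit columns
    m = pseudo_mask.flatten().bool()
    proto = F.normalize(f[:, m].mean(dim=1), dim=0)  # ROI prototype, [C]
    pos = (proto @ f[:, m]) / tau                    # similarity to ROI pixels
    neg = (proto @ f[:, ~m]) / tau                   # similarity to background
    # Maximize similarity to foreground, penalize similarity to background.
    return -pos.mean() + torch.logsumexp(neg, dim=0)
```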
2305.05902 | 2023-05-10T05:10:00Z | Multi-stage Progressive Reasoning for Dunhuang Murals Inpainting | [
"Wenjie Liu",
"Baokai Liu",
"Shiqiang Du",
"Yuqing Shi",
"Jiacheng Li",
"Jianhua Wang"
] | Dunhuang murals suffer from fading, breakage, surface brittleness and
extensive peeling caused by prolonged environmental erosion. Image inpainting
techniques are widely used in the field of digital mural inpainting. Generally
speaking, mural inpainting tasks with large-area damage are challenging for
any image inpainting method. In this paper, we design a multi-stage
progressive reasoning network (MPR-Net) containing global to local receptive
fields for murals inpainting. This network is capable of recursively inferring
the damage boundary and progressively tightening the regional texture
constraints. Moreover, to adaptively fuse plentiful information at various
scales of murals, a multi-scale feature aggregation module (MFA) is designed to
strengthen the capability of selecting significant features. The execution of the
model is similar to the process of a mural restorer (i.e., inpainting the
structure of the damaged mural globally first and then adding the local texture
details further). Our method has been evaluated through both qualitative and
quantitative experiments, and the results demonstrate that it outperforms
state-of-the-art image inpainting methods. | [
"cs.CV"
] | false |
2305.05947 | 2023-05-10T07:39:14Z | iEdit: Localised Text-guided Image Editing with Weak Supervision | [
"Rumeysa Bodur",
"Erhan Gundogdu",
"Binod Bhattarai",
"Tae-Kyun Kim",
"Michael Donoser",
"Loris Bazzani"
] | Diffusion models (DMs) can generate realistic images with text guidance using
large-scale datasets. However, they demonstrate limited controllability in the
output space of the generated images. We propose a novel learning method for
text-guided image editing, namely \texttt{iEdit}, that generates images
conditioned on a source image and a textual edit prompt. As a fully-annotated
dataset with target images does not exist, previous approaches perform
subject-specific fine-tuning at test time or adopt contrastive learning without
a target image, leading to issues in preserving the fidelity of the source
image. We propose to automatically construct a dataset derived from LAION-5B,
containing pseudo-target images with their descriptive edit prompts given input
image-caption pairs. This dataset gives us the flexibility of introducing a
weakly-supervised loss function to generate the pseudo-target image from the
latent noise of the source image conditioned on the edit prompt. To encourage
localised editing and preserve or modify spatial structures in the image, we
propose a loss function that uses segmentation masks to guide the editing
during training and optionally at inference. Our model is trained on the
constructed dataset with 200K samples and constrained GPU resources. It shows
favourable results against its counterparts in terms of image fidelity, CLIP
alignment score and qualitatively for editing both generated and real images. | [
"cs.CV"
] | false |
2305.05992 | 2023-05-10T09:00:04Z | MMoT: Mixture-of-Modality-Tokens Transformer for Composed Multimodal
Conditional Image Synthesis | [
"Jianbin Zheng",
"Daqing Liu",
"Chaoyue Wang",
"Minghui Hu",
"Zuopeng Yang",
"Changxing Ding",
"Dacheng Tao"
] | Existing multimodal conditional image synthesis (MCIS) methods generate
images conditioned on combinations of various modalities, requiring that all
of them be exactly conformed to, which hinders synthesis controllability and
leaves the potential of cross-modality under-exploited. To this end, we
propose to generate images conditioned on the compositions of multimodal
control signals, where modalities are imperfectly complementary, i.e., composed
multimodal conditional image synthesis (CMCIS). Specifically, we observe two
challenging issues of the proposed CMCIS task, i.e., the modality coordination
problem and the modality imbalance problem. To tackle these issues, we
introduce a Mixture-of-Modality-Tokens Transformer (MMoT) that adaptively fuses
fine-grained multimodal control signals, a multimodal balanced training loss to
stabilize the optimization of each modality, and a multimodal sampling guidance
to balance the strength of each modality control signal. Comprehensive
experimental results demonstrate that MMoT achieves superior performance on
both unimodal conditional image synthesis (UCIS) and MCIS tasks with
high-quality and faithful image synthesis on complex multimodal conditions. The
project website is available at https://jabir-zheng.github.io/MMoT. | [
"cs.CV"
] | false |
2305.06002 | 2023-05-10T09:22:44Z | InfoMetIC: An Informative Metric for Reference-free Image Caption
Evaluation | [
"Anwen Hu",
"Shizhe Chen",
"Liang Zhang",
"Qin Jin"
] | Automatic image captioning evaluation is critical for benchmarking and
promoting advances in image captioning research. Existing metrics only provide
a single score to measure caption quality, which is less explainable and
informative. Instead, we humans can easily identify the problems of captions in
details, e.g., which words are inaccurate and which salient objects are not
described, and then rate the caption quality. To support such informative
feedback, we propose an Informative Metric for Reference-free Image Caption
evaluation (InfoMetIC). Given an image and a caption, InfoMetIC is able to
report incorrect words and unmentioned image regions at a fine-grained level, and
also provide a text precision score, a vision recall score and an overall
quality score at a coarse-grained level. The coarse-grained score of InfoMetIC
achieves significantly better correlation with human judgements than existing
metrics on multiple benchmarks. We also construct a token-level evaluation
dataset and demonstrate the effectiveness of InfoMetIC in fine-grained
evaluation. Our code and datasets are publicly available at
https://github.com/HAWLYQ/InfoMetIC. | [
"cs.CV"
] | false |
2305.06036 | 2023-05-10T10:38:38Z | FusionDepth: Complement Self-Supervised Monocular Depth Estimation with
Cost Volume | [
"Zhuofei Huang",
"Jianlin Liu",
"Shang Xu",
"Ying Chen",
"Yong Liu"
] | Multi-view stereo depth estimation based on cost volume usually works better
than self-supervised monocular depth estimation except for moving objects and
low-textured surfaces. In this paper, we therefore propose a multi-frame depth
estimation framework in which monocular depth can be refined continuously under
multi-frame sequential constraints, leveraging a Bayesian fusion layer over
several iterations. Both monocular and multi-view networks can be trained with
no depth supervision. Our method also enhances the interpretability when
combining monocular estimation with multi-view cost volume. Detailed
experiments show that our method surpasses state-of-the-art unsupervised
methods utilizing single or multiple frames at test time on KITTI benchmark. | [
"cs.CV"
] | false |
2305.06043 | 2023-05-10T10:52:11Z | Autonomous Stabilization of Retinal Videos for Streamlining Assessment
of Spontaneous Venous Pulsations | [
"Hongwei Sheng",
"Xin Yu",
"Feiyu Wang",
"MD Wahiduzzaman Khan",
"Hexuan Weng",
"Sahar Shariflou",
"S. Mojtaba Golzan"
] | Spontaneous retinal Venous Pulsations (SVP) are rhythmic changes in the
caliber of the central retinal vein and are observed in the optic disc region
(ODR) of the retina. Their absence is a critical indicator of various ocular or
neurological abnormalities. Recent advances in imaging technology have enabled
the development of portable smartphone-based devices for observing the retina
and assessment of SVPs. However, the quality of smartphone-based retinal videos
is often poor due to noise and image jitter, which, in turn, can severely
obstruct the observation of SVPs. In this work, we developed a fully automated
retinal video stabilization method that enables the examination of SVPs
captured by various mobile devices. Specifically, we first propose an ODR
Spatio-Temporal Localization (ODR-STL) module to localize visible ODR and
remove noisy and jittering frames. Then, we introduce a Noise-Aware Template
Matching (NATM) module to stabilize high-quality video segments at a fixed
position in the field of view. After the processing, the SVPs can be easily
observed in the stabilized videos, significantly facilitating user
observations. Furthermore, our method is cost-effective and has been tested in
both subjective and objective evaluations. Both of the evaluations support its
effectiveness in facilitating the observation of SVPs. This can improve the
timely diagnosis and treatment of associated diseases, making it a valuable
tool for eye health professionals. | [
"cs.CV"
] | false |
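Note: the NATM stabilization step described above is, at its core, template matching plus a rigid shift. A minimal OpenCV sketch follows; the grayscale template, score threshold, and recentering logic are illustrative assumptions, not the authors' implementation.

    import cv2
    import numpy as np

    def stabilize(frames, template, min_score=0.5):
        """Keep frames where the optic-disc template matches well and
        translate each one so the match lands at the image center.
        `template` is an assumed grayscale crop of the ODR."""
        th, tw = template.shape[:2]
        out = []
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, best, _, (x, y) = cv2.minMaxLoc(scores)
            if best < min_score:            # noisy / jittery frame: drop it
                continue
            dx = gray.shape[1] // 2 - (x + tw // 2)
            dy = gray.shape[0] // 2 - (y + th // 2)
            M = np.float32([[1, 0, dx], [0, 1, dy]])
            out.append(cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0])))
        return out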
2305.06052 | 2023-05-10T11:10:09Z | Post-training Model Quantization Using GANs for Synthetic Data
Generation | [
"Athanasios Masouris",
"Mansi Sharma",
"Adrian Boguszewski",
"Alexander Kozlov",
"Zhuo Wu",
"Raymond Lo"
] | Quantization is a widely adopted technique for deep neural networks to reduce
the memory and computational resources required. However, when quantized, most
models would need a suitable calibration process to keep their performance
intact, which requires data from the target domain, such as a fraction of the
dataset used in model training and model validation (i.e., a calibration dataset).
In this study, we investigate the use of synthetic data as a substitute for
real data in the calibration step of the quantization method. We propose a data
generation method based on Generative Adversarial Networks that are trained
prior to the model quantization step. We compare the performance of models
quantized using data generated by StyleGAN2-ADA and our pre-trained DiStyleGAN,
with quantization using real data and an alternative data generation method
based on fractal images. Overall, the results of our experiments demonstrate
the potential of leveraging synthetic data for calibration during the
quantization process. In our experiments, the percentage of accuracy
degradation of the selected models was less than 0.6%, with our best
performance achieved on MobileNetV2 (0.05%). The code is available at:
https://github.com/ThanosM97/gsoc2022-openvino | [
"cs.CV"
] | false |
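Note: the calibration idea above, running synthetic GAN samples through the float model to collect activation statistics, can be sketched generically. The min-max observer below is a simple stand-in for a real calibration pipeline (e.g., OpenVINO's actual tooling), and `generator.z_dim` is an assumed attribute of the pretrained GAN.

    import torch

    @torch.no_grad()
    def calibrate(model, generator, n_batches=32, batch_size=16):
        model.eval()
        ranges = {}  # module name -> (running min, running max) of outputs

        def make_hook(name):
            def hook(module, inputs, output):
                lo, hi = ranges.get(name, (float("inf"), float("-inf")))
                ranges[name] = (min(lo, output.min().item()),
                                max(hi, output.max().item()))
            return hook

        handles = [m.register_forward_hook(make_hook(n))
                   for n, m in model.named_modules()
                   if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
        for _ in range(n_batches):
            z = torch.randn(batch_size, generator.z_dim)  # GAN latent prior
            model(generator(z))                           # synthetic batch
        for h in handles:
            h.remove()

        # Asymmetric uint8 scale / zero-point per observed layer.
        return {name: (max(hi - lo, 1e-8) / 255.0,
                       round(-lo / (max(hi - lo, 1e-8) / 255.0)))
                for name, (lo, hi) in ranges.items()}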
2305.06115 | 2023-05-10T13:07:46Z | VTPNet for 3D deep learning on point cloud | [
"Wei Zhou",
"Weiwei Jin",
"Qian Wang",
"Yifan Wang",
"Dekui Wang",
"Xingxing Hao",
"Yongxiang Yu"
] | Recently, Transformer-based methods for point cloud learning have achieved
good results on various point cloud learning benchmarks. However, since the
attention mechanism needs to generate three feature vectors of query, key, and
value to calculate attention features, most of the existing Transformer-based
point cloud learning methods usually consume a large amount of computational
time and memory resources when calculating global attention. To address this
problem, we propose a Voxel-Transformer-Point (VTP) Block for extracting local
and global features of point clouds. VTP combines the advantages of
voxel-based, point-based and Transformer-based methods, which consists of
Voxel-Based Branch (V branch), Point-Based Transformer Branch (PT branch) and
Point-Based Branch (P branch). The V branch extracts the coarse-grained
features of the point cloud through low voxel resolution; the PT branch obtains
the fine-grained features of the point cloud by calculating the self-attention
in the local neighborhood and the inter-neighborhood cross-attention; the P
branch uses a simplified MLP network to generate the global location
information of the point cloud. In addition, to enrich the local features of
point clouds at different scales, we set the voxel scale in the V branch and
the neighborhood sphere scale in the PT branch to one large and one small
(large voxel scale \& small neighborhood sphere scale or small voxel scale \&
large neighborhood sphere scale). Finally, we use VTP as the feature extraction
network to construct VTPNet for point cloud learning, and perform shape
classification, part segmentation, and semantic segmentation tasks on the
ModelNet40, ShapeNet Part, and S3DIS datasets. The experimental results
indicate that VTPNet has good performance in 3D point cloud learning. | [
"cs.CV"
] | false |
2305.06133 | 2023-05-10T13:29:51Z | When ChatGPT for Computer Vision Will Come? From 2D to 3D | [
"Chenghao Li",
"Chaoning Zhang"
] | ChatGPT and its improved variant GPT4 have revolutionized the NLP field with
a single model solving almost all text-related tasks. However, such a model for
computer vision does not exist, especially for 3D vision. This article first
provides a brief view on the progress of deep learning in text, image and 3D
fields from the model perspective. Moreover, this work further discusses how
AIGC evolves from the data perspective. On top of that, this work presents an
outlook on the development of AIGC in 3D from the data perspective. | [
"cs.CV"
] | false |
2305.06145 | 2023-05-10T13:48:24Z | Clothes-Invariant Feature Learning by Causal Intervention for
Clothes-Changing Person Re-identification | [
"Xulin Li",
"Yan Lu",
"Bin Liu",
"Yuenan Hou",
"Yating Liu",
"Qi Chu",
"Wanli Ouyang",
"Nenghai Yu"
] | Clothes-invariant feature extraction is critical to the clothes-changing
person re-identification (CC-ReID). It can provide discriminative identity
features and eliminate the negative effects caused by the confounder, i.e.,
clothing changes. We argue that a strong spurious correlation exists between
clothes and human identity, which restricts the common likelihood-based ReID
method P(Y|X) from extracting clothes-irrelevant features. In this paper, we propose
a new Causal Clothes-Invariant Learning (CCIL) method to achieve
clothes-invariant feature learning by modeling causal intervention P(Y|do(X)).
This new causality-based model is inherently invariant to the confounder in the
causal view, which can achieve the clothes-invariant features and avoid the
barrier faced by the likelihood-based methods. Extensive experiments on three
CC-ReID benchmarks, including PRCC, LTCC, and VC-Clothes, demonstrate the
effectiveness of our approach, which achieves a new state of the art. | [
"cs.CV"
] | false |
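Note: the abstract does not spell out CCIL's estimator for P(Y|do(X)); for orientation, the standard backdoor adjustment over the clothing confounder z reads:

    % Intervening on the image X cuts the clothes -> image path, so the
    % confounder z is marginalized with its prior rather than P(z | X):
    P(Y \mid do(X)) = \sum_{z} P(Y \mid X, z)\, P(z)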
2305.06242 | 2023-05-10T15:22:02Z | Think Twice before Driving: Towards Scalable Decoders for End-to-End
Autonomous Driving | [
"Xiaosong Jia",
"Penghao Wu",
"Li Chen",
"Jiangwei Xie",
"Conghui He",
"Junchi Yan",
"Hongyang Li"
] | End-to-end autonomous driving has made impressive progress in recent years.
Existing methods usually adopt the decoupled encoder-decoder paradigm, where
the encoder extracts hidden features from raw sensor data, and the decoder
outputs the ego-vehicle's future trajectories or actions. Under such a
paradigm, the encoder does not have access to the intended behavior of the ego
agent, leaving the burden of finding out safety-critical regions from the
massive receptive field and reasoning about future situations to the decoder.
Even worse, the decoder is usually composed of several simple multi-layer
perceptrons (MLP) or GRUs while the encoder is delicately designed (e.g., a
combination of heavy ResNets or Transformers). Such an imbalanced resource-task
division hampers the learning process.
In this work, we aim to alleviate the aforementioned problem by two
principles: (1) fully utilizing the capacity of the encoder; (2) increasing the
capacity of the decoder. Concretely, we first predict a coarse-grained future
position and action based on the encoder features. Then, conditioned on the
position and action, the future scene is imagined to check the ramifications of
driving accordingly. We also retrieve the encoder features around the
predicted coordinate to obtain fine-grained information about the
safety-critical region. Finally, based on the predicted future and the
retrieved salient feature, we refine the coarse-grained position and action by
predicting its offset from ground-truth. The above refinement module could be
stacked in a cascaded fashion, which extends the capacity of the decoder with
spatial-temporal prior knowledge about the conditioned future. We conduct
experiments on the CARLA simulator and achieve state-of-the-art performance in
closed-loop benchmarks. Extensive ablation studies demonstrate the
effectiveness of each proposed module. | [
"cs.CV"
] | false |
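Note: the decoder described above (coarse prediction, imagined future, offset refinement) can be condensed into a small sketch. Module names, sizes, and the linear "imagination" stub are assumptions for exposition, not the paper's architecture.

    import torch
    import torch.nn as nn

    class CascadedDecoder(nn.Module):
        def __init__(self, feat_dim=256, act_dim=2, n_stages=3):
            super().__init__()
            self.coarse = nn.Linear(feat_dim, act_dim)
            self.imagine = nn.Linear(feat_dim + act_dim, feat_dim)  # future-scene stub
            self.refine = nn.ModuleList(
                nn.Linear(feat_dim + act_dim, act_dim) for _ in range(n_stages))

        def forward(self, enc_feat):
            action = self.coarse(enc_feat)               # coarse position/action
            for stage in self.refine:
                # Imagine the future conditioned on the current prediction,
                # then predict an offset from it (trained against ground truth).
                future = self.imagine(torch.cat([enc_feat, action], dim=-1))
                action = action + stage(torch.cat([future, action], dim=-1))
            return action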
2305.06278 | 2023-05-10T16:15:16Z | A Multi-modal Garden Dataset and Hybrid 3D Dense Reconstruction
Framework Based on Panoramic Stereo Images for a Trimming Robot | [
"Can Pu",
"Chuanyu Yang",
"Jinnian Pu",
"Radim Tylecek",
"Robert B. Fisher"
] | Recovering an outdoor environment's surface mesh is vital for an agricultural
robot during task planning and remote visualization. Our proposed solution is
based on a newly-designed panoramic stereo camera along with a novel hybrid
software framework that consists of three fusion modules. The panoramic stereo
camera with a pentagon shape consists of 5 stereo vision camera pairs to stream
synchronized panoramic stereo images for the following three fusion modules. In
the disparity fusion module, rectified stereo images produce the initial
disparity maps using multiple stereo vision algorithms. Then, these initial
disparity maps, along with the intensity images, are input into a disparity
fusion network to produce refined disparity maps. Next, the refined disparity
maps are converted into full-view point clouds or single-view point clouds for
the pose fusion module. The pose fusion module adopts a two-stage
global-coarse-to-local-fine strategy. In the first stage, each pair of
full-view point clouds is registered by a global point cloud matching algorithm
to estimate the transformation for a global pose graph's edge, which
effectively implements loop closure. In the second stage, a local point cloud
matching algorithm is used to match single-view point clouds in different
nodes. Next, we locally refine the poses of all corresponding edges in the
global pose graph using three proposed rules, thus constructing a refined pose
graph. The refined pose graph is optimized to produce a global pose trajectory
for volumetric fusion. In the volumetric fusion module, the global poses of all
the nodes are used to integrate the single-view point clouds into the volume to
produce the mesh of the whole garden. The proposed framework and its three
fusion modules are tested on a real outdoor garden dataset to demonstrate
their superior performance. | [
"cs.CV"
] | false |
2305.06307 | 2023-05-10T16:52:43Z | Analysis of Adversarial Image Manipulations | [
"Ahsi Lo",
"Gabriella Pangelinan",
"Michael C. King"
] | As virtual and physical identity grow increasingly intertwined, the
importance of privacy and security in the online sphere becomes paramount. In
recent years, multiple news stories have emerged of private companies scraping
web content and doing research with or selling the data. Images uploaded online
can be scraped without users' consent or knowledge. Users of social media
platforms whose images are scraped may be at risk of being identified in other
uploaded images or in real-world identification situations. This paper
investigates how simple, accessible image manipulation techniques affect the
accuracy of facial recognition software in identifying an individual's various
face images based on one unique image. | [
"cs.CV"
] | false |
2305.06402 | 2023-05-10T18:22:31Z | Analyzing Bias in Diffusion-based Face Generation Models | [
"Malsha V. Perera",
"Vishal M. Patel"
] | Diffusion models are becoming increasingly popular in synthetic data
generation and image editing applications. However, these models can amplify
existing biases and propagate them to downstream applications. Therefore, it is
crucial to understand the sources of bias in their outputs. In this paper, we
investigate the presence of bias in diffusion-based face generation models with
respect to attributes such as gender, race, and age. Moreover, we examine how
dataset size affects the attribute composition and perceptual quality of both
diffusion and Generative Adversarial Network (GAN) based face generation models
across various attribute classes. Our findings suggest that diffusion models
tend to worsen distribution bias in the training data for various attributes,
which is heavily influenced by the size of the dataset. Conversely, GAN models
trained on balanced datasets with a larger number of samples show less bias
across different attributes. | [
"cs.CV"
] | false |
2305.06483 | 2023-05-10T22:21:57Z | Towards L-System Captioning for Tree Reconstruction | [
"Jannes S. Magnusson",
"Anna Hilsmann",
"Peter Eisert"
] | This work proposes a novel concept for tree and plant reconstruction by
directly inferring a Lindenmayer-System (L-System) word representation from
image data in an image captioning approach. We train a model end-to-end which
is able to translate given images into L-System words as a description of the
displayed tree. To prove this concept, we demonstrate the applicability on 2D
tree topologies. Transferred to real image data, this novel idea could lead to
more efficient, accurate and semantically meaningful tree and plant
reconstruction without using error-prone point cloud extraction and other
processes usually utilized in tree reconstruction. Furthermore, this approach
bypasses the need for a predefined L-System grammar and enables
species-specific L-System inference without biological knowledge. | [
"cs.CV",
"I.4.5"
] | false |
2305.06492 | 2023-05-10T23:18:47Z | Treasure What You Have: Exploiting Similarity in Deep Neural Networks
for Efficient Video Processing | [
"Hadjer Benmeziane",
"Halima Bouzidi",
"Hamza Ouarnoughi",
"Ozcan Ozturk",
"Smail Niar"
] | Deep learning has enabled various Internet of Things (IoT) applications.
Still, designing models with high accuracy and computational efficiency remains
a significant challenge, especially in real-time video processing applications.
Such applications exhibit high inter- and intra-frame redundancy, allowing
further improvement. This paper proposes a similarity-aware training
methodology that exploits data redundancy in video frames for efficient
processing. Our approach introduces a per-layer regularization that enhances
computation reuse by increasing the similarity of weights during training. We
validate our methodology on two critical real-time applications, lane detection
and scene parsing. We observe an average compression ratio of approximately 50%
and a speedup of about 1.5x for different models while maintaining the same
accuracy. | [
"cs.CV"
] | false |
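Note: the abstract does not give the exact form of the per-layer regularization; one plausible reading, penalizing differences between adjacent convolution filters so that more weights coincide and their partial results can be reused across redundant frames, is sketched below as an assumed L1 penalty.

    import torch

    def similarity_penalty(model):
        reg = 0.0
        for module in model.modules():
            if isinstance(module, torch.nn.Conv2d):
                w = module.weight.flatten(1)               # (out_ch, in_ch*k*k)
                reg = reg + (w[1:] - w[:-1]).abs().mean()  # adjacent-filter gap
        return reg

    # Usage during training (the strength 1e-4 is a tunable assumption):
    # loss = task_loss + 1e-4 * similarity_penalty(model)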
2305.05838 | 2023-05-10T02:02:20Z | Generative Steganographic Flow | [
"Ping Wei",
"Ge Luo",
"Qi Song",
"Xinpeng Zhang",
"Zhenxing Qian",
"Sheng Li"
] | Generative steganography (GS) is a new data hiding manner, featuring direct
generation of stego media from secret data. Existing GS methods are generally
criticized for their poor performance. In this paper, we propose a novel
flow-based GS approach, Generative Steganographic Flow (GSF), which provides
direct generation of stego images without a cover image. We take the stego image
generation and secret data recovery process as an invertible transformation,
and build a reversible bijective mapping between input secret data and
generated stego images. In the forward mapping, secret data is hidden in the
input latent of Glow model to generate stego images. By reversing the mapping,
hidden data can be extracted exactly from generated stego images. Furthermore,
we propose a novel latent optimization strategy to improve the fidelity of
stego images. Experimental results show our proposed GSF achieves far better
performance than SOTA works. | [
"cs.CV",
"cs.MM"
] | false |
2305.05845 | 2023-05-10T02:33:25Z | Sketching the Future (STF): Applying Conditional Control Techniques to
Text-to-Video Models | [
"Rohan Dhesikan",
"Vignesh Rajmohan"
] | The proliferation of video content demands efficient and flexible neural
network based approaches for generating new video content. In this paper, we
propose a novel approach that combines zero-shot text-to-video generation with
ControlNet to improve the output of these models. Our method takes multiple
sketched frames as input and generates video output that matches the flow of
these frames, building upon the Text-to-Video Zero architecture and
incorporating ControlNet to enable additional input conditions. By first
interpolating frames between the input sketches and then running
Text-to-Video Zero with the newly interpolated frame video as the control
signal, we leverage the benefits of both zero-shot text-to-video generation
and the robust control provided by ControlNet. Experiments demonstrate that our
method excels at producing high-quality and remarkably consistent video content
that more accurately aligns with the user's intended motion for the subject
within the video. We provide a comprehensive resource package, including a demo
video, project website, open-source GitHub repository, and a Colab playground
to foster further research and application of our proposed method. | [
"cs.CV",
"cs.AI"
] | true |
2305.05869 | 2023-05-10T03:25:23Z | Finding Meaningful Distributions of ML Black-boxes under Forensic
Investigation | [
"Jiyi Zhang",
"Han Fang",
"Hwee Kuan Lee",
"Ee-Chien Chang"
] | Given a poorly documented neural network model, we take the perspective of a
forensic investigator who wants to find out the model's data domain (e.g.,
whether it was trained on face images or traffic signs). Although existing methods such as
membership inference and model inversion can be used to uncover some
information about an unknown model, they still require knowledge of the data
domain to start with. In this paper, we propose solving this problem by
leveraging a comprehensive corpus such as ImageNet to select a meaningful
distribution that is close to the original training distribution and leads to
high performance in follow-up investigations. The corpus comprises two
components, a large dataset of samples and meta information such as
hierarchical structure and textual information on the samples. Our goal is to
select a set of samples from the corpus for the given model. The core of our
method is an objective function that considers two criteria on the selected
samples: the model functional properties (derived from the dataset), and
semantics (derived from the metadata). We also give an algorithm to efficiently
search the large space of all possible subsets w.r.t. the objective function.
Experimental results show that the proposed method is effective. For
example, cloning a given model (originally trained with CIFAR-10) by using
Caltech 101 can achieve 45.5% accuracy. By using datasets selected by our
method, the accuracy is improved to 72.0%. | [
"cs.LG",
"cs.CV"
] | false |
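Note: the subset-selection step above can be made concrete with a small greedy sketch. `functional_score` and `semantic_score` stand in for the paper's two criteria (model functional properties and metadata semantics); the actual objective and search algorithm may differ.

    def greedy_select(candidates, functional_score, semantic_score, k=10, alpha=0.5):
        """Greedily grow a corpus subset maximizing a mixed objective."""
        selected, remaining = [], list(candidates)
        for _ in range(k):
            best = max(remaining,
                       key=lambda c: alpha * functional_score(selected + [c])
                                   + (1 - alpha) * semantic_score(selected + [c]))
            selected.append(best)
            remaining.remove(best)
        return selected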
2305.05886 | 2023-05-10T04:17:33Z | Computational Optics for Mobile Terminals in Mass Production | [
"Shiqi Chen",
"Ting Lin",
"Huajun Feng",
"Zhihai Xu",
"Qi Li",
"Yueting Chen"
] | Correcting the optical aberrations and the manufacturing deviations of
cameras is a challenging task. Due to the limitation on volume and the demand
for mass production, existing mobile terminals cannot rectify optical
degradation. In this work, we systematically construct the perturbed lens
system model to illustrate the relationship between the deviated system
parameters and the spatial frequency response measured from photographs. To
further address this issue, an optimization framework is proposed based on this
model to build proxy cameras from the machining samples' SFRs. Engaging with
the proxy cameras, we synthesize data pairs, which encode the optical
aberrations and the random manufacturing biases, for training the
learning-based algorithms. In correcting aberration, although promising results
have been shown recently with convolutional neural networks, they are hard to
generalize to stochastic machining biases. Therefore, we propose a dilated
Omni-dimensional dynamic convolution and implement it in post-processing to
account for the manufacturing degradation. Extensive experiments which evaluate
multiple samples of two representative devices demonstrate that the proposed
optimization framework accurately constructs the proxy camera. And the dynamic
processing model is well-adapted to manufacturing deviations of different
cameras, realizing perfect computational photography. The evaluation shows that
the proposed method bridges the gap between optical design, system machining,
and the post-processing pipeline, shedding light on the joint design of image
signal reception (lens and sensor) and image signal processing. | [
"cs.CV",
"cs.MM"
] | false |
2305.05901 | 2023-05-10T05:09:05Z | Text-guided High-definition Consistency Texture Model | [
"Zhibin Tang",
"Tiantong He"
] | With the advent of depth-to-image diffusion models, text-guided generation,
editing, and transfer of realistic textures are no longer difficult. However,
due to the limitations of pre-trained diffusion models, they can only create
low-resolution, inconsistent textures. To address this issue, we present the
High-definition Consistency Texture Model (HCTM), a novel method that can
generate high-definition and consistent textures for 3D meshes according to the
text prompts. We achieve this by leveraging a pre-trained depth-to-image
diffusion model to generate single viewpoint results based on the text prompt
and a depth map. We fine-tune the diffusion model with Parameter-Efficient
Fine-Tuning to quickly learn the style of the generated result, and apply the
multi-diffusion strategy to produce high-resolution and consistent results from
different viewpoints. Furthermore, we propose a strategy that prevents the
appearance of noise on the textures caused by backpropagation. Our proposed
approach has shown promising results in generating high-definition and
consistent textures for 3D meshes, as demonstrated through a series of
experiments. | [
"cs.CV",
"cs.AI"
] | false |
2305.05912 | 2023-05-10T05:48:22Z | A Hybrid of Generative and Discriminative Models Based on the
Gaussian-coupled Softmax Layer | [
"Hideaki Hayashi"
] | Generative models have advantageous characteristics for classification tasks
such as the availability of unsupervised data and calibrated confidence,
whereas discriminative models have advantages in terms of the simplicity of
their model structures and learning algorithms and their ability to outperform
their generative counterparts. In this paper, we propose a method to train a
hybrid of discriminative and generative models in a single neural network (NN),
which exhibits the characteristics of both models. The key idea is the
Gaussian-coupled softmax layer, which is a fully connected layer with a softmax
activation function coupled with Gaussian distributions. This layer can be
embedded into an NN-based classifier and allows the classifier to estimate both
the class posterior distribution and the class-conditional data distribution.
We demonstrate that the proposed hybrid model can be applied to semi-supervised
learning and confidence calibration. | [
"cs.LG",
"cs.CV"
] | false |
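Note: a hedged sketch of what such a layer could look like follows: each class carries a diagonal Gaussian over the feature space, the class posterior is a softmax over Gaussian log-joint scores, and the same parameters define the class-conditional density used for the generative side. The parameterization is an assumption, not the paper's exact layer.

    import math
    import torch
    import torch.nn as nn

    class GaussianCoupledSoftmax(nn.Module):
        def __init__(self, feat_dim, n_classes):
            super().__init__()
            self.mu = nn.Parameter(torch.randn(n_classes, feat_dim))
            self.log_var = nn.Parameter(torch.zeros(n_classes, feat_dim))
            self.log_prior = nn.Parameter(torch.zeros(n_classes))

        def class_log_density(self, x):
            # log N(x; mu_k, diag(var_k)) per class -> (batch, n_classes)
            diff = x.unsqueeze(1) - self.mu              # (batch, classes, feat)
            return -0.5 * (diff.pow(2) / self.log_var.exp()
                           + self.log_var + math.log(2 * math.pi)).sum(-1)

        def forward(self, x):
            # Log class posterior log p(y|x): Bayes' rule realized as a softmax.
            return torch.log_softmax(self.class_log_density(x) + self.log_prior, -1)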
2305.05938 | 2023-05-10T07:20:51Z | V2X-Seq: A Large-Scale Sequential Dataset for Vehicle-Infrastructure
Cooperative Perception and Forecasting | [
"Haibao Yu",
"Wenxian Yang",
"Hongzhi Ruan",
"Zhenwei Yang",
"Yingjuan Tang",
"Xu Gao",
"Xin Hao",
"Yifeng Shi",
"Yifeng Pan",
"Ning Sun",
"Juan Song",
"Jirui Yuan",
"Ping Luo",
"Zaiqing Nie"
] | Utilizing infrastructure and vehicle-side information to track and forecast
the behaviors of surrounding traffic participants can significantly improve
decision-making and safety in autonomous driving. However, the lack of
real-world sequential datasets limits research in this area. To address this
issue, we introduce V2X-Seq, the first large-scale sequential V2X dataset,
which includes data frames, trajectories, vector maps, and traffic lights
captured from natural scenery. V2X-Seq comprises two parts: the sequential
perception dataset, which includes more than 15,000 frames captured from 95
scenarios, and the trajectory forecasting dataset, which contains about 80,000
infrastructure-view scenarios, 80,000 vehicle-view scenarios, and 50,000
cooperative-view scenarios captured from 28 intersections' areas, covering 672
hours of data. Based on V2X-Seq, we introduce three new tasks for
vehicle-infrastructure cooperative (VIC) autonomous driving: VIC3D Tracking,
Online-VIC Forecasting, and Offline-VIC Forecasting. We also provide benchmarks
for the introduced tasks. Find data, code, and more up-to-date information at
\href{https://github.com/AIR-THU/DAIR-V2X-Seq}{https://github.com/AIR-THU/DAIR-V2X-Seq}. | [
"cs.CV",
"cs.AI"
] | false |
2305.05984 | 2023-05-10T08:50:04Z | Uncertainty-Aware Semi-Supervised Learning for Prostate MRI Zonal
Segmentation | [
"Matin Hosseinzadeh",
"Anindo Saha",
"Joeran Bosma",
"Henkjan Huisman"
] | Quality of deep convolutional neural network predictions strongly depends on
the size of the training dataset and the quality of the annotations. Creating
annotations, especially for 3D medical image segmentation, is time-consuming
and requires expert knowledge. We propose a novel semi-supervised learning
(SSL) approach that requires only a relatively small number of annotations
while being able to use the remaining unlabeled data to improve model
performance. Our method uses a pseudo-labeling technique that employs recent
deep learning uncertainty estimation models. By using the estimated
uncertainty, we were able to rank pseudo-labels and automatically select the
best pseudo-annotations generated by the supervised model. We applied this to
prostate zonal segmentation in T2-weighted MRI scans. Our proposed model
outperformed the supervised model in experiments with the ProstateX
dataset and an external test set: by leveraging only a subset of unlabeled data
rather than the full collection of 4953 cases, it demonstrated
improved performance. The segmentation Dice similarity coefficient in the
transition zone and peripheral zone increased from 0.835 and 0.727 to 0.852 and
0.751, respectively, between the fully supervised model and the uncertainty-aware
semi-supervised learning (USSL) model. Our USSL model demonstrates the
potential to allow deep learning models to be trained on large datasets without
requiring full annotation. Our code is available at
https://github.com/DIAGNijmegen/prostateMR-USSL. | [
"eess.IV",
"cs.CV"
] | false |
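Note: the uncertainty-ranked pseudo-label selection can be sketched as below, using predictive entropy as the uncertainty score; the paper's uncertainty estimation model may differ, but the rank-and-keep step is the point.

    import torch

    @torch.no_grad()
    def select_pseudo_labels(model, unlabeled_volumes, keep_frac=0.25):
        scored = []
        for vol in unlabeled_volumes:                  # vol: (1, C_in, D, H, W)
            probs = torch.softmax(model(vol), dim=1)   # (1, classes, D, H, W)
            ent = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
            scored.append((ent.item(), vol, probs.argmax(1)))   # pseudo-mask
        scored.sort(key=lambda t: t[0])                # low entropy = confident
        n_keep = max(1, int(keep_frac * len(scored)))
        return [(vol, mask) for _, vol, mask in scored[:n_keep]]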
2305.05991 | 2023-05-10T08:58:54Z | DMNR: Unsupervised De-noising of Point Clouds Corrupted by Airborne
Particles | [
"Chu Chen",
"Yanqi Ma",
"Bingcheng Dong",
"Junjie Cao"
] | LiDAR sensors are critical for autonomous driving and robotics applications
due to their ability to provide accurate range measurements and their
robustness to lighting conditions. However, airborne particles, such as fog,
rain, snow, and dust, degrade their performance, and encountering these
inclement environmental conditions outdoors is inevitable. It would be a
straightforward approach to remove them by supervised semantic segmentation.
But annotating these particles point-wise is too laborious. To address this
problem and enhance the perception under inclement conditions, we develop two
dynamic filtering methods called Dynamic Multi-threshold Noise Removal (DMNR)
and DMNR-H by accurate analysis of the position distribution and intensity
characteristics of noisy points and clean points on publicly available WADS and
DENSE datasets. Both DMNR and DMNR-H outperform state-of-the-art unsupervised
methods by a significant margin on the two datasets and are slightly better
than supervised deep learning-based methods. Furthermore, our methods are more
robust to different LiDAR sensors and airborne particles, such as snow and fog. | [
"cs.CV",
"eess.IV"
] | false |
2305.06025 | 2023-05-10T10:21:14Z | Brain Tumor Detection using Swin Transformers | [
"Prateek A. Meshram",
"Suraj Joshi",
"Devarshi Mahajan"
] | The first MRI scan was done in the year 1978 by researchers at EML
Laboratories. As per an estimate, approximately 251,329 people died due to
primary cancerous brain and CNS (Central Nervous System) Tumors in the year
2020. It has been recommended by various medical professionals that brain tumor
detection at an early stage would help in saving many lives. Whenever
radiologists deal with a brain MRI they try to diagnose it with the
histological subtype, which is quite subjective, and herein lies the major issue.
Moreover, in developing countries like India, where there is 1 doctor for
every 1151 people, there is a clear need for efficient diagnostic tools to assist
radiologists and doctors. In our approach, we aim to solve the problem using
swin transformers and deep learning to detect, classify, locate and provide the
size of the tumor in the particular MRI scan which would assist the doctors and
radiologists in increasing their efficiency. In the end, medics would be
able to download the predictions and measurements in a PDF (Portable Document
Format). Keywords: brain tumor, transformers, classification, medical, deep
learning, detection | [
"eess.IV",
"cs.CV"
] | false |
2305.06080 | 2023-05-10T12:01:11Z | Towards Effective Visual Representations for Partial-Label Learning | [
"Shiyu Xia",
"Jiaqi Lv",
"Ning Xu",
"Gang Niu",
"Xin Geng"
] | Under partial-label learning (PLL) where, for each training instance, only a
set of ambiguous candidate labels containing the unknown true label is
accessible, contrastive learning has recently boosted the performance of PLL on
vision tasks, attributed to representations learned by contrasting the
same/different classes of entities. Without access to true labels, positive
points are predicted using pseudo-labels that are inherently noisy, and
negative points often require large batches or momentum encoders, resulting in
unreliable similarity information and a high computational overhead. In this
paper, we rethink a state-of-the-art contrastive PLL method, PiCO [24], inspiring
the design of a simple framework termed PaPi (Partial-label learning with a
guided Prototypical classifier), which demonstrates significant scope for
improvement in representation learning, thus contributing to label
disambiguation. PaPi guides the optimization of a prototypical classifier by a
linear classifier with which they share the same feature encoder, thus
explicitly encouraging the representation to reflect visual similarity between
categories. It is also technically appealing, as PaPi requires only a few
components in PiCO with the opposite direction of guidance, and directly
eliminates the contrastive learning module that would introduce noise and
consume computational resources. We empirically demonstrate that PaPi
significantly outperforms other PLL methods on various image classification
tasks. | [
"cs.CV",
"cs.LG"
] | false |
2305.06203 | 2023-05-10T14:35:07Z | Multiclass MRI Brain Tumor Segmentation using 3D Attention-based U-Net | [
"Maryann M. Gitonga"
] | This paper proposes a 3D attention-based U-Net architecture for multi-region
segmentation of brain tumors using a single stacked multi-modal volume created
by combining three non-native MRI volumes. The attention mechanism added to the
decoder side of the U-Net helps to improve segmentation accuracy by
de-emphasizing healthy tissues and accentuating malignant tissues, resulting in
better generalization power and reduced computational resources. The method is
trained and evaluated on the BraTS 2021 Task 1 dataset, and demonstrates
improved accuracy over other approaches. My findings suggest that the
proposed approach has the potential to enhance brain tumor segmentation using
multi-modal MRI data, contributing to better understanding and diagnosis of
brain diseases. This work highlights the importance of combining multiple
imaging modalities and incorporating attention mechanisms for improved accuracy
in brain tumor segmentation. | [
"eess.IV",
"cs.CV",
"I.4.6"
] | false |
2305.06236 | 2023-05-10T15:15:09Z | Radious: Unveiling the Enigma of Dental Radiology with BEIT Adaptor and
Mask2Former in Semantic Segmentation | [
"Mohammad Mashayekhi",
"Sara Ahmadi Majd",
"Arian Amiramjadi",
"Babak Mashayekhi"
] | X-ray images are the first steps for diagnosing and further treating dental
problems. So, early diagnosis prevents the development and increase of oral and
dental diseases. In this paper, we developed a semantic segmentation algorithm
based on BEIT adaptor and Mask2Former to detect and identify teeth, roots, and
multiple dental diseases and abnormalities such as pulp chamber, restoration,
endodontics, crown, decay, pin, composite, bridge, pulpitis, orthodontics,
radicular cyst, periapical cyst, cyst, implant, and bone graft material in
panoramic, periapical, and bitewing X-ray images. We compared the results of our
algorithm on our own dataset to two state-of-the-art image segmentation
algorithms, Deeplabv3+ and Segformer. We discovered that Radious
outperformed those algorithms, increasing the mIoU scores by 9% and 33% over
Deeplabv3+ and Segformer, respectively. | [
"cs.CV",
"cs.AI"
] | false |
2305.06244 | 2023-05-10T15:25:05Z | Explainable Knowledge Distillation for On-device Chest X-Ray
Classification | [
"Chakkrit Termritthikun",
"Ayaz Umer",
"Suwichaya Suwanwimolkul",
"Feng Xia",
"Ivan Lee"
] | Automated multi-label chest X-rays (CXR) image classification has achieved
substantial progress in clinical diagnosis via utilizing sophisticated deep
learning approaches. However, most deep models have high computational demands,
which makes them less feasible for compact devices with low computational
requirements. To overcome this problem, we propose a knowledge distillation
(KD) strategy to create the compact deep learning model for the real-time
multi-label CXR image classification. We study different alternatives of CNNs
and Transformers as the teacher to distill the knowledge to a smaller student.
Then, we employed explainable artificial intelligence (XAI) to provide the
visual explanation for the model decision improved by the KD. Our results on
three benchmark CXR datasets show that our KD strategy provides the improved
performance on the compact student model, thus being the feasible choice for
many limited hardware platforms. For instance, when using DenseNet161 as the
teacher network, EEEA-Net-C2 achieved an AUC of 83.7%, 87.1%, and 88.7% on the
ChestX-ray14, CheXpert, and PadChest datasets, respectively, with fewer
parameters of 4.7 million and computational cost of 0.3 billion FLOPS. | [
"cs.CV",
"cs.LG"
] | false |
2305.06351 | 2023-05-10T17:56:21Z | Reconstructing Animatable Categories from Videos | [
"Gengshan Yang",
"Chaoyang Wang",
"N Dinesh Reddy",
"Deva Ramanan"
] | Building animatable 3D models is challenging due to the need for 3D scans,
laborious registration, and manual rigging, which are difficult to scale to
arbitrary categories. Recently, differentiable rendering provides a pathway to
obtain high-quality 3D models from monocular videos, but these are limited to
rigid categories or single instances. We present RAC, which builds category-level
3D models from monocular videos while disentangling variations over instances and
motion over time. Three key ideas are introduced to solve this problem: (1)
specializing a skeleton to instances via optimization, (2) a method for latent
space regularization that encourages shared structure across a category while
maintaining instance details, and (3) using 3D background models to disentangle
objects from the background. We show that 3D models of humans, cats, and dogs
can be learned from 50-100 internet videos. | [
"cs.CV",
"cs.GR"
] | true |
2305.06394 | 2023-05-10T18:08:04Z | Local Region-to-Region Mapping-based Approach to Classify Articulated
Objects | [
"Ayush Aggarwal",
"Rustam Stolkin",
"Naresh Marturi"
] | Autonomous robots operating in real-world environments encounter a variety of
objects that can be both rigid and articulated in nature. Having knowledge of
these specific object properties not only helps in designing appropriate
manipulation strategies but also aids in developing reliable tracking and pose
estimation techniques for many robotic and vision applications. In this
context, this paper presents a registration-based local region-to-region
mapping approach to classify an object as either articulated or rigid. Using
the point clouds of the intended object, the proposed method performs
classification by estimating unique local transformations between point clouds
over the observed sequence of movements of the object. The significant
advantage of the proposed method is that it is a constraint-free approach that
can classify any articulated object and is not limited to a specific type of
articulation. Additionally, it is a model-free approach with no learning
components, which means it can classify whether an object is articulated
without requiring any object models or labelled data. We analyze the
performance of the proposed method on two publicly available benchmark datasets
with a combination of articulated and rigid objects. It is observed that the
proposed method can classify articulated and rigid objects with good accuracy. | [
"cs.CV",
"cs.RO"
] | false |
2305.06407 | 2023-05-10T18:32:32Z | Combo of Thinking and Observing for Outside-Knowledge VQA | [
"Qingyi Si",
"Yuchen Mo",
"Zheng Lin",
"Huishan Ji",
"Weiping Wang"
] | Outside-knowledge visual question answering is a challenging task that
requires both the acquisition and the use of open-ended real-world knowledge.
Some existing solutions draw external knowledge into the cross-modality space
which overlooks the much vaster textual knowledge in natural-language space,
while others transform the image into a text that further fuses with the
textual knowledge into the natural-language space and completely abandons the
use of visual features. In this paper, we are inspired to constrain the
cross-modality space to the natural-language space, which preserves
the visual features directly while the model still benefits from the
vast knowledge in the natural-language space. To this end, we propose a novel
framework consisting of a multimodal encoder, a textual encoder and an answer
decoder. Such structure allows us to introduce more types of knowledge
including explicit and implicit multimodal and textual knowledge. Extensive
experiments validate the superiority of the proposed method, which outperforms
the state-of-the-art by 6.17% in accuracy. We also conduct comprehensive ablations
of each component, and systematically study the roles of varying types of
knowledge. Codes and knowledge data can be found at
https://github.com/PhoebusSi/Thinking-while-Observing. | [
"cs.CV",
"cs.AI"
] | false |
2305.06437 | 2023-05-10T20:06:17Z | Self-Supervised Video Representation Learning via Latent Time Navigation | [
"Di Yang",
"Yaohui Wang",
"Quan Kong",
"Antitza Dantcheva",
"Lorenzo Garattoni",
"Gianpiero Francesca",
"Francois Bremond"
] | Self-supervised video representation learning typically aims at maximizing similarity
between different temporal segments of one video, in order to enforce feature
persistence over time. This leads to a loss of pertinent information related to
temporal relationships, rendering actions such as `enter' and `leave' to be
indistinguishable. To mitigate this limitation, we propose Latent Time
Navigation (LTN), a time-parameterized contrastive learning strategy that is
streamlined to capture fine-grained motions. Specifically, we maximize the
representation similarity between different video segments from one video,
while maintaining their representations time-aware along a subspace of the
latent representation code including an orthogonal basis to represent temporal
changes. Our extensive experimental analysis suggests that learning video
representations by LTN consistently improves performance of action
classification in fine-grained and human-oriented tasks (e.g., on Toyota
Smarthome dataset). In addition, we demonstrate that our proposed model, when
pre-trained on Kinetics-400, generalizes well onto the unseen real world video
benchmark datasets UCF101 and HMDB51, achieving state-of-the-art performance in
action recognition. | [
"cs.CV",
"cs.AI"
] | false |
2305.06448 | 2023-05-10T20:35:38Z | Continual Facial Expression Recognition: A Benchmark | [
"Nikhil Churamani",
"Tolga Dimlioglu",
"German I. Parisi",
"Hatice Gunes"
] | Understanding human affective behaviour, especially in the dynamics of
real-world settings, requires Facial Expression Recognition (FER) models to
continuously adapt to individual differences in user expression, contextual
attributions, and the environment. Current (deep) Machine Learning (ML)-based
FER approaches pre-trained in isolation on benchmark datasets fail to capture
the nuances of real-world interactions where data is available only
incrementally, acquired by the agent or robot during interactions. New learning
comes at the cost of previous knowledge, resulting in catastrophic forgetting.
Lifelong or Continual Learning (CL), on the other hand, enables adaptability in
agents by being sensitive to changing data distributions, integrating new
information without interfering with previously learnt knowledge. Positing CL
as an effective learning paradigm for FER, this work presents the Continual
Facial Expression Recognition (ConFER) benchmark that evaluates popular CL
techniques on FER tasks. It presents a comparative analysis of several CL-based
approaches on popular FER datasets such as CK+, RAF-DB, and AffectNet, and
presents strategies for a successful implementation of ConFER for Affective
Computing (AC) research. CL techniques, under different learning settings, are
shown to achieve state-of-the-art (SOTA) performance across several datasets,
thus motivating a discussion on the benefits of applying CL principles towards
human behaviour understanding, particularly from facial expressions, as well as
the challenges entailed. | [
"cs.CV",
"cs.LG"
] | false |
2305.05835 | 2023-05-10T01:48:01Z | Reference-based OCT Angiogram Super-resolution with Learnable Texture
Generation | [
"Yuyan Ruan",
"Dawei Yang",
"Ziqi Tang",
"An Ran Ran",
"Carol Y. Cheung",
"Hao Chen"
] | Optical coherence tomography angiography (OCTA) is a new imaging modality to
visualize retinal microvasculature and has been readily adopted in clinics.
High-resolution OCT angiograms are important to qualitatively and
quantitatively identify potential biomarkers for different retinal diseases
accurately. However, one significant problem of OCTA is the inevitable decrease
in resolution when increasing the field-of-view given a fixed acquisition time.
To address this issue, we propose a novel reference-based super-resolution
(RefSR) framework to preserve the resolution of the OCT angiograms while
increasing the scanning area. Specifically, textures from the normal RefSR
pipeline are used to train a learnable texture generator (LTG), which is
designed to generate textures according to the input. The key difference
between the proposed method and traditional RefSR models is that the textures
used during inference are generated by the LTG instead of being searched from a
single reference image. Since the LTG is optimized throughout the whole
training process, the available texture space is significantly enlarged and no
longer limited to a single reference image, but extends to all textures
contained in the training samples. Moreover, our proposed LTGNet does not
require a reference image at the inference phase, thereby becoming insensitive
to the selection of the reference image. Both experimental and visual results
show that LTGNet has superior performance and robustness over state-of-the-art
methods, indicating good reliability and promise in real-life deployment. The
source code will be made available upon acceptance. | [
"eess.IV",
"cs.CV",
"cs.LG",
"68T07",
"I.2; I.4"
] | false |
2305.05867 | 2023-05-10T03:20:39Z | Optical Aberration Correction in Postprocessing using Imaging Simulation | [
"Shiqi Chen",
"Huajun Feng",
"Dexin Pan",
"Zhihai Xu",
"Qi Li",
"Yueting Chen"
] | As the popularity of mobile photography continues to grow, considerable
effort is being invested in the reconstruction of degraded images. Due to the
spatial variation in optical aberrations, which cannot be avoided during the
lens design process, recent commercial cameras have shifted some of these
correction tasks from optical design to postprocessing systems. However,
without engaging with the optical parameters, these systems only achieve
limited correction for aberrations. In this work, we propose a practical method
for recovering the degradation caused by optical aberrations. Specifically, we
establish an imaging simulation system based on our proposed optical point
spread function model. Given the optical parameters of the camera, it generates
the imaging results of these specific devices. To perform the restoration, we
design a spatial-adaptive network model on synthetic data pairs generated by
the imaging simulation system, eliminating the overhead of capturing training
data by a large amount of shooting and registration. Moreover, we
comprehensively evaluate the proposed method in simulations and experimentally
with a customized digital-single-lens-reflex (DSLR) camera lens and HUAWEI
HONOR 20, respectively. The experiments demonstrate that our solution
successfully removes spatially variant blur and color dispersion. When compared
with the state-of-the-art deblur methods, the proposed approach achieves better
results with a lower computational overhead. Moreover, the reconstruction
technique does not introduce artificial texture and is convenient to transfer
to current commercial cameras. Project Page:
\url{https://github.com/TanGeeGo/ImagingSimulation}. | [
"cs.CV",
"cs.GR",
"cs.MM",
"eess.IV"
] | false |
2305.05899 | 2023-05-10T05:05:58Z | Mobile Image Restoration via Prior Quantization | [
"Shiqi Chen",
"Jinwen Zhou",
"Menghao Li",
"Yueting Chen",
"Tingting Jiang"
] | In digital images, optical aberration manifests as a multivariate
degradation, where the spectrum of the scene, the lens imperfections, and the
field of view together contribute to the results. Besides eliminating it at the
hardware level, the post-processing system, which utilizes various prior
information, is significant for correction. However, due to the content
differences among priors, the pipeline that aligns these factors shows limited
efficiency and unoptimized restoration. Here, we propose a prior quantization
model to correct the optical aberrations in image processing systems. To
integrate these messages, we encode various priors into a latent space and
quantify them by the learnable codebooks. After quantization, the prior codes
are fused with the image restoration branch to realize targeted optical
aberration correction. Comprehensive experiments demonstrate the flexibility of
the proposed method and validate its potential to accomplish targeted
restoration for a specific camera. Furthermore, our model promises to analyze
the correlation between the various priors and the optical aberration of
devices, which is helpful for joint soft-hardware design. | [
"cs.CV",
"cs.MM",
"eess.IV"
] | false |
2305.06114 | 2023-05-10T13:05:43Z | Few-shot Action Recognition via Intra- and Inter-Video Information
Maximization | [
"Huabin Liu",
"Weiyao Lin",
"Tieyuan Chen",
"Yuxi Li",
"Shuyuan Li",
"John See"
] | Current few-shot action recognition involves two primary sources of
information for classification: (1) intra-video information, determined by frame
content within a single video clip, and (2) inter-video information, measured
by relationships (e.g., feature similarity) among videos. However, existing
methods inadequately exploit these two information sources. In terms of
intra-video information, current sampling operations for input videos may omit
critical action information, reducing the utilization efficiency of video data.
For the inter-video information, the action misalignment among videos makes it
challenging to calculate precise relationships. Moreover, how to jointly
consider both inter- and intra-video information remains under-explored for
few-shot action recognition. To this end, we propose a novel framework, Video
Information Maximization (VIM), for few-shot video action recognition. VIM is
equipped with an adaptive spatial-temporal video sampler and a spatiotemporal
action alignment model to maximize intra- and inter-video information,
respectively. The video sampler adaptively selects important frames and
amplifies critical spatial regions for each input video based on the task at
hand. This preserves and emphasizes informative parts of video clips while
eliminating interference at the data level. The alignment model performs
temporal and spatial action alignment sequentially at the feature level,
leading to more precise measurements of inter-video similarity. Finally, These
goals are facilitated by incorporating additional loss terms based on mutual
information measurement. Consequently, VIM acts to maximize the distinctiveness
of video information from limited video data. Extensive experimental results on
public datasets for few-shot action recognition demonstrate the effectiveness
and benefits of our framework. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.06289 | 2023-05-10T16:25:42Z | Learning Video-Conditioned Policies for Unseen Manipulation Tasks | [
"Elliot Chane-Sane",
"Cordelia Schmid",
"Ivan Laptev"
] | The ability to specify robot commands by a non-expert user is critical for
building generalist agents capable of solving a large variety of tasks. One
convenient way to specify the intended robot goal is by a video of a person
demonstrating the target task. While prior work typically aims to imitate human
demonstrations performed in robot environments, here we focus on a more
realistic and challenging setup with demonstrations recorded in natural and
diverse human environments. We propose Video-conditioned Policy learning (ViP),
a data-driven approach that maps human demonstrations of previously unseen
tasks to robot manipulation skills. To this end, we learn our policy to
generate appropriate actions given current scene observations and a video of
the target task. To encourage generalization to new tasks, we avoid particular
tasks during training and learn our policy from unlabelled robot trajectories
and corresponding robot videos. Both robot and human videos in our framework
are represented by video embeddings pre-trained for human action recognition.
At test time we first translate human videos to robot videos in the common
video embedding space, and then use resulting embeddings to condition our
policies. Notably, our approach enables robot control by human demonstrations
in a zero-shot manner, i.e., without using robot trajectories paired with human
instructions during training. We validate our approach on a set of challenging
multi-task robot manipulation environments and outperform the state of the art. Our
method also demonstrates excellent performance in a new challenging zero-shot
setup where no paired data is used during training. | [
"cs.RO",
"cs.CV",
"cs.LG"
] | false |
2305.06305 | 2023-05-10T16:51:36Z | Self-Supervised Instance Segmentation by Grasping | [
"YuXuan Liu",
"Xi Chen",
"Pieter Abbeel"
] | Instance segmentation is a fundamental skill for many robotic applications.
We propose a self-supervised method that uses grasp interactions to collect
segmentation supervision for an instance segmentation model. When a robot
grasps an item, the mask of that grasped item can be inferred from the images
of the scene before and after the grasp. Leveraging this insight, we learn a
grasp segmentation model to segment the grasped object from before and after
grasp images. Such a model can segment grasped objects from thousands of grasp
interactions without costly human annotation. Using the segmented grasped
objects, we can "cut" objects from their original scenes and "paste" them into
new scenes to generate instance supervision. We show that our grasp
segmentation model provides a 5x error reduction when segmenting grasped
objects compared with traditional image subtraction approaches. Combined with
our "cut-and-paste" generation method, instance segmentation models trained
with our method achieve better performance than a model trained with 10x the
amount of labeled data. On a real robotic grasping system, our instance
segmentation model reduces the rate of grasp errors by over 3x compared to an
image subtraction baseline. | [
"cs.CV",
"cs.AI",
"cs.RO"
] | false |
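Note: both ideas above can be illustrated in a few lines. The differencing step below is a naive stand-in for the learned grasp segmentation model (which the paper reports is about 5x more accurate than subtraction), and the paste assumes the object fits at the target location.

    import numpy as np

    def grasp_mask(before, after, thresh=25):
        # Pixels that changed when the grasped object left the scene.
        diff = np.abs(before.astype(np.int16) - after.astype(np.int16)).max(-1)
        return diff > thresh                           # (H, W) boolean mask

    def cut_and_paste(obj_img, mask, scene, y, x):
        out = scene.copy()
        ys, xs = np.nonzero(mask)
        out[ys + y, xs + x] = obj_img[ys, xs]          # paste the object pixels
        inst = np.zeros(scene.shape[:2], bool)
        inst[ys + y, xs + x] = True                    # free instance label
        return out, inst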
2305.06314 | 2023-05-10T17:01:18Z | Scan2LoD3: Reconstructing semantic 3D building models at LoD3 using ray
casting and Bayesian networks | [
"Olaf Wysocki",
"Yan Xia",
"Magdalena Wysocki",
"Eleonora Grilli",
"Ludwig Hoegner",
"Daniel Cremers",
"Uwe Stilla"
] | Reconstructing semantic 3D building models at the level of detail (LoD) 3 is
a long-standing challenge. Unlike mesh-based models, they require watertight
geometry and object-wise semantics at the fa\c{c}ade level. The principal
challenge of such demanding semantic 3D reconstruction is reliable
fa\c{c}ade-level semantic segmentation of 3D input data. We present a novel
method, called Scan2LoD3, that accurately reconstructs semantic LoD3 building
models by improving fa\c{c}ade-level semantic 3D segmentation. To this end, we
leverage laser physics and 3D building model priors to probabilistically
identify model conflicts. These probabilistic physical conflicts propose
locations of model openings: Their final semantics and shapes are inferred in a
Bayesian network fusing multimodal probabilistic maps of conflicts, 3D point
clouds, and 2D images. To fulfill demanding LoD3 requirements, we use the
estimated shapes to cut openings in 3D building priors and fit semantic 3D
objects from a library of fa\c{c}ade objects. Extensive experiments on the TUM
city campus datasets demonstrate the superior performance of the proposed
Scan2LoD3 over the state-of-the-art methods in fa\c{c}ade-level detection,
semantic segmentation, and LoD3 building model reconstruction. We believe our
method can foster the development of probability-driven semantic 3D
reconstruction at LoD3 since not only the high-definition reconstruction but
also reconstruction confidence becomes pivotal for various applications such as
autonomous driving and urban simulations. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.06386 | 2023-05-10T18:01:06Z | Text-To-Concept (and Back) via Cross-Model Alignment | [
"Mazda Moayeri",
"Keivan Rezaei",
"Maziar Sanjabi",
"Soheil Feizi"
] | We observe that the mapping from an image's representation in one model to
its representation in another can be learned surprisingly well with just a
linear layer, even across diverse models. Building on this observation, we
propose $\textit{text-to-concept}$, where features from a fixed pretrained
model are aligned linearly to the CLIP space, so that text embeddings from
CLIP's text encoder become directly comparable to the aligned features. With
text-to-concept, we convert fixed off-the-shelf vision encoders to surprisingly
strong zero-shot classifiers for free, with accuracy at times even surpassing
that of CLIP, despite being much smaller models and trained on a small fraction
of the data compared to CLIP. We show other immediate use-cases of
text-to-concept, like building concept bottleneck models with no concept
supervision, diagnosing distribution shifts in terms of human concepts, and
retrieving images satisfying a set of text-based constraints. Lastly, we
demonstrate the feasibility of $\textit{concept-to-text}$, where vectors in a
model's feature space are decoded by first aligning to the CLIP space before being
fed to a GPT-based generative model. Our work suggests that existing deep
models, despite presumably diverse architectures and training, represent input
samples relatively similarly, and that two-way communication across model
representation spaces and to humans (through language) is viable. | [
"cs.CV",
"cs.AI",
"cs.HC",
"cs.LG"
] | false |
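To make the linear-alignment idea in the record above concrete, here is a minimal sketch of text-to-concept: a single trainable linear layer maps a frozen encoder's features into CLIP space, after which CLIP text embeddings act as zero-shot classifier weights. The feature dimensions and the regression-style fitting objective are assumptions for illustration, not the paper's exact training recipe.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: a frozen vision encoder with 2048-d features,
# and a 512-d CLIP image/text space.
feat_dim, clip_dim = 2048, 512
align = nn.Linear(feat_dim, clip_dim)  # the only trainable component

def alignment_loss(model_feats, clip_feats):
    # Fit the linear map so aligned features match CLIP's image embeddings
    # for the same images (a simple regression objective; the paper's exact
    # objective may differ).
    return ((align(model_feats) - clip_feats) ** 2).mean()

@torch.no_grad()
def zero_shot_logits(model_feats, class_text_embs):
    # After alignment, CLIP text embeddings serve as classifier weights.
    img = nn.functional.normalize(align(model_feats), dim=-1)
    txt = nn.functional.normalize(class_text_embs, dim=-1)
    return img @ txt.T  # cosine similarities -> class scores
```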
2305.05858 | 2023-05-10T03:07:17Z | Vārta: A Large-Scale Headline-Generation Dataset for Indic Languages | [
"Rahul Aralikatte",
"Ziling Cheng",
"Sumanth Doddapaneni",
"Jackie Chi Kit Cheung"
] | We present V\=arta, a large-scale multilingual dataset for headline
generation in Indic languages. This dataset includes 41.8 million news articles
in 14 different Indic languages (and English), which come from a variety of
high-quality sources. To the best of our knowledge, this is the largest
collection of curated articles for Indic languages currently available. We use
the collected data in a series of experiments to answer important questions
related to Indic NLP and multilinguality research in general. We show that the
dataset is challenging even for state-of-the-art abstractive models and that
they perform only slightly better than extractive baselines. Owing to its size,
we also show that the dataset can be used to pretrain strong language models
that outperform competitive baselines in both NLU and NLG benchmarks. | [
"cs.CL"
] | false |
2305.05874 | 2023-05-10T03:45:22Z | Address Matching Based On Hierarchical Information | [
"Chengxian Zhang",
"Jintao Tang",
"Ting Wang",
"Shasha Li"
] | Address matching plays a crucial role in many areas such as express
delivery and online shopping. In contrast to unstructured texts, an address has
a hierarchical structure, which can contribute valuable information for address
matching. Based on this idea, this paper proposes a novel method that leverages
hierarchical information in a deep learning model, which not only improves the
ability of existing methods to handle irregular addresses, but also pays closer
attention to the distinctive parts of an address. Experimental findings
demonstrate that the proposed method improves the current approach by 3.2
percentage points. | [
"cs.CL"
] | false |
2305.05936 | 2023-05-10T07:13:47Z | Multi-hop Commonsense Knowledge Injection Framework for Zero-Shot
Commonsense Question Answering | [
"Xin Guan",
"Biwei Cao",
"Qingqing Gao",
"Zheng Yin",
"Bo Liu",
"Jiuxin Cao"
] | Commonsense question answering (QA) research requires machines to answer
questions based on commonsense knowledge. However, such research requires
expensive manual annotation as its basis, and models that rely on fine-tuning
paradigms only apply to specific tasks rather than learning a general
commonsense reasoning ability. As a more robust alternative, zero-shot
commonsense question answering is a promising direction. The current
zero-shot framework tries to convert triples in commonsense knowledge graphs
(KGs) into QA-form samples as the pre-trained data source to incorporate
commonsense knowledge into the model. However, this method ignores the
multi-hop relationships in the KG, which are also a central problem in
commonsense reasoning. In this paper, we propose a novel multi-hop commonsense
knowledge injection framework. Specifically, it explores multi-hop reasoning
paths in KGs that conform to linguistic logic, and we further propose two
multi-hop QA generation methods based on KGs. Then, we utilize contrastive
learning to pre-train the model with the synthetic QA dataset to inject
multi-hop commonsense knowledge. Extensive experiments on five commonsense
question answering benchmarks demonstrate that our framework achieves
state-of-the-art performance. | [
"cs.CL"
] | false |
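A toy sketch of the multi-hop QA generation step described in the record above: a 2-hop KG path is verbalized with a relation-chain template into a QA-form sample with sampled distractors. The path, template, and entity pool here are hypothetical stand-ins for the commonsense KGs the paper draws on.

```python
import random

# Toy 2-hop paths (head, relation1, mid, relation2, tail); the framework
# derives such paths from commonsense KGs.
paths = [
    ("mug", "used_for", "drinking", "requires", "liquid"),
]
templates = {  # hypothetical natural-language template per relation chain
    ("used_for", "requires"): "A {h} is used for {m}. What does {m} require?",
}
entities = ["liquid", "sand", "music", "paper"]  # pool for distractor sampling

def make_qa(path):
    h, r1, m, r2, t = path
    question = templates[(r1, r2)].format(h=h, m=m)
    distractors = random.sample([e for e in entities if e != t], 3)
    return {"question": question, "answer": t, "choices": [t] + distractors}

print(make_qa(paths[0]))
```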
2305.05945 | 2023-05-10T07:33:36Z | Adapter-TST: A Parameter Efficient Method for Multiple-Attribute Text
Style Transfer | [
"Zhiqiang Hu",
"Roy Ka-Wei Lee",
"Nancy F. Chen"
] | Adapting a large language model for multiple-attribute text style transfer
via fine-tuning can be challenging due to the significant amount of
computational resources and labeled data required for the specific task. In
this paper, we address this challenge by introducing Adapter-TST, a framework
that freezes the pre-trained model's original parameters and enables the
development of a multiple-attribute text style transfer model. Using BART as
the backbone model, Adapter-TST utilizes different neural adapters to capture
different attribute information, like a plug-in connected to BART. Our method
allows control over multiple attributes (e.g., sentiment, tense, and voice)
and configures the adapters' architecture to generate multiple outputs with
respect to the attributes or to perform compositional editing on the same sentence. We
evaluate the proposed model on both traditional sentiment transfer and
multiple-attribute transfer tasks. The experiment results demonstrate that
Adapter-TST outperforms all the state-of-the-art baselines with significantly
fewer computational resources. We have also empirically shown that each
adapter is able to capture specific stylistic attributes effectively and can be
configured to perform compositional editing. | [
"cs.CL"
] | false |
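A minimal sketch of the plug-in adapter idea from the record above: a residual bottleneck module attached to a frozen backbone, with one adapter per controllable attribute. Hidden and bottleneck sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter in the style of Adapter-TST: a small trainable
    module plugged into a frozen backbone (e.g., BART). Only the adapters
    are trained; the backbone parameters stay frozen."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual bottleneck

# One adapter per controllable attribute.
adapters = nn.ModuleDict({
    "sentiment": Adapter(),
    "tense": Adapter(),
    "voice": Adapter(),
})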
2305.05968 | 2023-05-10T08:27:59Z | Investigating Forgetting in Pre-Trained Representations Through
Continual Learning | [
"Yun Luo",
"Zhen Yang",
"Xuefeng Bai",
"Fandong Meng",
"Jie Zhou",
"Yue Zhang"
] | Representation forgetting refers to the drift of contextualized
representations during continual training. Intuitively, representation
forgetting can influence the general knowledge stored in pre-trained language
models (LMs), but the concrete effect is still unclear. In this paper, we study
the effect of representation forgetting on the generality of pre-trained
language models, i.e. the potential capability for tackling future downstream
tasks. Specifically, we design three metrics, including overall generality
destruction (GD), syntactic knowledge forgetting (SynF), and semantic knowledge
forgetting (SemF), to measure the evolution of general knowledge in continual
learning. With extensive experiments, we find that generality is degraded in
various pre-trained LMs, and that syntactic and semantic knowledge is forgotten
through continual learning. Based on our experiments and analysis, we further
derive two insights into alleviating general knowledge forgetting: 1) training on
general linguistic tasks at first can mitigate general knowledge forgetting; 2)
the hybrid continual learning method can mitigate the generality destruction
and maintain more general knowledge compared with those only considering
rehearsal or regularization. | [
"cs.CL"
] | false |
2305.06274 | 2023-05-10T16:06:36Z | Context-Aware Document Simplification | [
"Liam Cripwell",
"Joël Legrand",
"Claire Gardent"
] | To date, most work on text simplification has focused on sentence-level
inputs. Early attempts at document simplification merely applied these
approaches iteratively over the sentences of a document. However, this fails to
coherently preserve the discourse structure, leading to suboptimal output
quality. Recently, strategies from controllable simplification have been
leveraged to achieve state-of-the-art results on document simplification by
first generating a document-level plan (a sequence of sentence-level
simplification operations) and using this plan to guide sentence-level
simplification downstream. However, this is still limited in that the
simplification model has no direct access to the local inter-sentence document
context, likely having a negative impact on surface realisation. We explore
various systems that use document context within the simplification process
itself, either by iterating over larger text units or by extending the system
architecture to attend over a high-level representation of document context. In
doing so, we achieve state-of-the-art performance on the document
simplification task, even when not relying on plan-guidance. Further, we
investigate the performance and efficiency tradeoffs of system variants and
suggest when each should be preferred. | [
"cs.CL"
] | false |
2305.06330 | 2023-05-10T17:34:52Z | Korean Named Entity Recognition Based on Language-Specific Features | [
"Yige Chen",
"KyungTae Lim",
"Jungyeul Park"
] | In the paper, we propose a novel way of improving named entity recognition in
the Korean language using its language-specific features. While the field of
named entity recognition has been studied extensively in recent years, the
mechanism of efficiently recognizing named entities in Korean has hardly been
explored. This is because the Korean language has distinct linguistic
properties that prevent models from achieving their best performances.
Therefore, we propose an annotation scheme for Korean corpora that adopts the
CoNLL-U format, which decomposes Korean words into morphemes and reduces the
ambiguity of named entities in the original segmentation that may contain
functional morphemes such as postpositions and particles. We
investigate how the named entity tags are best represented in this
morpheme-based scheme and implement an algorithm to convert word-based and
syllable-based Korean corpora with named entities into the proposed
morpheme-based format. Analyses of the results of statistical and neural
models reveal that the proposed morpheme-based format is feasible, and the
varied performances of the models under the influence of various additional
language-specific features are demonstrated. Extrinsic conditions were also
considered to observe the variance of the performances of the proposed models,
given different types of data, including the original segmentation and
different types of tagging formats. | [
"cs.CL"
] | false |
2305.11064 | 2023-05-10T09:02:34Z | Bits of Grass: Does GPT already know how to write like Whitman? | [
"Piotr Sawicki",
"Marek Grzes",
"Fabricio Goes",
"Dan Brown",
"Max Peeperkorn",
"Aisha Khatun"
] | This study examines the ability of GPT-3.5, GPT-3.5-turbo (ChatGPT) and GPT-4
models to generate poems in the style of specific authors using zero-shot and
many-shot prompts (which use the maximum context length of 8192 tokens). We
assess the performance of models that are not fine-tuned for generating poetry
in the style of specific authors, via automated evaluation. Our findings
indicate that without fine-tuning, even when provided with the maximum number
of 17 poem examples (8192 tokens) in the prompt, these models do not generate
poetry in the desired style. | [
"cs.CL"
] | false |
2305.16324 | 2023-05-10T12:24:03Z | Talking with Machines: A Comprehensive Survey of Emergent Dialogue
Systems | [
"William Tholke"
] | From the earliest experiments in the 20th century to the utilization of large
language models and transformers, dialogue systems research has continued to
evolve, playing crucial roles in numerous fields. This paper offers a
comprehensive review of these systems, tracing their historical development and
examining their fundamental operations. We analyze popular and emerging
datasets for training and survey key contributions in dialogue systems
research, including traditional systems and advanced machine learning methods.
Finally, we consider conventional and transformer-based evaluation metrics,
followed by a short discussion of prevailing challenges and future prospects in
the field. | [
"cs.CL"
] | false |
2305.05821 | 2023-05-10T00:33:08Z | Context-dependent communication under environmental constraints | [
"Krzysztof Główka",
"Julian Zubek",
"Joanna Rączaszek-Leonardi"
] | There is significant evidence that real-world communication cannot be reduced
to sending signals with context-independent meaning. In this work, based on a
variant of the classical Lewis (1969) signaling model, we explore the
conditions for the emergence of context-dependent communication in a situated
scenario. In particular, we demonstrate that pressure to minimise the
vocabulary size is sufficient for such emergence. At the same time, we study
the environmental conditions and cognitive capabilities that enable contextual
disambiguation of symbol meanings. We show that environmental constraints on
the receiver's referent choice can be unilaterally exploited by the sender,
without disambiguation capabilities on the receiver's end. Consistent with
common assumptions, the sender's awareness of the context appears to be
required for contextual communication. We suggest that context-dependent
communication is a situated multilayered phenomenon, crucially influenced by
environment properties such as distribution of contexts. The model developed in
this work is a demonstration of how signals may be ambiguous out of context,
but still allow for near-perfect communication accuracy. | [
"cs.AI",
"cs.CL"
] | false |
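A compact simulation in the spirit of the Lewis signaling variant described in the record above: fewer signals than states forces ambiguity, and an environmental constraint on the receiver's referent choice lets simple reinforcement reach high accuracy out of context. The learning rule (Roth-Erev reinforcement) and the sizes are assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_signals = 4, 2          # fewer signals than states forces ambiguity
S = np.ones((n_states, n_signals))  # sender propensities: state -> signal
R = np.ones((n_signals, n_states))  # receiver propensities: signal -> referent

for _ in range(20000):
    state = rng.integers(n_states)
    # Environmental constraint: the receiver chooses among a context of two
    # referents that includes the true one (cf. the paper's situated setup).
    context = [state, rng.integers(n_states)]
    sig = rng.choice(n_signals, p=S[state] / S[state].sum())
    probs = R[sig, context] / R[sig, context].sum()
    guess = context[rng.choice(2, p=probs)]
    if guess == state:              # Roth-Erev style reinforcement on success
        S[state, sig] += 1
        R[sig, state] += 1

print("sender policy:\n", (S / S.sum(1, keepdims=True)).round(2))
```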
2305.05834 | 2023-05-10T01:46:17Z | Unsupervised Dense Retrieval Training with Web Anchors | [
"Yiqing Xie",
"Xiao Liu",
"Chenyan Xiong"
] | In this work, we present an unsupervised retrieval method with contrastive
learning on web anchors. The anchor text describes the content that is
referenced from the linked page. This shows similarities to search queries that
aim to retrieve pertinent information from relevant documents. Based on their
commonalities, we train an unsupervised dense retriever, Anchor-DR, with a
contrastive learning task that matches the anchor text and the linked document.
To filter out uninformative anchors (such as ``homepage'' or other functional
anchors), we present a novel filtering technique to only select anchors that
contain similar types of information as search queries. Experiments show that
Anchor-DR outperforms state-of-the-art methods on unsupervised dense retrieval
by a large margin (e.g., by 5.3% NDCG@10 on MSMARCO). The gain of our method is
especially significant for search and question answering tasks. Our analysis
further reveals that the pattern of anchor-document pairs is similar to that of
search query-document pairs. Code available at
https://github.com/Veronicium/AnchorDR. | [
"cs.IR",
"cs.CL"
] | false |
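The core training signal described in the record above can be sketched as in-batch contrastive learning over (anchor text, linked document) pairs; the temperature value and cosine similarity are illustrative choices, not necessarily Anchor-DR's exact settings.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_embs, doc_embs, temperature=0.05):
    """In-batch InfoNCE over (anchor text, linked document) pairs: the i-th
    anchor should match the i-th document and mismatch the other documents
    in the batch."""
    a = F.normalize(anchor_embs, dim=-1)
    d = F.normalize(doc_embs, dim=-1)
    logits = a @ d.T / temperature        # B x B similarity matrix
    labels = torch.arange(a.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```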
2305.05948 | 2023-05-10T07:39:57Z | Multi-Path Transformer is Better: A Case Study on Neural Machine
Translation | [
"Ye Lin",
"Shuhan Zhou",
"Yanyang Li",
"Anxiang Ma",
"Tong Xiao",
"Jingbo Zhu"
] | For years, model performance in machine learning has obeyed a power-law
relationship with model size. For parameter efficiency, recent studies focus
on increasing model depth rather than width to
achieve better performance. In this paper, we study how model width affects the
Transformer model through a parameter-efficient multi-path structure. To better
fuse features extracted from different paths, we add three additional
operations to each sublayer: a normalization at the end of each path, a cheap
operation to produce more features, and a learnable weighted mechanism to fuse
all features flexibly. Extensive experiments on 12 WMT machine translation
tasks show that, with the same number of parameters, the shallower multi-path
model can achieve similar or even better performance than the deeper model. It
reveals that we should pay more attention to the multi-path structure, and
that there should be a balance between model depth and width to train a better
large-scale Transformer. | [
"cs.CL",
"cs.AI"
] | false |
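A sketch of the multi-path sublayer with the three added operations named in the record above: a normalization at the end of each path, a cheap operation to produce extra features (element-wise scaling here, as a stand-in), and a learnable weighted fusion. Dimensions and the specific cheap operation are assumptions.

```python
import torch
import torch.nn as nn

class MultiPathFFN(nn.Module):
    """Multi-path feed-forward sublayer: parallel paths, each ending in a
    LayerNorm, a cheap element-wise scaling to produce more features, and a
    learnable softmax-weighted fusion of all features."""
    def __init__(self, d_model=512, d_ff=1024, n_paths=2):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model), nn.LayerNorm(d_model))
            for _ in range(n_paths))
        self.cheap = nn.Parameter(torch.ones(n_paths, d_model))  # cheap op
        self.fuse = nn.Parameter(torch.zeros(2 * n_paths))       # fusion weights

    def forward(self, x):
        feats = []
        for i, path in enumerate(self.paths):
            h = path(x)
            feats += [h, h * self.cheap[i]]  # original + cheaply derived features
        w = torch.softmax(self.fuse, dim=0)
        return sum(wi * f for wi, f in zip(w, feats))
```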
2305.05994 | 2023-05-10T09:03:01Z | ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A
Million-scale Knowledge Base | [
"Siyu Yuan",
"Jiangjie Chen",
"Changzhi Sun",
"Jiaqing Liang",
"Yanghua Xiao",
"Deqing Yang"
] | Analogical reasoning is a fundamental cognitive ability of humans. However,
current language models (LMs) still struggle to achieve human-like performance
in analogical reasoning tasks due to a lack of resources for model training. In
this work, we address this gap by proposing ANALOGYKB, a million-scale analogy
knowledge base (KB) derived from existing knowledge graphs (KGs). ANALOGYKB
identifies two types of analogies from the KGs: 1) analogies of the same
relations, which can be directly extracted from the KGs, and 2) analogies of
analogous relations, which are identified with a selection and filtering
pipeline enabled by large LMs (InstructGPT), followed by minor human effort
for data quality control. Evaluations on a series of datasets of two analogical
reasoning tasks (analogy recognition and generation) demonstrate that ANALOGYKB
successfully enables LMs to achieve much better results than previous
state-of-the-art methods. | [
"cs.CL",
"cs.AI"
] | false |
2305.06074 | 2023-05-10T11:55:17Z | iLab at SemEval-2023 Task 11 Le-Wi-Di: Modelling Disagreement or
Modelling Perspectives? | [
"Nikolas Vitsakis",
"Amit Parekh",
"Tanvi Dinkar",
"Gavin Abercrombie",
"Ioannis Konstas",
"Verena Rieser"
] | There are two competing approaches for modelling annotator disagreement:
distributional soft-labelling approaches (which aim to capture the level of
disagreement) or modelling perspectives of individual annotators or groups
thereof. We adapt a multi-task architecture -- which has previously shown
success in modelling perspectives -- to evaluate its performance on the SEMEVAL
Task 11. We do so by combining both approaches, i.e. predicting individual
annotator perspectives as an interim step towards predicting annotator
disagreement. Despite its previous success, we found that a multi-task approach
performed poorly on datasets which contained distinct annotator opinions,
suggesting that this approach may not always be suitable when modelling
perspectives. Furthermore, our results show that while strongly
perspectivist approaches might not achieve state-of-the-art performance
according to evaluation metrics used by distributional approaches, our approach
allows for a more nuanced understanding of individual perspectives present in
the data. We argue that perspectivist approaches are preferable because they
enable decision makers to amplify minority views, and that it is important to
re-evaluate metrics to reflect this goal. | [
"cs.CL",
"cs.LG"
] | false |
2305.06099 | 2023-05-10T12:40:48Z | PAI at SemEval-2023 Task 2: A Universal System for Named Entity
Recognition with External Entity Information | [
"Long Ma",
"Kai Lu",
"Tianbo Che",
"Hailong Huang",
"Weiguo Gao",
"Xuan Li"
] | The MultiCoNER II task aims to detect complex, ambiguous, and fine-grained
named entities across multiple languages in low-context situations and noisy
scenarios, such as the presence of spelling mistakes and typos. The task poses
significant challenges due to the scarcity of contextual information, the high
granularity of the entities (up to 33 classes), and the interference of noisy
data. To address these issues, our team {\bf PAI} proposes a universal Named
Entity Recognition (NER) system that integrates external entity information to
improve performance. Specifically, our system retrieves entities with
properties from the knowledge base (i.e. Wikipedia) for a given text, then
concatenates entity information with the input sentence and feeds it into
Transformer-based models. Finally, our system wins 2 first places, 4 second
places, and 1 third place out of 13 tracks. The code is publicly available at
\url{https://github.com/diqiuzhuanzhuan/semeval-2023}. | [
"cs.CL",
"cs.AI"
] | false |
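The retrieve-and-concatenate step described in the record above can be sketched as follows; the knowledge-base lookup table and the separator format are hypothetical stand-ins for the system's Wikipedia retrieval.

```python
# Hypothetical KB lookup: entity -> short property/description text.
kb = {"MultiCoNER": "a shared task on complex named entity recognition"}

def augment(sentence, retrieved_entities):
    """Append retrieved entity information after a separator; the combined
    sequence is then fed to a Transformer-based tagger."""
    context = " ".join(f"{e}: {kb[e]}." for e in retrieved_entities if e in kb)
    return f"{sentence} [SEP] {context}"

print(augment("PAI wins MultiCoNER tracks", ["MultiCoNER"]))
```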
2305.06212 | 2023-05-10T14:41:51Z | Privacy-Preserving Prompt Tuning for Large Language Model Services | [
"Yansong Li",
"Zhixing Tan",
"Yang Liu"
] | Prompt tuning provides an efficient way for users to customize Large Language
Models (LLMs) with their private data in the emerging LLM service scenario.
However, the sensitive nature of private data brings the need for privacy
preservation in LLM service customization. Based on prompt tuning, we propose
Privacy-Preserving Prompt Tuning (RAPT), a framework that provides privacy
guarantees for LLM services. RAPT adopts a local privacy setting,
allowing users to privatize their data locally with local differential privacy.
As prompt tuning performs poorly when directly trained on privatized data, we
introduce a novel privatized token reconstruction task that is trained jointly
with the downstream task, allowing LLMs to learn better task-dependent
representations. Despite the simplicity of our framework, experiments show that
RAPT achieves competitive performance across tasks while providing privacy
guarantees against adversaries. | [
"cs.CL",
"cs.CR"
] | false |
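One standard local-DP mechanism of the kind the local privacy setting above relies on is word-level randomized response; the sketch below is illustrative, and the paper's concrete privatization mechanism may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_tokens(token_ids, vocab_size, epsilon=8.0):
    """Word-level randomized response satisfying epsilon-local-DP: each token
    is kept with probability e^eps / (e^eps + V - 1) and otherwise replaced
    by a uniformly random different token."""
    keep_p = np.exp(epsilon) / (np.exp(epsilon) + vocab_size - 1)
    out = []
    for t in token_ids:
        if rng.random() < keep_p:
            out.append(t)
        else:  # uniform over the other V-1 tokens
            r = int(rng.integers(vocab_size - 1))
            out.append(r if r < t else r + 1)
    return out
```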
2305.06404 | 2023-05-10T18:26:42Z | LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits
Siamese-BLOOM | [
"Wen-Yu Hua",
"Brian Williams",
"Davood Shamsi"
] | Text embeddings are useful features for several NLP applications, such as
sentence similarity, text clustering, and semantic search. In this paper, we
present a Low-rank Adaptation with a Contrastive objective on top of 8-bit
Siamese-BLOOM, a multilingual large language model optimized to produce
semantically meaningful word embeddings. The innovation is threefold. First, we
cast BLOOM weights to 8-bit values. Second, we fine-tune BLOOM with a scalable
adapter (LoRA) and 8-bit Adam optimizer for sentence similarity classification.
Third, we apply a Siamese architecture on BLOOM model with a contrastive
objective to ease the scarcity of multilingual labeled data. The experimental
results show that the quality of the learned embeddings from LACoS-BLOOM is
proportional to the number of model parameters and the amount of unlabeled
training data. With the parameter-efficient fine-tuning design, we are able to
run the 7.1-billion-parameter BLOOM end-to-end on a single GPU machine with
32GB memory. Compared to the previous solution, Sentence-BERT, we achieve
significant improvements on both English and multilingual STS tasks. | [
"cs.CL",
"cs.AI"
] | true |
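The 8-bit plus LoRA setup described in the record above maps naturally onto the Hugging Face transformers/peft stack; the checkpoint name, LoRA hyperparameters, and target modules below are illustrative assumptions, and the Siamese contrastive head is omitted.

```python
# Sketch of parameter-efficient fine-tuning of an 8-bit BLOOM with LoRA.
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

model = AutoModel.from_pretrained("bigscience/bloom-7b1",
                                  load_in_8bit=True,   # cast weights to 8 bits
                                  device_map="auto")
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["query_key_value"])  # BLOOM attention proj.
model = get_peft_model(model, lora)  # only the LoRA adapters are trainable
model.print_trainable_parameters()
```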
2305.06416 | 2023-05-10T18:53:51Z | A Method to Automate the Discharge Summary Hospital Course for Neurology
Patients | [
"Vince C. Hartman",
"Sanika S. Bapat",
"Mark G. Weiner",
"Babak B. Navi",
"Evan T. Sholle",
"Thomas R. Campion, Jr"
] | Generation of automated clinical notes has been posited as a strategy to
mitigate physician burnout. In particular, an automated narrative summary of a
patient's hospital stay could supplement the hospital course section of the
discharge summary that inpatient physicians document in electronic health
record (EHR) systems. In the current study, we developed and evaluated an
automated method for summarizing the hospital course section using
encoder-decoder sequence-to-sequence transformer models. We fine-tuned BERT and
BART models and optimized for factuality through constrained beam search,
which we trained and tested using EHR data from patients admitted to the
neurology unit of an academic medical center. The approach demonstrated good
ROUGE scores with an R-2 of 13.76. In a blind evaluation, two board-certified
physicians rated 62% of the automated summaries as meeting the standard of
care, which suggests the method may be useful clinically. To our knowledge,
this study is among the first to demonstrate an automated method for generating
a discharge summary hospital course that approaches a quality level of what a
physician would write. | [
"cs.CL",
"cs.LG"
] | false |
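Constrained beam search of the kind described in the record above is available in the transformers generate API via force_words_ids; the checkpoint, input text, and forced terms below are illustrative, not the study's fine-tuned model.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

notes = "Patient admitted with acute ischemic stroke, treated with tPA ..."
must_appear = ["ischemic stroke"]  # factual anchors from the source EHR notes
force_ids = [tok(t, add_special_tokens=False).input_ids for t in must_appear]

inputs = tok(notes, return_tensors="pt", truncation=True)
out = model.generate(**inputs, num_beams=4, max_length=128,
                     force_words_ids=force_ids)  # constrained beam search
print(tok.decode(out[0], skip_special_tokens=True))
```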
2305.06434 | 2023-05-10T19:56:55Z | Word Grounded Graph Convolutional Network | [
"Zhibin Lu",
"Qianqian Xie",
"Benyou Wang",
"Jian-yun Nie"
] | Graph Convolutional Networks (GCNs) have shown strong performance in learning
text representations for various tasks such as text classification, due to their
expressive power in modeling graph-structured data (e.g., a literature citation
network). Most existing GCNs are limited to dealing with documents included in a
pre-defined graph, i.e., they cannot be generalized to out-of-graph documents. To
address this issue, we propose to transform the document graph into a word
graph, to decouple data samples (i.e., documents in training and test sets) and
a GCN model by using a document-independent graph. Such a word-level GCN can
therefore naturally make inferences on out-of-graph documents in an inductive
way. The proposed Word-level Graph (WGraph) can not only implicitly learn word
representations from commonly-used word co-occurrences in corpora, but also
incorporate extra global semantic dependencies derived from inter-document
relationships (e.g., literature citations). An inductive Word-grounded Graph
Convolutional Network (WGCN) is proposed to learn word and document
representations based on WGraph in a supervised manner. Experiments on text
classification with and without citation networks show that the proposed
WGCN model outperforms existing methods in terms of effectiveness and
efficiency. | [
"cs.CL",
"cs.LG"
] | false |
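A toy sketch of the word-graph idea from the record above: a co-occurrence adjacency over words, one normalized GCN propagation step, and a document representation pooled from its words' node states, which is what makes out-of-graph documents tractable. The toy graph, plain co-occurrence counts (PMI weighting is also common), and mean pooling are illustrative assumptions.

```python
import torch

# Toy word co-occurrence graph over a 4-word vocabulary, e.g. built from
# sliding-window co-occurrence counts.
vocab = ["graph", "neural", "network", "text"]
A = torch.tensor([[0., 2., 1., 0.],
                  [2., 0., 3., 1.],
                  [1., 3., 0., 2.],
                  [0., 1., 2., 0.]])
A_hat = A + torch.eye(4)                        # add self-loops
D_inv_sqrt = torch.diag(A_hat.sum(1) ** -0.5)   # symmetric normalization
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

X = torch.randn(4, 8)                           # word node features
W = torch.randn(8, 8)                           # GCN layer weights
H = torch.relu(A_norm @ X @ W)                  # one GCN propagation step

# A document representation is pooled from its words' node states, so any
# new document composed of known words can be represented inductively.
doc = H[[0, 2, 3]].mean(0)                      # doc = {graph, network, text}
```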
2305.11068 | 2023-05-10T13:19:18Z | ORKG-Leaderboards: A Systematic Workflow for Mining Leaderboards as a
Knowledge Graph | [
"Salomon Kabongo",
"Jennifer D'Souza",
"Sören Auer"
] | The purpose of this work is to describe the Orkg-Leaderboard software
designed to extract leaderboards defined as Task-Dataset-Metric tuples
automatically from large collections of empirical research papers in Artificial
Intelligence (AI). The software can support both the main workflows of
scholarly publishing, viz. as LaTeX files or as PDF files. Furthermore, the
system is integrated with the Open Research Knowledge Graph (ORKG) platform,
which fosters the machine-actionable publishing of scholarly findings. Thus the
system output, when integrated within the ORKG's supported Semantic Web
infrastructure of representing machine-actionable 'resources' on the Web,
enables: 1) broadly, the integration of empirical results of researchers across
the world, thus enabling transparency in empirical research, with the potential
to also be complete contingent on the underlying data source(s) of
publications; and 2) specifically, enables researchers to track the progress in
AI with an overview of the state-of-the-art (SOTA) across the most common AI
tasks and their corresponding datasets via dynamic ORKG frontend views
leveraging tables and visualization charts over the machine-actionable data.
Our best model achieves performances above 90% F1 on the \textit{leaderboard}
extraction task, thus proving Orkg-Leaderboards a practically viable tool for
real-world usage. Going forward, in a sense, Orkg-Leaderboards transforms the
leaderboard extraction task into an automated digitization task, which has long
been a crowdsourced endeavor in the community. | [
"cs.CL",
"cs.AI"
] | false |
2306.01741 | 2023-05-10T10:14:16Z | GPT Models Meet Robotic Applications: Co-Speech Gesturing Chat System | [
"Naoki Wake",
"Atsushi Kanehira",
"Kazuhiro Sasabuchi",
"Jun Takamatsu",
"Katsushi Ikeuchi"
] | This technical paper introduces a chatting robot system that utilizes recent
advancements in large-scale language models (LLMs) such as GPT-3 and ChatGPT.
The system is integrated with a co-speech gesture generation system, which
selects appropriate gestures based on the conceptual meaning of speech. Our
motivation is to explore ways of utilizing the recent progress in LLMs for
practical robotic applications, which benefits the development of both chatbots
and LLMs. Specifically, it enables the development of highly responsive chatbot
systems by leveraging LLMs and adds visual effects to the user interface of
LLMs as added value. The source code for the system is available on
GitHub for our in-house robot
(https://github.com/microsoft/LabanotationSuite/tree/master/MSRAbotChatSimulation)
and GitHub for Toyota HSR
(https://github.com/microsoft/GPT-Enabled-HSR-CoSpeechGestures). | [
"cs.RO",
"cs.CL"
] | true |