title | abstract | introduction
---|---|---|
Weers_Masked_Autoencoding_Does_Not_Help_Natural_Language_Supervision_at_Scale_CVPR_2023 | Abstract
Self supervision and natural language supervision have
emerged as two exciting ways to train general purpose im-
age encoders which excel at a variety of downstream tasks.
Recent works such as M3AE [ 31] and SLIP [ 63] have sug-
gested that these approaches can be effectively combined,
but most notably their results use small ( <20M examples)
pre-training datasets and don’t effectively reflect the large-
scale regime ( >100M samples) that is commonly used for
these approaches. Here we investigate whether a similar
approach can be effective when trained with a much larger
amount of data. We find that a combination of two state-of-the-art approaches, masked auto-encoders (MAE) [37] and contrastive language-image pre-training (CLIP) [68], provides a benefit over CLIP when trained on a corpus of
11.3M image-text pairs, but little to no benefit (as evaluated
on a suite of common vision tasks) over CLIP when trained
on a large corpus of 1.4B images. Our work provides some
much needed clarity into the effectiveness (or lack thereof)
of self supervision for large-scale image-text training.
| 1. Introduction
Large scale pretraining has become a powerful tool in the
arsenal of computer vision researchers to produce state of
the art results across a wide variety of tasks [39,88,95,98].
However, when pre-training on tens of millions to billions
of images it is difficult to rely on standard supervised meth-
ods to train models, as datasets of this size often lack re-
liable labels. In the presence of these massive but largely
under-curated datasets, two general classes of methods to
train general purpose image encoders have emerged:
1. Self Supervised techniques that learn visual represen-
tations from the image data alone [ 11,36]
2. Natural Language Supervised methods that utilize
paired free-form text data to learn visual representa-
tions [ 43,69]
Due to the unique strengths and weaknesses of each approach1, a recent flurry of work has introduced methods that
combine both forms of supervision [ 31,56,64,78] to vary-
ing degrees of success. While each of these methods estab-
lishes some regime where the additional supervision helps,
none of these “joint-supervision” methods advance state of
the art in any meaningful way. Additionally, to our knowl-
edge none of these methods have shown comparative results
at the scale many large scale vision models are currently
trained at ( >100M examples) [ 43,66,69,73,80,82,98]. Fur-
thermore, methods that use both forms of supervision start with the presumption that the additional supervision is helpful, and either often lack clean ablations or lack evaluations
in a “high accuracy” regime—leading to further confusion
regarding whether a combination of these methods can ac-
tually improve the state of the art. To clarify this issue, in
this work, we investigate a simple question:
Does a combination of self supervision and natu-
ral language supervision actually lead to higher
quality visual representations?
In order to answer this, we first introduce a straight-
forward baseline approach that combines standard self su-
pervision and language supervision techniques. We com-
bine masked auto-encoders (MAE) and contrastive lan-
guage image-pretraining (CLIP) to make MAE-CLIP. We
then present a careful study of the performance of MAE,
M3AE, CLIP and MAE-CLIP across a wide variety of tasks
in two distinct regimes: a “low-sample”2 11.3 million
example regime and a “high-sample” 1.4 billion example
regime. We train self-supervised and language-supervised
methods using the same pre-training datasets under the as-
sumption that we have no knowledge about downstream
tasks. Our experiments show:
1. In the low sample size regime, without changing the final pooling operation in the network, we observe a large performance improvement, namely 6% on ImageNet [18] and 4% on VTAB [105]. However, when we modify the pooling operation, the improvement substantially decreases to around 1% on both ImageNet and VTAB.
2. In the high sample size regime, there is virtually no difference in performance between MAE-CLIP and CLIP across ImageNet, VTAB, and VQA tasks.
1 Self-supervised methods can learn representations without labels, but natural language supervision learns better representations; natural-language-supervised methods, however, rely on the quality of captions.
2 We note that what “low sample” means has changed substantially over the last few years.
We believe our work is the first careful study of this form
and contextualizes recent progress in both self-supervision
and natural language supervision.
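As a rough illustration of the kind of joint objective studied here (a sketch only: the encoder and decoder interfaces, the assumed `random_mask` helper, and the loss weight `lambda_mae` are illustrative assumptions, not the authors' implementation), combining a CLIP loss with an MAE loss over a shared image encoder might look like:

```python
import torch
import torch.nn.functional as F

def mae_clip_loss(images, texts, image_encoder, text_encoder, mae_decoder,
                  lambda_mae=1.0, temperature=0.07):
    """Hypothetical joint objective: CLIP-style contrastive loss plus an
    MAE-style masked-reconstruction loss sharing the image encoder."""
    # Contrastive branch: match normalized image and text embeddings.
    img_emb = F.normalize(image_encoder(images), dim=-1)
    txt_emb = F.normalize(text_encoder(texts), dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(images.shape[0], device=logits.device)
    clip_loss = 0.5 * (F.cross_entropy(logits, targets) +
                       F.cross_entropy(logits.t(), targets))

    # Masked auto-encoding branch: reconstruct the masked image patches.
    # `random_mask` is an assumed helper returning visible tokens, a binary
    # patch mask, and the ground-truth patch pixels.
    visible_tokens, mask, patch_targets = image_encoder.random_mask(images)
    recon = mae_decoder(visible_tokens)
    mae_loss = (((recon - patch_targets) ** 2).mean(dim=-1) * mask).sum() / mask.sum()

    return clip_loss + lambda_mae * mae_loss
```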
The rest of the paper is organized as follows: In Sec-
tion 2, we cover related work in the areas of self supervision
and natural language supervision. In Section 3, we give an
overview of the baseline methods we study, MAE, M3AE,
CLIP and our new baseline MAE-CLIP. Then we present
and analyse our small scale and large scale experimental
findings in Sections 4 and 5. Finally, we discuss potential explanations for our findings and some future work in Section 6.
|
Wang_SmartAssign_Learning_a_Smart_Knowledge_Assignment_Strategy_for_Deraining_and_CVPR_2023 | Abstract
Existing methods mainly handle single weather types.
However, the connections of different weather conditions at
deep representation level are usually ignored. These con-
nections, if used properly, can generate complementary rep-
resentations for each other to make up insufficient train-
ing data, obtaining positive performance gains and better
generalization. In this paper, we focus on the very corre-
lated rain and snow to explore their connections at deep
representation level. Because sub-optimal connections may cause negative effects, another issue arises when rain and snow are handled in a multi-task learning way: how to find an optimal connection strategy that simultaneously improves deraining and desnowing performance. To build the desired connection, we propose a smart knowledge assignment strategy, called SmartAssign, to optimally assign the knowledge
learned from both tasks to a specific one. In order to fur-
ther enhance the accuracy of knowledge assignment, we
propose a novel knowledge contrast mechanism, so that
the knowledge assigned to different tasks preserves better
uniqueness. Since the inherited inductive biases usually limit the modelling ability of CNNs, we introduce a novel transformer block to constitute the backbone of our network, effectively combining long-range context dependency and local image details. Extensive experiments on seven benchmark datasets verify that the proposed SmartAssign explores effective connections between rain and snow, and noticeably improves the performance of both deraining and desnowing.
The implementation code will be available at https://
gitee.com/mindspore/models/tree/master/
research/cv/SmartAssign .
| 1. Introduction
Bad weather types, such as haze, rain, and snow in-
evitably degrade the visual quality of images and meanwhile decrease the performance of other downstream computer
Figure 1. Given challenging rainy (with blurry rain streaks) and snowy (with high bright snowflakes) images (panels: Input, Restormer [58], Ours; Input, HDCWNet [7], Ours), the proposed method effectively removes the artifacts of rain and snow simultaneously, achieving better results than the state-of-the-art approaches. This is attributed to the unique knowledge which captures accurate features of rain/snow as well as the common knowledge boosting the generalization of our model to real data.
vision tasks, e.g., autonomous driving [57]. Existing meth-
ods mainly focus on single weather types, e.g., deraining
[14,19,26,45,46,48–50,56,60], dehazing [5,10,31,41,54],
and desnowing [6, 7, 32, 49]. However, these methods usu-
ally ignore the connections among these weather types,
which, if used properly, may simultaneously improve the
performance of multiple image recovery tasks.
Some methods attempt to explore the connections among different weather types by handling them with a unified architecture and one set of pre-trained weights, e.g., [8, 25, 27]. However, they neglect the differences among multiple weather types, so the uniqueness belonging to a single weather type may harm the performance of other weather recovery tasks. Therefore, the performances of such unified networks
are usually lower than the ones for single weather types [8].
In this paper, we focus on the very similar rain and snow
and develop a novel Multi-Task Learning (MTL) strategy
to explore their connections at deep representation level,
while avoiding the uniqueness of one weather type
from damaging the performance of the other weather re-
covery task. Specifically, our goal is to accurately find the
representations (i.e., connections) which can be shared by
deraining and desnowing to simultaneously enhance both
their performance, and meanwhile determine the exclusive
representations (i.e., uniqueness) for one task to specially
promote its own performance as well as avoid such unique-
ness from damaging the other task. To facilitate the descrip-
tion of our method, we define the deep representation of a
single network channel as a knowledge atom . All the knowl-
edge atoms constitute the knowledge learned by networks.
Similar to conventional MTL methods, we also use a backbone encoder E(·) to learn the whole image-recovery knowledge simultaneously from rainy and snowy images, and two task-targeted decoders Ddr(·) and Dds(·) follow in parallel to remove rain and snow separately. Con-
ventional MTL takes the whole knowledge as the input for
the subsequent decoders. Though such a mechanism makes
the best of the connections between both tasks, the in-
fluences of the uniqueness of single tasks are neglected,
i.e., the uniqueness of rain may harm the performance of
desnowing, and vice versa. Instead, we propose a novel
Gated Knowledge Filtering Module (GKFM) to select op-
timal knowledge atoms for both tasks via a highly smart
strategy, so that the connections between both tasks are suf-
ficiently explored and the uniqueness of single tasks is prop-
erly used. To coordinate with GKFM, we design a Task-
targeted Knowledge FeedForward mechanism (TKFF) to let
every knowledge atom flow to its related tasks. Through our
GKFM and TKFF, we realize a smart knowledge assign-
ment, in which both tasks adaptively explore their connec-
tions and uniqueness via gradient backward-propagation.
Hence, we term our MTL mechanism as SmartAssign .
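As a loose sketch of this gated, per-channel knowledge routing idea (the gate parameterization below is an assumption for illustration, not the exact GKFM/TKFF design in the paper):

```python
import torch
import torch.nn as nn

class GatedKnowledgeRouter(nn.Module):
    """Hypothetical per-channel gating: each knowledge atom (channel) of the
    shared encoder features is softly routed to the deraining decoder, the
    desnowing decoder, or both, via learned task-specific gates."""
    def __init__(self, channels):
        super().__init__()
        # One learnable gate per channel and per task (rain / snow).
        self.gate_rain = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.gate_snow = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, shared_feat):
        # Sigmoid gates in [0, 1] select which atoms each task receives.
        rain_feat = shared_feat * torch.sigmoid(self.gate_rain)
        snow_feat = shared_feat * torch.sigmoid(self.gate_snow)
        return rain_feat, snow_feat

# Usage sketch: feats = encoder(x); rain_in, snow_in = router(feats)
# derained = decoder_rain(rain_in); desnowed = decoder_snow(snow_in)
```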
In order to further enhance the accuracy of knowledge
assignment, i.e., toward optimally exploring the connec-
tions and uniqueness of deraining and desnowing, we in-
troduce a novel knowledge contrast, making the same kind
of knowledge atoms closer together, and the ones belonging to
different kinds more discriminative under a similarity met-
ric. In this process, all the knowledge atoms are transformed
into a new low-dimension feature space by a Dimension Re-
duction Module (DRM) to avoid model collapse when op-
erating on the original high-dimension knowledge atoms.
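One plausible form of such a knowledge contrast, written here only as an illustration (the paper's exact loss may differ), is a supervised-contrastive objective over the reduced atoms:

```latex
\mathcal{L}_{\mathrm{contrast}}
= -\sum_{i}\frac{1}{|\mathcal{P}(i)|}\sum_{j\in\mathcal{P}(i)}
\log\frac{\exp\!\big(\mathrm{sim}(g(k_i),g(k_j))/\tau\big)}
         {\sum_{m\neq i}\exp\!\big(\mathrm{sim}(g(k_i),g(k_m))/\tau\big)},
```

where k_i is a knowledge atom, g(·) is the dimension reduction module (DRM), P(i) is the set of atoms of the same kind as k_i, sim is a similarity metric (e.g., cosine), and τ is a temperature.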
Currently, CNNs are still the mainstream choice for
image recovery. However, the inherited inductive biases
limit their modelling capacity for long-range context depen-
dency. Though they can also obtain a large receptive field
by stacking a deep architecture, such indirect modelling is indeed inferior to that of a transformer, which models both
short and long range dependency directly via self-attention.
In this paper, we adopt transformer blocks to constitute our
backbone encoder E(·). Usually, a transformer needs sufficient training pairs to ensure good performance. Hence,
our transformer block introduces a gated CNN branch to
complement limited training data via the inductive biases.
Moreover, the locality of CNN helps to recover degraded
image details and the gated operation is used to reduce re-
dundant features caused by the combination of CNN and
transformer. Figure 1 gives two examples of deraining and
desnowing. By contrast, our method obtains better image
recovery quality on both tasks than SOTA methods.
Our contributions are summarized in the following:
• We propose a novel knowledge assignment strategy,
i.e., SmartAssign, to excavate the connections and
uniqueness of rain and snow, so that their connections
are used to enhance the performance of both tasks and
the uniqueness is applied to boost the corresponding task while being prevented from damaging the other task.
• We propose a novel knowledge contrast mechanism to
further boost the accuracy of knowledge assignment,
in which a dimension reduction module (DRM) is in-
troduced to stabilize the training of our model.
• We propose a novel transformer block to make the best
use of the superiority of self-attention and convolution,
in which gated operations are introduced to alleviate
the feature redundancy.
|
Wang_PDPPProjected_Diffusion_for_Procedure_Planning_in_Instructional_Videos_CVPR_2023 | Abstract
In this paper, we study the problem of procedure plan-
ning in instructional videos, which aims to make goal-
directed plans given the current visual observations in un-
structured real-life videos. Previous works cast this prob-
lem as a sequence planning problem and leverage either
heavy intermediate visual observations or natural language
instructions as supervision, resulting in complex learning
schemes and expensive annotation costs. In contrast, we
treat this problem as a distribution fitting problem. In this
sense, we model the whole intermediate action sequence
distribution with a diffusion model (PDPP), and thus trans-
form the planning problem to a sampling process from this
distribution. In addition, we remove the expensive inter-
mediate supervision, and simply use task labels from in-
structional videos as supervision instead. Our model is a
U-Net based diffusion model, which directly samples ac-
tion sequences from the learned distribution with the given
start and end observations. Furthermore, we apply an ef-
ficient projection method to provide accurate conditional
guides for our model during the learning and sampling pro-
cess. Experiments on three datasets with different scales
show that our PDPP model can achieve the state-of-the-
art performance on multiple metrics, even without the task
supervision. Code and trained models are available at
https://github.com/MCG-NJU/PDPP.
| 1. Introduction
Instructional videos [1,31,38] are strong knowledge car-
riers, which contain rich scene changes and various actions.
People watching these videos can learn new skills by fig-
uring out what actions should be performed to achieve the
desired goals. Although this seems to be natural for hu-
mans, it is quite challenging for AI agents. Training a model
that can learn how to make action plans to transform from
the start state to goal is crucial for the next-generation AI
system as such a model can analyze complex human be-
B: Corresponding author (lmwang@nju.edu.cn).
Figure 1. Procedure planning example (panels: (a) supervised by language instructions, (b) supervised by visual observations, (c) supervised by task class (ours); task: Make Meringue; legend: seen observations, intermediate supervision, predicted action). Given a start observation ostart and a goal state ogoal, the model is required to generate a sequence of actions that can transform ostart to ogoal. Previous approaches rely on heavy intermediate supervision during training, while our model only needs the task class labels (bottom row).
haviours and help people with goal-directed problems like
cooking or repairing items. Nowadays the computer vision
community is paying growing attention to the instructional
video understanding [4, 8, 9, 24, 37]. Among them, Chang
et al. [4] proposed a problem named as procedure planning
in instructional videos, which requires a model to produce
goal-directed action plans given the current visual observa-
tion of the world. Different with traditional procedure plan-
ning problem in structured environments [12, 29], this task
deals with unstructured environments and thus forces the
model to learn structured and plannable representations in
real-life videos. We follow this work and tackle the proce-
dure planning problem in instructional videos. Specifically,
given the visual observations at start and end time, we need
to produce a sequence of actions which transform the envi-
ronment from start state to the goal state, as shown in Fig. 1.
Previous approaches for procedure planning in instruc-
tional videos often treat it as a sequence planning prob-
lem and focus on predicting each action accurately. Most
works rely on a two-branch autoregressive method to pre-
dict the intermediate states and actions step by step [2,4,30].
Such models are complex and tend to accumulate errors
during the planning process, especially for long sequences.
Recently, Zhao et al. [36] proposed a single branch non-
autoregressive model based on transformer [33] to predict
all intermediate steps in parallel. To obtain a good per-
formance, they used a learnable memory bank in the trans-
former decoder, augmented their model with an extra gen-
erative adversarial framework [13] and applied a Viterbi
post-processing method [34]. This method brought multiple learning objectives, complex training schemes and a tedious inference process. Instead, we treat procedure planning
as a distribution fitting problem and planning is solved with
a sampling process. We aim to directly model the joint dis-
tribution of the whole action sequence in instructional video
rather than every discrete action. From this perspective, we can use a simple MSE loss to optimize our generative model and generate action sequence plans in one shot with a sampling process, which results in fewer learning objectives and simpler training schemes.
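To make this concrete, here is a minimal, hypothetical sketch of such a training step (the tensor layout, the model signature, and the choice of regressing the clean sequence rather than the noise are illustrative assumptions, not the exact PDPP implementation):

```python
import torch
import torch.nn.functional as F

def pdpp_style_training_step(model, actions, obs_start, obs_goal, task_label,
                             alphas_cumprod):
    """Hypothetical denoising-diffusion step for procedure planning: noise the
    action sequence, keep the conditions clean, and regress the clean
    sequence with a plain MSE loss."""
    B, T, _ = actions.shape
    # Sample a random diffusion timestep per sequence.
    t = torch.randint(0, alphas_cumprod.shape[0], (B,), device=actions.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1)

    # Forward process: corrupt only the action part with Gaussian noise.
    noise = torch.randn_like(actions)
    noisy_actions = a_bar.sqrt() * actions + (1.0 - a_bar).sqrt() * noise

    # Start/goal observations act as clean conditions, tiled over time and
    # concatenated with the noisy actions.
    cond = torch.cat([obs_start, obs_goal], dim=-1)          # (B, 2*D_obs)
    cond = cond.unsqueeze(1).expand(B, T, cond.shape[-1])    # (B, T, 2*D_obs)
    x = torch.cat([noisy_actions, cond], dim=-1)

    pred_actions = model(x, t, task_label)   # e.g. a U-Net over the sequence
    return F.mse_loss(pred_actions, actions)
```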
For supervision in training, in addition to the action se-
quence, previous methods often require heavy intermediate
visual [2,4,30] or language [36] annotations for their learn-
ing process. In contrast, we only use task labels from in-
structional videos as a condition for our learning (as shown
in Fig. 1), which could be easily obtained from the key-
words or captions of videos and requires much less labeling
cost. Another reason is that task information is closely re-
lated to the action sequences in a video. For example, in a
video of jacking up a car, the probability that the action add sugar appears in this process is nearly zero.
Modeling the uncertainty in procedure planning is also
an important factor that we need to consider. That is, there
might be more than one reasonable plan sequence to transform from the given start state to the goal state. For example, changing the order of add sugar and add butter in the making cake process will not affect the final result. So action se-
quences can vary even with the same start and goal states.
To address this problem, we consider adding randomness to
our distribution-fitting process and perform training with a
diffusion model [18, 26]. Solving procedure planning prob-
lem with a diffusion model has two main benefits. First, a
diffusion model changes the goal distribution to a random
Gaussian noise by adding noise slowly to the initial data
and learns the sampling process at inference time as an iter-
ative denoising procedure starting from a random Gaussian
noise. So randomness is involved both for training and sam-
pling in a diffusion model, which is helpful to model the un-
certain action sequences for procedure planning. Second, it
is convenient to apply conditional diffusion process with the
given start and goal observations based on diffusion models,
so we can model the procedure planning problem as a con-
ditional sampling process with a simple training scheme. In
this work, we concatenate conditions and action sequences
together and propose a projected diffusion model to perform
the conditional diffusion process.
Contributions. To sum up, the main contributions of
this work are as follows: a) We cast procedure planning as a conditional distribution-fitting problem and model
the joint distribution of the whole intermediate action se-
quence as our learning objective, which can be learned with
a simple training scheme. b) We introduce an efficient ap-
proach for training the procedure planner, which removes
the supervision of visual or language features and relies on
task supervision instead. c) We propose a novel projected
diffusion model (PDPP) to learn the distribution of action
sequences and produce all intermediate steps at one shot.
We evaluate our PDPP on three instructional video datasets
and achieve the state-of-the-art performance across differ-
ent prediction time horizons. Note that our model can still
achieve excellent results even if we remove the task super-
vision and use the action labels only.
|
Wu_High-Fidelity_3D_Face_Generation_From_Natural_Language_Descriptions_CVPR_2023 | Abstract
Synthesizing high-quality 3D face models from natural
language descriptions is very valuable for many applica-
tions, including avatar creation, virtual reality, and telep-
resence. However, little research ever tapped into this task.
We argue the major obstacle lies in 1) the lack of high-
quality 3D face data with descriptive text annotation, and
2) the complex mapping relationship between descriptive
language space and shape/appearance space. To solve
these problems, we build the DESCRIBE3D dataset, the first
large-scale dataset with fine-grained text descriptions for
text-to-3D face generation task. Then we propose a two-
stage framework to first generate a 3D face that matches
the concrete descriptions, then optimize the parameters in
the 3D shape and texture space with abstract description
to refine the 3D face model. Extensive experimental re-
sults show that our method can produce a faithful 3D face
that conforms to the input descriptions with higher accu-
racy and quality than previous methods. The code and DE-
SCRIBE3D dataset are released at https://github.com/zhuhao-nju/describe3d.
| 1. Introduction
3D faces are highly required in many cutting-edge tech-
nologies like digital humans, telepresence, and movie spe-
cial effects, while creating a high-fidelity 3D face is very
complex and requires vast time from an experienced mod-
eler. Recently, many efforts have been devoted to the synthesis of
text-to-image and image-to-3D, but they lack the ability to
synthesize 3D faces given an abstract description. However,
there is still no reliable solution to synthesize high-quality
3D faces from descriptive texts in natural language.
We consider that the difficulties of synthesizing high-quality 3D face models from natural language descriptions are two-fold. Firstly, there is still no available fine-grained
dataset that contains 3D face models and corresponding text
descriptions in the research community, which is crucial for
training learning-based 3D generators. Beyond that, it is
difficult to leverage massive 2D Internet images to learn
Figure 1. Given a text describing the appearance (left), our method can synthesize high-quality 3D faces (middle) containing 3D mesh and textures. The resulting model can be easily processed into a rigged face with hair and accessories (right). The dark blue texts indicate concrete descriptions and the brown texts indicate abstract descriptions, and similarly hereinafter.
high-quality text-to-3D mapping. Secondly, cross-modal
mapping from texts to 3D models is non-trivial. Though
the progress made in text-to-image synthesis is instructive,
the problem of mapping texts to 3D faces is even more chal-
lenging due to the complexity of 3D representation.
In this work, we aim at tackling the task of high-fidelity
3D face generation from natural text descriptions from
the above two perspectives. We first build a 3D-face-text
dataset (named DESCRIBE3D), which contains 1,627 high-
quality 3D faces from HeadSpace dataset [6] and FaceScape
dataset [48, 55], and fine-grained manually-labeled facial
features. The provided annotations include 25 facial at-
tributes, each of which contains 3 to 8 options describing
the facial feature. Our dataset covers various races and ages
and is delicate in 3D shape and texture. We then propose
a two-stage synthesis pipeline, which consists of a concrete
synthesis stage mapping the text space to the 3D shape and
texture space, and an abstract synthesis stage refining the
3D face with a prompt learning strategy. The mapping for
different facial features is disentangled and the diversity of
the generative model can be controlled by the additional
input of random seeds. As shown in Figure 1, our pro-
posed model can take any word description or combination
of phrases as input, and then generate an output of a fine-
textured 3D face with appearances matching the descrip-
tion. Extensive experiments further validate that the con-
crete synthesis can generate a detailed 3D face that matches
the fine-grained descriptive texts well, and the abstract syn-
thesis enables the network to synthesize abstract features
like “wearing makeup” or “looks like Tony Stark”.
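The abstract-synthesis stage can be pictured, very roughly, as CLIP-guided optimization of the face parameters; everything below (the `render_fn` differentiable renderer, the `clip_model` handle, and the cosine-distance objective) is an assumed placeholder for illustration, not the paper's actual interface:

```python
import torch

def refine_with_abstract_text(face_params, render_fn, clip_model, text_tokens,
                              steps=200, lr=0.01):
    """Hypothetical abstract-synthesis loop: nudge the 3D shape/texture
    parameters so a frozen CLIP model judges the rendered face to match the
    abstract description (e.g., "wearing makeup")."""
    with torch.no_grad():
        text_feat = clip_model.encode_text(text_tokens)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    params = face_params.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        # `render_fn` is assumed to be a differentiable renderer producing
        # CLIP-ready images (e.g. 224x224, normalized) from the parameters.
        image = render_fn(params)
        img_feat = clip_model.encode_image(image)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        loss = 1.0 - (img_feat * text_feat).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return params.detach()
```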
In summary, our contributions are as follows:
• We explore a new topic of constructing a high-quality
3D face model from natural descriptive texts and pro-
pose a baseline method to achieve such a goal.
• A new dataset, DESCRIBE3D, is established with de-
tailed 3D faces and corresponding fine-grained de-
scriptive annotations. The dataset will be released to
the public for research purposes.
• The reliable mapping from the text embedding space
to the 3D face parametric space is learned by intro-
ducing the descriptive code space as an intermediary,
which forms the core of our concrete synthesis mod-
ule. Region-specific triplet loss and weighted ℓ1loss
further boost the performance.
• Abstract learning based on CLIP is introduced to fur-
ther optimize the parametric 3D face, enabling our re-
sults to conform with abstract descriptions.
|
Xie_Blemish-Aware_and_Progressive_Face_Retouching_With_Limited_Paired_Data_CVPR_2023 | Abstract
Face retouching aims to remove facial blemishes, while
at the same time maintaining the textural details of a given input image. The main challenge lies in distinguishing
blemishes from the facial characteristics, such as moles.
Training an image-to-image translation network with pixel-
wise supervision suffers from the problem of expensive
paired training data, since professional retouching need-
s specialized experience and is time-consuming. In this
paper, we propose a Blemish-aware and Progressive Face
Retouching model, which is referred to as BPFRe. Our
framework can be partitioned into two manageable stages
to perform progressive blemish removal. Specifically, an
encoder-decoder-based module learns to coarsely remove
the blemishes at the first stage, and the resulting interme-
diate features are injected into a generator to enrich lo-
cal detail at the second stage. We find that explicitly sup-
pressing the blemishes can contribute to an effective col-
laboration among the components. Toward this end, we
incorporate an attention module, which learns to infer a
blemish-aware map and further determine the correspond-
ing weights, which are then used to refine the intermediate
features transferred from the encoder to the decoder, and
from the decoder to the generator. Therefore, BPFRe is able
to deliver significant performance gains on a wide range of
face retouching tasks. It is worth noting that we reduce the
dependence of BPFRe on paired training samples by impos-
ing effective regularization on unpaired ones.
| 1. Introduction
With the development of social media, there is an in-
creased demand for facial image beautification from selfies
to portraits and beyond. Facial skin retouching aims to re-
Corresponding author.
Figure 1. Visual comparison of the activation maps produced by a generic attention module [47] (second row) and the blemish-aware module used in BPFRe (third row), given a number of images of faces with blemishes (top row). BPFRe is capable of applying attention on the regions close to the manual retouching regions (bottom row).
move any unexpected blemishes from facial images, while
preserving the stable characteristics associated with face
identity [2, 38, 41]. The main challenge is due to the wide
range of blemishes, ranging from small spots to severe acne. Conventional methods are based on blind smoothing,
such that the facial characteristics, such as moles and freck-
les, may be removed. Professional face retouching can be
expensive and needs specialized experience, which impedes
the collection of large-scale paired data for model training.
Deep neural networks have been widely used for image-
to-image translation, especially based on Generative Adver-
sarial Networks (GANs) [6, 12, 22]. The translation per-
formance has witnessed rapid progress in style transfer [7],
image restoration [45, 48], image inpainting [47], and so
on. The existing models are typically based on an encoder-
decoder architecture. The source image is encoded into a
latent representation, based on which a task-specific trans-
formation is performed by the decoder. Different from the
above image enhancement tasks, the regions needed to be
retouched may be small, and most of the pixels are un-
changed in this case. Generic encoder-decoder-based trans-
lation methods can preserve irrelevant content but tend to
overlook large blemishes and produce over-smoothed im-
ages. Considering that StyleGAN-based methods have the
capability of rendering the complex textural details [1, 11],
we design a two-stage progressive face retouching frame-
work to make use of the advantage of these types of archi-
tectures, and learn the blemish-aware attention (as shown in
Figure 1) to guide the image rendering process.
More specifically, we propose a Blemish-aware Progres-
sive Face Retouching model (BPFRe), which consists of two stages: An encoder-decoder architecture is applied at the
first stage to perform coarse retouching. The intermediate
features from the encoder are integrated into the decoder via
skip connections for better reconstruction of image content.
At the second stage, we modify the generator architecture
of StyleGAN [22] to operate on the multi-scale intermedi-
ate features of the decoder and render an image with finer
details. We consider that blemish removal cannot be ef-
fectively achieved by simply transferring the intermediate
features between the components, since there is no mecha-
nism to suppress the blemishes before being passed to the
next components. To address this issue, we incorporate two blemish-aware attention modules between the encoder
and decoder, and between the decoder and generator, re-
spectively. This design enables progressive retouching by
leveraging and refining the information from the previous
components. In addition to the paired training images, we
use the unpaired ones to optimize the discriminator, which
in turn guides the generator to synthesize realistic details. We perform extensive experiments to qualitatively and
quantitatively assess BPFRe on both standard benchmarks
and data in the wild.
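As an illustrative sketch only (the attention parameterization here is an assumption, not the exact module in BPFRe), the blemish-aware refinement of features transferred between components can be pictured as:

```python
import torch
import torch.nn as nn

class BlemishAwareRefine(nn.Module):
    """Hypothetical refinement: infer a single-channel blemish map from the
    incoming features and down-weight responses at blemish locations before
    passing the features to the next component."""
    def __init__(self, channels):
        super().__init__()
        self.to_map = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels // 4, 1, 1), nn.Sigmoid())

    def forward(self, feat):
        blemish_map = self.to_map(feat)        # ~1 where a blemish is inferred
        refined = feat * (1.0 - blemish_map)   # suppress blemish responses
        return refined, blemish_map
```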
The main contributions of this work are summarized as
follows: (a) To deal with a wide range of facial blemishes,
we exploit the merits of both encoder-decoder and genera-
tor architectures by seamlessly integrating them into a uni-
fied framework to progressively remove blemishes. (b) A
blemish-aware attention module is incorporated to enhance
the collaboration between the components by refining the
intermediate features that are transferred among the com-
ponents. (c) We leverage unpaired training data to regular-
ize the proposed framework, which effectively reduces the
dependence on paired training data. |
Walz_Gated_Stereo_Joint_Depth_Estimation_From_Gated_and_Wide-Baseline_Active_CVPR_2023 | Abstract
We propose Gated Stereo, a high-resolution and long-
range depth estimation technique that operates on active
gated stereo images. Using active and high dynamic range
passive captures, Gated Stereo exploits multi-view cues
alongside time-of-flight intensity cues from active gating.
To this end, we propose a depth estimation method with a
monocular and stereo depth prediction branch which are
combined in a final fusion stage. Each block is super-
vised through a combination of supervised and gated self-
supervision losses. To facilitate training and validation, we
acquire a long-range synchronized gated stereo dataset for
automotive scenarios. We find that the method achieves an
improvement of more than 50 % MAE compared to the next
best RGB stereo method, and 74 % MAE compared to existing monoc-
ular gated methods for distances up to 160 m. Our code,
models and datasets are available here1.
| 1. Introduction
Long-range high-resolution depth estimation is critical
for autonomous drones, robotics, and driver assistance sys-
tems. Most existing fully autonomous vehicles strongly rely
on scanning LiDAR for depth estimation [51, 52]. While
these sensors are effective for obstacle avoidance, the measurements are often not as semantically rich as RGB images. LiDAR sensing also has to make trade-offs due to physical limitations, especially beyond 100 meters range, including range versus eye-safety and spatial reso-
lution. Although recent advances in LiDAR sensors such
as, MEMS scanning [60] and photodiode technology [58]
have drastically reduced the cost and led to a number of sen-
sor designs with ≈100-200 scanlines, these are still significantly lower resolutions than modern HDR megapixel camera sensors with a vertical resolution of more than ≈5000
pixels. However, extracting depth from RGB images with
monocular methods is challenging as existing estimation
methods suffer from a fundamental scale ambiguity [16].
Stereo-based depth estimation methods resolve this issue
but need to be well calibrated and often fail on texture-less
regions and in low-light scenarios when no reliable features, and hence triangulation candidates, can be found.
1 https://light.princeton.edu/gatedstereo/
To overcome the limitations of existing scanning LiDAR
and RGB stereo depth estimation methods, a body of work
has explored gated imaging [2, 7–9, 22,27]. Gated im-
agers integrate the transient response from flash-illuminated
scenes in broad temporal bins, see Section 3for more de-
tails. This imaging technique is robust to low-light, and
adverse weather conditions [7] and the embedded time-of-
flight information can be decoded as depth. Specifically,
Gated2Depth [23] estimates depth from three gated slices
and learns the prediction through a combination of simula-
tion and LiDAR supervision. Building on these findings, re-
cently, Walia et al. [59] proposed a self-supervised training
approach predicting higher-quality depth maps. However,
both methods have in common that they often fail in condi-
tions where the signal-to-noise ratio is low, e.g., in the case
of strong ambient light.
We propose a depth estimation method from gated stereo
observations that exploits both multi-view and time-of-
flight cues to estimate high-resolution depth maps. We
propose a depth reconstruction network that consists of a
monocular depth network per gated camera and a stereo
network that utilizes both active and passive slices from the
gated stereo pair. The monocular network exploits depth-
dependent gated intensity cues to estimate depth in monoc-
ular and low-light regions while the stereo network relies
on active stereo cues. Both network branches are fused in a
learned fusion block. Using passive slices allows us to per-
form robustly under bright daylight where active cues have
a low signal-to-noise ratio due to ambient illumination. To
train our network, we rely on supervised and self-supervised
losses tailored to the stereo-gated setup, including ambient-
aware and illuminator-aware consistency along with multi-
camera consistency. To capture training data and assess the
method, we built a custom prototype vehicle and captured a
stereo-gated dataset under different lighting conditions and
automotive driving scenarios in urban, suburban and high-
way environments across 1000 km of driving.
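Schematically, the prediction path described above can be sketched as follows; the module names and fusion interface are placeholders for illustration, not the released implementation:

```python
import torch
import torch.nn as nn

class GatedStereoDepth(nn.Module):
    """Hypothetical forward pass: monocular depth from a gated camera's
    active+passive slices, stereo depth from the rectified pair, and a
    learned fusion producing the final high-resolution depth map."""
    def __init__(self, mono_net, stereo_net, fusion_net):
        super().__init__()
        self.mono_net = mono_net
        self.stereo_net = stereo_net
        self.fusion_net = fusion_net

    def forward(self, left_slices, right_slices):
        # Monocular branch exploits gated (time-of-flight) intensity cues.
        d_mono = self.mono_net(left_slices)
        # Stereo branch exploits multi-view cues across the wide baseline.
        d_stereo = self.stereo_net(left_slices, right_slices)
        # Learned fusion combines both cues into the final left-view depth.
        return self.fusion_net(torch.cat([d_mono, d_stereo], dim=1))
```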
Specifically, we make the following contributions:
• We propose a novel depth estimation approach us-
ing gated stereo images that generates high-resolution
dense depth maps from multi-view and time-of-flight
depth cues.
• We introduce a depth estimation network with two
different branches for depth estimation, a monocular
branch and a stereo branch, that use active and passive
measurement, and a semi-supervised training scheme
to train the estimator.
• We built a prototype vehicle to capture test and training
data, allowing us to assess the method in long-range
automotive scenes, where we reduce the MAE error by
50 % compared to the next best RGB stereo method and by 74 % compared to existing monocular gated methods for distances up
to 160 m.
|
Wang_Improving_Robust_Generalization_by_Direct_PAC-Bayesian_Bound_Minimization_CVPR_2023 | Abstract
Recent research in robust optimization has shown
an overfitting-like phenomenon in which models trained
against adversarial attacks exhibit higher robustness on
the training set compared to the test set. Although pre-
vious work provided theoretical explanations for this phe-
nomenon using a robust PAC-Bayesian bound over the ad-
versarial test error, related algorithmic derivations are at
best only loosely connected to this bound, which implies
that there is still a gap between their empirical success
and our understanding of adversarial robustness theory.
To close this gap, in this paper we consider a different
form of the robust PAC-Bayesian bound and directly min-
imize it with respect to the model posterior. The derivation
of the optimal solution connects PAC-Bayesian learning to
the geometry of the robust loss surface through a Trace of
Hessian (TrH) regularizer that measures the surface flat-
ness. In practice, we restrict the TrH regularizer to the top
layer only, which results in an analytical solution to the
bound whose computational cost does not depend on the
network depth. Finally, we evaluate our TrH regulariza-
tion approach over CIFAR-10/100 and ImageNet using Vi-
sion Transformers (ViT) and compare against baseline ad-
versarial robustness algorithms. Experimental results show
that TrH regularization leads to improved ViT robustness
that either matches or surpasses previous state-of-the-art
approaches while at the same time requires less memory
and computational cost.
| 1. Introduction
Despite their success in a wide range of fields and tasks,
deep learning models still remain susceptible to manipu-
lating their outputs by even tiny perturbations to the input
[6,7,10,19,30,32,43,47]. Several lines of work have fo-
cused on developing robust training techniques against such
*Work done in Google. †Corresponding author.
Figure 1. We propose Trace of Hessian (TrH) regularization for training adversarially robust models. In addition to an ordinary robust loss (e.g., TRADES [54]), we regularize the TrH of the loss with respect to the weights of the top layer to encourage flatness. The training objective is the result of direct PAC-Bayesian bound minimization in Theorem 3.
adversarial attacks [ 8,20,31,32,36,41,46,48,54]. Im-
portantly, Rice et al. [ 38] observe a robust overfitting phe-
nomenon, referred to as the robust generalization gap , in
which a robustly-trained classifier shows much higher ac-
curacy on adversarial examples from the training set, com-
pared to lower accuracy on the test set. Indeed, several tech-
nical approaches have been developed that could alleviate
this overfitting phenomenon, including `2weight regular-
ization, early stopping [ 38], label smoothing, data augmen-
tation [ 51,53], using synthetic data [ 21] and etc.
According to learning theory, the phenomenon of overfit-
ting can be characterized by a PAC-Bayesian bound [ 4,9,18,
34,42] which upper-bounds the expected performance of a
random classifier over the underlying data distribution by its
performance on a finite set of training points plus some ad-
ditional terms. Although several prior works [ 21,24,26,49]
have built upon insights from the PAC-Bayesian bound,
none attempted to directly minimize the upper bound, likely
due to the fact that the minimization of their forms of the
PAC-Bayesian bound does not have an analytical solution.
In this paper, we rely on a different form of the PAC-
Bayesian bound [ 18], which can be readily optimized using
a Gibbs distribution [ 16] with which we derive a second-
order upper bound over the robust test loss. Interestingly,
the resulting bound consists of a regularization term that in-
volves Trace of Hessian (TrH) [ 12] of the network weights,
a well-known measure of the loss-surface flatness.
For practical reasons, we limit TrH regularization to the
top layer of the network only because computing a Hessian
matrix and its trace for the entire network is too costly. We
further derive the analytical expression of the top-layer TrH
and show both theoretically and empirically that top-layer
TrH regularization has a similar effect as regularizing the
entire network. The resulting TrH regularization (illustrated
in Figure 1) is less expensive and more memory efficient
compared to other competitive methods [ 21,24,26,49].
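In equation form, the training objective described above (and in the Figure 1 caption) can be summarized schematically; the weight λ and the exact robust loss follow the paper's Theorem 3, which is not reproduced here:

```latex
\min_{\theta}\;
\mathcal{L}_{\mathrm{robust}}(\theta)
\;+\;
\lambda \,\operatorname{Tr}\!\Big(\nabla^{2}_{W_{\mathrm{top}}}\,\mathcal{L}_{\mathrm{robust}}(\theta)\Big),
```

where W_top denotes the weights of the network's top (last) layer and L_robust is an ordinary adversarial training loss such as TRADES [54].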
In summary, our contributions are as follows: (1) We
provide a PAC-Bayesian upper-bound over the robust test
loss and show how to directly minimize it (Theorem 3). To
the best of our knowledge, this has not been done by prior
work. Our bound includes a TrH term which encourages the
model parameters to converge at a flat area of the loss func-
tion; (2) Taking efficiency into consideration, we restrict
the TrH regularization to the top layer only (Algorithm 1)
and show that it is an implicit but empirically effective reg-
ularization on the TrH of each internal layer (Theorem 4
and Example 1); and (3) Finally, we conduct experiments
with our new TrH regularization and compare the results
to several baselines using Vision Transformers [ 14]. On
CIFAR-10/100, our method consistently matches or beats
the best baseline. On ImageNet, we report a significant gain
(+2.7%) in robust accuracy compared to the best baseline
and establish a new state-of-the-art result of 48.9%.
|
Wang_Seeing_What_You_Said_Talking_Face_Generation_Guided_by_a_CVPR_2023 | Abstract
Talking face generation, also known as speech-to-lip
generation, reconstructs facial motions concerning lips
given coherent speech input. The previous studies revealed
the importance of lip-speech synchronization and visual
quality. Despite much progress, they hardly focus on the
content of lip movements i.e., the visual intelligibility of the
spoken words, which is an important aspect of generation
quality. To address the problem, we propose using a lip-
reading expert to improve the intelligibility of the gener-
ated lip regions by penalizing the incorrect generation re-
sults. Moreover, to compensate for data scarcity, we train
the lip-reading expert in an audio-visual self-supervised
manner. With a lip-reading expert, we propose a novel
contrastive learning to enhance lip-speech synchronization,
and a transformer to encode audio synchronically with
video, while considering global temporal dependency of au-
dio. For evaluation, we propose a new strategy with two
different lip-reading experts to measure intelligibility of the
generated videos. Rigorous experiments show that our pro-
posal is superior to other State-of-the-art (SOTA) methods,
such as Wav2Lip, in reading intelligibility i.e., over 38%
Word Error Rate (WER) on LRS2 dataset and 27.8% ac-
curacy on LRW dataset. We also achieve the SOTA perfor-
mance in lip-speech synchronization and comparable per-
formances in visual quality.
| 1. Introduction
Talking Face Generation (TFG) aims at generating high-
fidelity talking heads which are temporally synchronized
with the input speech. It plays a significant role in many
Human Robot Interaction (HRI) applications, such as film
dubbing [1], video editing, face animation [2, 3], and com-
munication with people who have hearing loss but are skilled in lip-reading. Thanks to its various practical uses, TFG has
also received increasing attention in both the industrial and research communities over the past decades [4, 5].
∗jiadong.wang@u.nus.edu
†corresponding author (qianxy@ustb.edu.cn)
In TFG, there are two major aspects of concern: lip-
speech synchronization and visual quality. While hu-
mans are sensitive to subtle abnormalities in asynchronized
speech and facial motions [6], the mechanism of speech
production highly relies on lip movements [7]. As a result,
the main challenge of TFG exists in temporal alignment
between the input speech and synthesized video streams.
One solution to this problem is to place an auxiliary em-
bedding network at the end of the generator to analyze the
audio-visual coherence. For example, [8] uses a pre-trained
embedding network [9] as the lip-sync discriminator, while
[10] investigates an asymmetric mutual information estima-
tor. Another solution is to compute a sync loss between
visual lip features of ground truth and generated video se-
quences [11]. Other attempts use the encoder-decoder struc-
ture to facilitate TFG by improving audio and visual repre-
sentations, such as [12, 13], which disentangle the visual
features to enhance audio representations into a shared la-
tent space; or, [14], which disentangles audio factors, i.e.,
emotional and phonetic content, to remove sync-irrelevant
features.
Apart from lip-speech synchronization, blurry or unreal-
istic visual quality also penalizes generation performance.
To preserve defining facial features of target persons, skip
connections [15] are applied [5]. Others [12, 16] employ
Generative Adversarial Nets (GAN) to distinguish real and
synthesized results by modelling the temporal dynamics of
visual outputs. In this way, the resulting models can gen-
erate more plausible talking heads that can be qualitatively
measured by subjective evaluations.
In addition, reading intelligibility should be indispens-
able, but it has not been emphasized. Reading intelligibil-
ity indicates how much text content can be interpreted from
face videos by humans’ lip reading ability, which is espe-
cially significant for hearing-impaired users. However, im-
age quality and lip-speech synchronization do not explicitly
reflect reading intelligibility. Specifically, a well-qualified
image may contain fine-grained lip-sync errors [8] while
precise synchronization may convey incorrect text contents.
According to the McGurk effect [17], when people listen
and see an unpaired but synchronized sequence of speech
and lip movements, they may recognize a phoneme from
audio or video, or a fused artifact.
In this paper, we propose a TalkLip net to synthesize
talking faces by focusing on reading intelligibility. Specifi-
cally, we employ a lip-reading expert which transcribes im-
age sequences to text to penalize the generator for incor-
rectly generated face images. We replace some images of
a face sequence with the generated ones, and feed it to the
lip-reading expert during training to supervise the face gen-
erator.
However, lip reading is hard even for humans. In [18],
four people with equal gender distribution are invited to
read lip movements. However, the average error rate is as
high as 47%. Therefore, a reliable lip-reading model re-
lies on a great amount of data. We employ AV-Hubert [19],
a self-supervised method, which has yielded SOTA perfor-
mance in lip-reading, speech recognition, and audio-visual
speech recognition. The encoders of lip-reading and speech
recognition systems are highly synchronized since they are
supervised by the same pseudo label during pre-training.
Leveraging the lip-reading expert from AV-Hubert,
we propose a new method to enhance lip-speech synchro-
nization. Particularly, we conduct contrastive learning be-
tween audio embeddings for face generation and visual con-
text features from the lip-reading encoder. Besides, the
AV-Hubert also provides a synchronized speech recognition system whose encoder considers long-term temporal dependency; we adopt this encoder to encode audio inputs, instead of the encoders in [8, 11, 13], which only rely on short-term temporal dependency (0.2s audio), or the single-modality (audio) pre-trained encoder in [20]. Our contributions are summarized
as follows:
• We tackle the reading intelligibility problem of speech-
driven talking face generation by leveraging a lip-
reading expert.
• To enhance lip-speech synchronization, we propose
a novel cross-modal contrastive learning strategy, as-
sisted by a lip-reading expert.
• We employ a transformer encoder trained synchron-
ically with the lip-reading expert to consider global
temporal dependency across the entire audio utterance.
• We propose a new strategy to evaluate reading intelli-
gibility for TFG and make the benchmark code pub-
licly available *.
*Code link: https://github.com/Sxjdwang/TalkLip• Extensive experiments demonstrate the feasibility of
our proposal and its superiority over other prevail-
ing methods in reading intelligibility (over 38% WER
on LRS and 27.8% accuracy on LRW). Additionally,
our approach performs comparably to or better than
other SOTA methods in terms of visual quality and lip-
speech synchronization.
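For the cross-modal contrastive strategy described above, a rough InfoNCE-style sketch could look like the following; the flattening over time steps, the temperature, and the assumption that both streams are already projected to a shared dimension are illustrative choices, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def lip_speech_contrastive_loss(audio_emb, visual_ctx, temperature=0.07):
    """Hypothetical lip-speech sync loss: audio embeddings used for face
    generation are pulled toward the visual context features produced by the
    lip-reading encoder at the same time steps, and pushed away from others."""
    a = F.normalize(audio_emb.flatten(0, 1), dim=-1)   # (B*T, D)
    v = F.normalize(visual_ctx.flatten(0, 1), dim=-1)  # (B*T, D)
    logits = a @ v.t() / temperature
    targets = torch.arange(a.shape[0], device=a.device)
    # Symmetric InfoNCE over audio->visual and visual->audio directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```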
|
Xiu_ECON_Explicit_Clothed_Humans_Optimized_via_Normal_Integration_CVPR_2023 | Abstract
The combination of deep learning, artist-curated scans,
and Implicit Functions ( IF), is enabling the creation of de-
tailed, clothed, 3D humans from images. However, existing
methods are far from perfect. IF-based methods recover
free-form geometry, but produce disembodied limbs or de-
generate shapes for novel poses or clothes. To increase
robustness for these cases, existing work uses an explicit
parametric body model to constrain surface reconstruction,
but this limits the recovery of free-form surfaces such as
loose clothing that deviates from the body. What we want is
a method that combines the best properties of implicit repre-
sentation and explicit body regularization. To this end, we
make two key observations: (1) current networks are better
at inferring detailed 2D maps than full-3D surfaces, and (2)
a parametric model can be seen as a “canvas” for stitch-
ing together detailed surface patches. Based on these, our
method, ECON , has three main steps: (1) It infers detailed
2D normal maps for the front and back side of a clothed per-
son. (2) From these, it recovers 2.5D front and back surfaces,
called d-BiNI , that are equally detailed, yet incomplete, and
registers these w.r.t. each other with the help of a SMPL-X body mesh recovered from the image. (3) It “inpaints” the
missing geometry between d-BiNI surfaces. If the face and
hands are noisy, they can optionally be replaced with the
ones of SMPL-X . As a result, ECON infers high-fidelity 3D
humans even in loose clothes and challenging poses. This
goes beyond previous methods, according to the quantitative
evaluation on the CAPE and Renderpeople datasets. Per-
ceptual studies also show that ECON ’s perceived realism is
better by a large margin. Code and models are available for
research purposes at econ.is.tue.mpg.de
| 1. Introduction
Human avatars will be key for future games and movies,
mixed-reality, tele-presence and the “metaverse”. To build re-
alistic and personalized avatars at scale, we need to faithfully
reconstruct detailed 3D humans from color photos taken in
the wild. This is still an open problem, due to its challenges;
people wear all kinds of different clothing and accessories,
and they pose their bodies in many, often imaginative, ways.
A good reconstruction method must accurately capture these,
while also being robust to novel clothing and poses.
Initial, promising, results have been made possible by
using artist-curated scans as training data, and implicit func-
tions ( IF) [56,59] as the 3D representation. Seminal work on
PIFu(HD) [70, 71] uses “pixel-aligned” IF and reconstructs
clothed 3D humans with unconstrained topology. However,
these methods tend to overfit to the poses seen in the training
data, and have no explicit knowledge about the human body’s
structure. Consequently, they produce disembodied limbs or
degenerate shapes for images with novel poses; see the 2nd
row of Fig. 2. Follow-up work [26, 82, 96] accounts for such
artifacts by regularizing the IFusing a shape prior provided
by an explicit body model [52, 61], but regularization intro-
duces a topological constraint, restricting generalization to
novel clothing while attenuating shape details; see the 3rd
and 4th rows of Fig. 2. In a nutshell, there are trade-offs
between robustness, generalization and detail.
What we want is the best of both worlds ; that is, the
robustness of explicit anthropomorphic body models, and
the flexibility of IFto capture arbitrary clothing topology. To
that end, we make two key observations: (1) While inferring
detailed 2D normal maps from color images is relatively
easy [31, 71, 82], inferring 3D geometry with equally fine
details is still challenging [9]. Thus, we exploit networks
to infer detailed “geometry-aware” 2D maps that we then
lift to 3D. (2) A body model can be seen as a low-frequency
“canvas” that “guides” the stitching of detailed surface parts.
With these in mind, we develop ECON , which stands
for “Explicit Clothed humans Optimized via Normal inte-
gration”. It takes, as input, an RGB image and a SMPL-X
body inferred from the image. Then, it outputs a 3D human
in free-form clothing with a level of detail and robustness
that goes beyond the state of the art ( SOTA ); see the bottom
of Fig. 2. Specifically, ECON has three steps .
Step 1: Front & back normal reconstruction. We
predict front- and back-side clothed-human normal maps
from the input RGB image, conditioned on the body estimate,
with a standard image-to-image translation network.
Step 2: Front & back surface reconstruction. We take
the previously predicted normal maps, and the correspond-
ing depth maps that are rendered from the SMPL-X mesh,
to produce detailed and coherent front-/back-side 3D sur-
faces, {MF, MB}. To this end, we extend the recent BiNI
method [7], and develop a novel optimization scheme that is
aimed at satisfying three goals for the resulting surfaces: (1)
their high-frequency components agree with clothed-human
normals, (2) their low-frequency components and the dis-
continuities agree with the SMPL-X ones, and (3) the depth
values on their silhouettes are coherent with each other and
consistent with the SMPL-X -based depth maps. The two out-
put surfaces, {MF, MB}, are detailed yet incomplete, i.e.,
there is missing geometry in occluded and “profile” regions.
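To make the three goals of this optimization more concrete, the following is a minimal, hypothetical sketch of an energy with the same three ingredients. It is not ECON's actual d-BiNI formulation: the weights, the crude blur, and the finite-difference discretization are placeholder choices.

```python
import numpy as np

def dbini_style_energy(d_front, d_back, n_front, n_back,
                       d_smplx_front, d_smplx_back, silhouette,
                       w_normal=1.0, w_smplx=0.1, w_silh=0.1):
    """Toy energy with the three ingredients described above.

    d_*: (H, W) depth maps being optimized; n_*: (H, W, 3) target normal maps;
    d_smplx_*: (H, W) depth maps rendered from the SMPL-X body;
    silhouette: (H, W) boolean mask of the shared silhouette.
    All weights are made-up placeholders.
    """
    def normal_term(d, n):
        # High-frequency goal: depth gradients should agree with the
        # slopes implied by the predicted clothed-human normals.
        gy, gx = np.gradient(d)
        nz = np.clip(np.abs(n[..., 2]), 1e-3, None)
        return np.mean((gx + n[..., 0] / nz) ** 2 + (gy + n[..., 1] / nz) ** 2)

    def low_freq_term(d, d_smplx):
        # Low-frequency goal: coarse (blurred) depth should stay close
        # to the SMPL-X rendered depth.
        k = 15
        blur = lambda x: np.convolve(x.ravel(), np.ones(k) / k, "same").reshape(x.shape)
        return np.mean((blur(d) - blur(d_smplx)) ** 2)

    # Silhouette goal: front and back depths should meet at the outline.
    silh_term = np.mean((d_front[silhouette] - d_back[silhouette]) ** 2)

    return (w_normal * (normal_term(d_front, n_front) + normal_term(d_back, n_back))
            + w_smplx * (low_freq_term(d_front, d_smplx_front) + low_freq_term(d_back, d_smplx_back))
            + w_silh * silh_term)
```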
Figure 2. Summary of SOTA. PIFuHD [71] recovers clothing
details, but struggles with novel poses. ICON [82] and PaMIR [96]
regularize shape to a body shape, but over-constrain the skirts, or
over-smooth the wrinkles. ECON combines their best aspects.
Step 3: Full 3D shape completion. This module takes
two inputs: (1) the SMPL-X mesh, and (2) the two d-BiNI
surfaces, {MF, MB}. The goal is to "inpaint" the missing
geometry. Existing methods struggle with this problem. On
one hand, Poisson reconstruction [38] produces “blobby”
shapes and naively “infills” holes without exploiting a shape
distribution prior. On the other hand, data-driven approaches,
such as IF-Nets [10], struggle with missing parts caused by
(self-)occlusions, and fail to keep the fine details present on
two d-BiNI surfaces, producing degenerate geometries.
We address the above limitations in two steps: (1) We ex-
tend and re-train IF-Nets to be conditioned on the SMPL-X
body, so that SMPL-X regularizes shape “infilling”. We dis-
card the triangles that lie close to {MF, MB}, and keep the
remaining ones as “infilling patches”. (2) We stitch together
the front- and back-side surfaces and infilling patches via
Poisson reconstruction; note that holes between these are
small enough for a general purpose method. The result is a
full 3D shape of a clothed human; see Fig. 2, bottom.
We evaluate ECON both on established benchmarks
(CAPE [55] and Renderpeople [66]) and in-the-wild images.
Quantitative analysis reveals ECON ’s superiority. A percep-
tual study echoes this, showing that ECON is significantly
preferred over competitors on challenging poses and loose
clothing, and competitive with PIFuHD on fashion images.
Qualitative results show that ECON generalizes better than
theSOTA to a wide variety of poses and clothing, even with
extreme looseness or complex topology; see Fig. 9.
With both pose-robustness and topological flexibility,
ECON recovers 3D clothed humans with a good level of
detail and realistic pose. Code and models are available for
research purposes at econ.is.tue.mpg.de
|
Wu_Cap4Video_What_Can_Auxiliary_Captions_Do_for_Text-Video_Retrieval_CVPR_2023 | Abstract
Most existing text-video retrieval methods focus on
cross-modal matching between the visual content of videos
and textual query sentences. However, in real-world sce-
narios, online videos are often accompanied by relevant text
information such as titles, tags, and even subtitles, which
can be utilized to match textual queries. This insight has
motivated us to propose a novel approach to text-video
retrieval, where we directly generate associated captions
from videos using zero-shot video captioning with knowl-
edge from web-scale pre-trained models (e.g., CLIP and
GPT-2). Given the generated captions, a natural ques-
tion arises: what benefits do they bring to text-video re-
trieval? To answer this, we introduce Cap4Video, a new
framework that leverages captions in three ways: i) Input
data: video-caption pairs can augment the training data. ii)
Intermediate feature interaction: we perform cross-modal
feature interaction between the video and caption to pro-
duce enhanced video representations. iii) Output score:
the Query-Caption matching branch can complement the
original Query-Video matching branch for text-video re-
trieval. We conduct comprehensive ablation studies to
demonstrate the effectiveness of our approach. Without any
post-processing, Cap4Video achieves state-of-the-art per-
formance on four standard text-video retrieval benchmarks:
MSR-VTT (51.4%), VATEX (66.6%), MSVD (51.8%), and
DiDeMo (52.0%). The code is available at https://
github.com/whwu95/Cap4Video .
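As a rough illustration of the output-score use of captions (iii), the sketch below fuses a Query-Video similarity with a Query-Caption similarity when ranking videos for a text query. The mixing weight alpha and the pooling of all captions into one embedding per video are assumptions for the sketch, not the paper's exact design.

```python
import numpy as np

def fused_retrieval_scores(query_emb, video_embs, caption_embs, alpha=0.5):
    """Rank videos for one text query by mixing the two branches.

    query_emb: (D,) text-query embedding; video_embs: (N, D) video embeddings;
    caption_embs: (N, D) one pooled caption embedding per video.
    alpha is a made-up mixing weight; embeddings are assumed L2-normalized
    (e.g., coming from a CLIP-style encoder).
    """
    query_video = video_embs @ query_emb      # Query-Video matching branch
    query_caption = caption_embs @ query_emb  # Query-Caption matching branch
    scores = alpha * query_video + (1.0 - alpha) * query_caption
    return np.argsort(-scores), scores        # best-matching videos first

# Toy usage with random, normalized embeddings.
rng = np.random.default_rng(0)
normalize = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
q = normalize(rng.normal(size=64))
V = normalize(rng.normal(size=(5, 64)))
C = normalize(rng.normal(size=(5, 64)))
ranking, _ = fused_retrieval_scores(q, V, C)
```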
| 1. Introduction
Text-video retrieval is a fundamental task in video-
language learning. With the rapid advancements in image-
language pre-training [15, 30, 46, 47], researchers have fo-
cused on expanding pre-trained image-language models, es-
pecially CLIP [30], to tackle the text-video retrieval task.
The research path has evolved from the most direct global
*Equal contribution.
Figure 1. (a) An existing end-to-end learning paradigm for text-
video retrieval. (b) Zero-shot video captioning achieved by guid-
ing a large language model (LLM) such as GPT-2 [31] with
CLIP [30]. (c) Our Cap4Video framework leverages the generated
captions in three aspects: input data augmentation, intermediate
feature interaction, and output score fusion.
matching ( i.e., video-sentence alignment [11, 24]) to fine-
grained matching ( e.g., frame-word alignment [36], video-
word alignment [13], multi-hierarchical alignment [9, 28],
etc.). These studies have demonstrated remarkable per-
formance and significantly outperformed previous models.
Two key factors contribute to this improvement. Firstly,
CLIP offers powerful visual and textual representations that
are pre-aligned in the semantic embedding space, thereby
reducing the challenge of cross-modal learning in video-
text matching. Secondly, these methods can fine-tune the
pre-trained vision and text encoders using sparsely sampled
frames in an end-to-end manner. All of these methods aim
to learn cross-modal alignment between the visual represen-
tation of videos and the textual representation of the corre-
sponding query, as depicted in Figure 1(a).
However, in real-life scenarios, online videos usually
come with related content such as the video’s title or tag
on the video website. In addition to the visual signal in the
video, the associated textual information can also be used
to some extent to describe the video content and match the
query ( i.e., the common text-to-text retrieval). This raises a
pertinent question: How can we generate associated text de-
scriptions for videos? One possible solution is to crawl the
video title from the video website. However, this method
relies on annotations, and there is a risk that the video URL
may have become invalid. Another automated solution is
to generate captions using zero-shot video caption mod-
els. Therefore, we turn our attention to knowledge-rich pre-
trained models to handle such challenging |
Wang_Practical_Network_Acceleration_With_Tiny_Sets_CVPR_2023 | Abstract
Due to data privacy issues, accelerating networks with
tiny training sets has become a critical need in practice.
Previous methods mainly adopt filter-level pruning to ac-
celerate networks with scarce training samples. In this pa-
per, we reveal that dropping blocks is a fundamentally su-
perior approach in this scenario. It enjoys a higher ac-
celeration ratio and results in a better latency-accuracy
performance under the few-shot setting. To choose which
blocks to drop, we propose a new concept namely recov-
erability to measure the difficulty of recovering the com-
pressed network. Our recoverability is efficient and effec-
tive for choosing which blocks to drop. Finally, we propose
an algorithm named PRACTISE to accelerate networks us-
ing only tiny sets of training images. PRACTISE outper-
forms previous methods by a significant margin. For 22%
latency reduction, PRACTISE surpasses previous methods
by on average 7% on ImageNet-1k. It also enjoys high
generalization ability, working well under data-free or out-
of-domain data settings, too. Our code is at https:
//github.com/DoctorKey/Practise .
| 1. Introduction
In recent years, convolutional neural networks (CNNs)
have achieved remarkable success, but they suffer from high
computational costs. To accelerate the networks, many net-
work compression methods have been proposed, such as
network pruning [11,18,20,22], network decoupling [6,15]
and network quantization [2, 7]. However, most previous
methods rely on the original training set (i.e., all the train-
ing data) to recover the model’s accuracy. But, to preserve
data privacy and/or to achieve fast deployment, only scarce
training data may be available in many scenarios.
For example, a customer often asks the algorithmic
provider to accelerate their CNN models, but due to privacy
*J. Wu is the corresponding author. This research was partly sup-
ported by the National Natural Science Foundation of China under Grant
62276123 and Grant 61921006.
Figure 1. Comparison of different compression schemes with only
500 training images. We propose dropping blocks for few-shot
network acceleration. Our method (‘Block’) outperforms previ-
ous methods dominantly for the latency-accuracy tradeoff. The
ResNet-34 model was compressed on ImageNet-1k and all laten-
cies were tested on an NVIDIA TITAN Xp GPU.
concerns, the whole training data cannot be available. Only
the raw uncompressed model and a few training examples
are presented to the algorithmic provider. In some extreme
cases, not even a single data point is to be provided. The
algorithmic engineers need to synthesize images or collect
some out-of-domain training images by themselves. Hence,
to learn or tune a deep learning model with only very few
samples is emerging as a critical problem to be solved.
In this few-shot compression scenario, most previous
works [1,12,30] adopt filter-level pruning. However, it can-
not achieve a high acceleration ratio on real-world com-
puting devices (e.g., on GPUs). To make compressed mod-
els indeed run faster than the uncompressed models, lots of
FLOPs (number of floating point operations) are required to
be reduced by filter-level pruning. And without the whole
training dataset, it is difficult to recover the compressed
model’s accuracy. Hence, previous few-shot compression
Method      KD [10]   FSKD [12]   CD [1]   MiR [30]   BP (blocks)
Top-1 (%)   44.5      45.3        56.2     64.1       66.5
Table 1. Top-1 validation accuracy (%) on ImageNet-1k for differ-
ent compression schemes. ResNet-34 was accelerated by reducing
16% latency with 50 training images. Previous methods prune fil-
ters with the ‘normal’ style. For the block-level pruning, we sim-
ply remove the first k blocks and finetune the pruned network by
back propagation, i.e., ‘BP (blocks)’ in this table.
methods often exhibit a poor latency (wall-clock timing) vs.
accuracy tradeoff.
In this paper, we advocate that we need to focus on
latency-accuracy rather than FLOPs-accuracy, and reveal
that block-level pruning is fundamentally superior in the
few-shot compression scenario. Compared to pruning fil-
ters, dropping blocks enjoys a higher acceleration ratio.
Therefore it can keep more capacity from the original model
and its accuracy is easier to be recovered by a tiny train-
ing set under the same latency when compared with filter
pruning. Fig. 1 shows dropping blocks dominantly out-
performs previous compression schemes for the latency-
accuracy tradeoff. Table 1 further reports that an em-
barrassingly simple dropping block baseline (i.e., finetune
without any other processing) has already surpassed exist-
ing methods which use complicated techniques . The base-
line, ‘BP (blocks)’, simply removes the first few blocks and
finetunes the pruned network with the cross-entropy loss.
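A minimal sketch of this 'BP (blocks)' baseline is given below, assuming the backbone exposes its same-shape residual blocks as an nn.Sequential; real architectures need a model-specific surgery step, and the optimizer settings are placeholders rather than the paper's recipe.

```python
import torch
import torch.nn as nn

def drop_first_blocks(blocks: nn.Sequential, k: int) -> nn.Sequential:
    """Remove the first k blocks; only valid when the kept blocks still accept
    the incoming feature shape (true for a run of same-shape residual blocks)."""
    return nn.Sequential(*list(blocks.children())[k:])

def finetune_tiny(model, tiny_loader, epochs=20, lr=1e-3):
    """Plain 'BP (blocks)' recovery: cross-entropy on the tiny labeled set."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in tiny_loader:
            opt.zero_grad()
            ce(model(images), labels).backward()
            opt.step()
    return model
```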
To further improve block pruning, we study the strat-
egy for choosing which blocks to drop, especially when
only scarce training samples are available. Several cri-
teria [21, 31, 34] have been proposed for pruning blocks
on the whole dataset. However, some [31, 34] require a
large amount of data for choosing, whereas others [21]
only evaluate the output difference before/after block re-
moval. In this paper, we notice that although dropping
some blocks significantly changes the feature maps, they are
easily recovered by end-to-end finetuning even with a tiny
training set. So simply measuring the difference between
pruned/original networks is not valid. To deal with these
problems, a new concept namely recoverability is proposed
in this paper for better indicating blocks to drop. And we
propose a method to compute it efficiently, with only a few
training images. At last, our recoverability is surprisingly
consistent with the accuracy of the finetuned network.
Finally, we propose PRACTISE, namely Practical net-
work acceleration with tiny sets of images, to effectively
accelerate a network with scarce data. PRACTISE signifi-
cantly outperforms previous few-shot pruning methods. For
22.1% latency reduction, PRACTISE surpasses the previous
state-of-the-art (SOTA) method on average by 7.0% (per-
centage points, not relative improvement) Top-1 accuracy on ImageNet-1k. It is also robust and enjoys high gener-
alization ability which can be used on synthesized/out-of-
domain images. Our contributions are:
•We argue that the FLOPs-accuracy tradeoff is a mis-
leading metric for few-shot compression, and advocate that
the latency-accuracy tradeoff (which measures real runtime
on devices) is more crucial in practice. For the first time, we
find that in terms of latency vs. accuracy, block pruning is
an embarrassingly simple but powerful method—dropping
blocks with simple finetuning has already surpassed previ-
ous methods (cf. Table 1). Note that although dropping
blocks is previously known, we are the first to reveal its
great potential in few-shot compression , which is both a sur-
prising and an important finding.
•To further boost the latency-accuracy performance of
block pruning, we study the optimal strategy to drop blocks.
A new concept recoverability is proposed to measure the
difficulty of recovering each block, and in determining the
priority to drop blocks. Then, we propose PRACTISE, an al-
gorithm for accelerating networks with tiny sets of images.
•Extensive experiments demonstrate the extraordinary
performance of our PRACTISE. In both the few-shot and
even the extreme data-free scenario, PRACTISE improves
results by a significant margin. It is versatile and widely
applicable for different network architectures, too.
|
Wang_Hard_Patches_Mining_for_Masked_Image_Modeling_CVPR_2023 | Abstract
Masked image modeling (MIM) has attracted much
research attention due to its promising potential for learning
scalable visual representations. In typical approaches,
models usually focus on predicting specific contents of
masked patches, and their performances are highly related
to pre-defined mask strategies. Intuitively, this procedure can
be considered as training a student (the model) on solving
given problems (predict masked patches). However, we
argue that the model should not only focus on solving given
problems, but also stand in the shoes of a teacher to produce
a more challenging problem by itself. To this end, we propose
Hard Patches Mining (HPM), a brand-new framework for
MIM pre-training. We observe that the reconstruction loss
can naturally be the metric of the difficulty of the pre-
training task. Therefore, we introduce an auxiliary loss
predictor, predicting patch-wise losses first and deciding
where to mask next. It adopts a relative relationship learning
strategy to prevent overfitting to exact reconstruction loss
values. Experiments under various settings demonstrate
the effectiveness of HPM in constructing masked images.
Furthermore, we empirically find that solely introducing the
loss prediction objective leads to powerful representations,
verifying the efficacy of the ability to be aware of where is
hard to reconstruct.1
| 1. Introduction
Self-supervised learning [6, 8, 9, 18, 20], with the goal
of learning scalable feature representations from large-
scale datasets without any annotations, has been a research
hotspot in computer vision (CV). Inspired by masked
1Code: https://github.com/Haochen-Wang409/HPM
Figure 1. Comparison between conventional MIM pre-training
paradigm and our proposed HPM. (a)Conventional approaches
can be interpreted as training a student , where the model is only
equipped with the ability to solve a given problem under some
pre-defined mask strategies. (b)Our proposed HPM pre-training
paradigm makes the model both a teacher and a student, with
the extra ability to produce a challenging pretext task .
language modeling (MLM) [4,11,44,45] in natural language
processing (NLP), where the model is urged to predict
masked words within a sentence, masked image modeling
(MIM), the counterpart in CV , has attracted numerous
interests of researchers [3, 13, 19, 26, 42, 61, 66, 69].
Fig. 1a illustrates the paradigm of conventional ap-
proaches for MIM pre-training [3, 19, 67]. In these typical
solutions, models usually focus on predicting specific
contents of masked patches. Intuitively, this procedure
can be considered as training a student ( i.e., the model)
on solving given problems ( i.e., predict masked patches).
To alleviate the spatial redundancy in CV [19] and produce
Figure 2. Visual comparison between reconstruction loss and discriminativeness on the ImageNet validation set. We load the pre-trained
ViT-B/16 [14] provided by MAE [19]. For each tuple, we show the (a)input image ,(b)patch-wise reconstruction loss averaged over 10
different masks, (c)predicted loss , and (d)masked images generated by the predicted loss ( i.e., patches with top 75% predicted loss are
masked). Red means higher loss while blue indicates the opposite. Discriminative parts tend to be hard to reconstruct .
a challenging pretext task, mask strategies become critical,
which are usually generated under pre-defined manners, e.g.,
random masking [19], block-wise masking [3], and uniform
masking [29]. However, we argue that a difficult pretext
task is not all we need, and not only learning to solve the
MIM problem is important, but also learning to produce
challenging tasks is crucial. In other words, as shown in
Fig. 1b, by learning to create challenging problems and
solving them simultaneously , the model can stand in the
shoes of both a student and a teacher , being forced to hold a
more comprehensive understanding of the image contents,
and thus leading itself by generating a more desirable task.
To this end, we propose Hard Patches Mining (HPM) , a
new training paradigm for MIM. Specifically, given an input
image, instead of generating a binary mask under a manually-
designed criterion, we first let the model be a teacher to
produce a demanding mask, and then train the model to
predict masked patches as a student just like conventional
methods. Through this way, the model is urged to learn
where it is worth being masked, and how to solve the problem
at the same time. Then, the question becomes how to design
the auxiliary task, to make the model aware of where the
hard patches are.
Intuitively, we observe that the reconstruction loss can
be naturally a measure of the difficulty of the MIM task,
which can be verified by the first two elements of each tuplein Fig. 2, where the backbone2pre-trained by MAE [19]
with 1600 epochs is used for visualization. As expected,
we find that those discriminative parts of an image ( e.g.,
object) are usually hard to reconstruct, resulting in larger
losses. Therefore, by simply urging the model to predict
reconstruction loss for each patch, and then masking those
patches with higher predicted losses, we can obtain a more
formidable MIM task. To achieve this, we introduce an
auxiliary loss predictor, predicting patch-wise losses first
and deciding where to mask next based on its outputs. To
prevent it from being overwhelmed by the exact values of
reconstruction losses and make it concentrate on the relative
relationship among patches , we design a novel relative loss
based on binary cross-entropy as the objective. We further
evaluate the effectiveness of the loss predictor using a ViT-B
under 200 epochs pre-training in Fig. 2. As the last two
elements for each tuple in Fig. 2 suggest, patches with larger
predicted losses tend to be discriminative, and thus masking
these patches brings a challenging situation, where objects
are almost masked. Meanwhile, considering the training
evolution, we come up with an easy-to-hard mask generation
strategy, providing some reasonable hints at the early stages.
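One plausible way to instantiate such a relative objective is a RankNet-style pairwise binary cross-entropy over predicted per-patch losses, sketched below together with the hard-patch masking step; HPM's exact relative loss and its easy-to-hard schedule may differ from this.

```python
import torch
import torch.nn.functional as F

def relative_ranking_loss(pred_loss, target_loss):
    """Pairwise ranking objective on patch-wise reconstruction losses.

    pred_loss, target_loss: (B, N) predicted / actual per-patch losses.
    For every patch pair (i, j) the label is 1 if patch i is truly harder
    than patch j, and the predictor is trained with binary cross-entropy on
    sigmoid(pred_i - pred_j), so only the ordering matters, not the scale.
    """
    diff_pred = pred_loss.unsqueeze(2) - pred_loss.unsqueeze(1)            # (B, N, N)
    labels = (target_loss.unsqueeze(2) > target_loss.unsqueeze(1)).float()
    return F.binary_cross_entropy_with_logits(diff_pred, labels)

def hard_patch_mask(pred_loss, mask_ratio=0.75):
    """Mask the patches predicted to be hardest to reconstruct."""
    num_mask = int(pred_loss.shape[1] * mask_ratio)
    idx = pred_loss.argsort(dim=1, descending=True)[:, :num_mask]
    mask = torch.zeros_like(pred_loss, dtype=torch.bool)
    return mask.scatter(1, idx, True)
```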
Empirically, we observe significant and consistent im-
provements over the supervised baseline and vanilla MIM
2https://dl.fbaipublicfiles.com/mae/visualize/mae_visualize_vit_base.pth
pre-training under various settings. Concretely, with only
800 epochs pre-training, HPM achieves 84.2% and 85.8%
Top-1 accuracy on ImageNet-1K [49] using ViT-B and ViT-
L, outperforming MAE [19] pre-trained with 1600 epochs
by +0.6% and +0.7%, respectively.
|
Wang_FeatureBooster_Boosting_Feature_Descriptors_With_a_Lightweight_Neural_Network_CVPR_2023 | Abstract
We introduce a lightweight network to improve descrip-
tors of keypoints within the same image. The network takes
the original descriptors and the geometric properties of key-points as the input, and uses an MLP-based self-boosting
stage and a Transformer-based cross-boosting stage to en-hance the descriptors. The boosted descriptors can be ei-
ther real-valued or binary ones. We use the proposed net-
work to boost both hand-crafted (ORB [ 34], SIFT [ 24]) and
the state-of-the-art learning-based descriptors (SuperPoint
[10], ALIKE [ 53]) and evaluate them on image matching,
visual localization, and structure-from-motion tasks. The results show that our method significantly improves the performance of each task, particularly in challenging cases
such as large illumination changes or repetitive patterns.
Our method requires only 3.2ms on desktop GPU and 27ms on embedded GPU to process 2000 features, which is fast
enough to be applied to a practical system. The code and
trained weights are publicly available at github.com/SJTU-
ViSYS/FeatureBooster .
| 1. Introduction
Extracting sparse keypoints or local features from an im-
age is a fundamental building block in various computer vi-
sion tasks, such as structure from motion (SfM), simultane-
ous localization and mapping (SLAM), and visual localization. The feature descriptor, represented by a real-valued or
binary descriptor, plays a key role in matching those keypoints across different images.
The descriptors are commonly hand-crafted in the early
days. Recently, learning-based descriptors [10, 53] have
*Corresponding Author: Danping Zou ( dpzou@sjtu.edu.cn ).
This work was supported by National Key R&D Program (2022YFB3903802) and the National Natural Science Foundation of China (62073214).
Figure 1. ORB descriptors perform remarkably better in challeng-
ing cases after being boosted by the proposed lightweight network.
Left column : Matching results of using raw ORB descriptors.
Right column : Results of using boosted ORB descriptors. Near-
est neighbor search and RANSAC [ 14] were used for matching.
shown to be more powerful than hand-crafted ones, espe-
cially in challenging cases such as significant viewpoint
and illumination changes. Both hand-crafted and learning-
based descriptors have been shown to work well in practice. Some of them have become default descriptors for some applications. For example, the simple binary descriptor ORB
[34] is widely used for SLAM systems [ 20,29]. SIFT [ 24]
is typically used in structure-from-motion systems.
Considering that the descriptors have already been inte-
grated into practical systems, replacing them with totally new ones can be problematic, as it may require more computing power that may not be supported by the existing hardware, or sometimes require extensive modifications to
the software because of changed descriptor type ( e.g. from
binary to real).
In this work, we attempt to reuse existing descriptors
and enhance their discrimination ability with as little com-
putational overhead as possible. To this end, we propose
a lightweight network to improve the original descriptors.
The input of this network is the descriptors and the geomet-
ric properties such as the 2D locations of all the keypoints
within the entire image. Each descriptor is firstly processed by an MLP (Multi-layer perceptron) and summed with geometric properties encoded by another MLP. The new geometrically encoded descriptors are then aggregated by an
efficient Transformer to produce powerful descriptors that
are aware of the high-level visual context and spatial layout of those keypoints. The enhanced descriptors can be
The core idea of our approach, motivated by recent work
[25,36,41], is integrating the visual and geometric infor-
mation of all the keypoints into individual descriptors by a
Transformer. This can be better understood intuitively by
considering when people are asked to find correspondences
between images, they would check all the keypoints and
the spatial layout of those keypoints in each image. With
the help of the global receptive field in Transformer, the
boosted descriptors contain global contextual informationthat makes them more robust and discriminative as shown
in Fig. 1.
We apply our FeatureBooster to both hand-crafted de-
scriptors (SIFT [ 24], ORB [ 34]) and the state-of-the-art
learning-based descriptors (SuperPoint [ 10], ALIKE [ 53]).
We evaluated the boosted descriptors on tasks includingimage matching, visual localization, and structure-from-motion. The results show that our method can significantlyimprove the performance of each task by using our boosted
descriptors.
Because FeatureBooster does not need to process the im-
age and adopts a lightweight Transformer, it is highly effi-
cient. It takes only 3.2ms on NVIDIA RTX 3090 and 27mson NVIDIA Jetson Xavier NX (for embedded devices) toboost 2000 features, which makes our method applicable topractical systems.
|
Wang_Image_Cropping_With_Spatial-Aware_Feature_and_Rank_Consistency_CVPR_2023 | Abstract
Image cropping aims to find visually appealing crops in
an image. Despite the great progress made by previous
methods, they are weak in capturing the spatial relationship
between crops and aesthetic elements ( e.g., salient objects,
semantic edges). Besides, due to the high annotation cost
of labeled data, the potential of unlabeled data remains to be
exploited. To address the first issue, we propose spatial-
aware feature to encode the spatial relationship between
candidate crops and aesthetic elements, by feeding the con-
catenation of crop mask and selectively aggregated feature
maps to a light-weighted encoder. To address the second
issue, we train a pair-wise ranking classifier on labeled
images and transfer such knowledge to unlabeled images
to enforce rank consistency. Experimental results on the
benchmark datasets show that our proposed method per-
forms favorably against state-of-the-art methods.
| 1. Introduction
The task of image cropping aims to find good crops in
an image that can improve the image quality and meet aes-
thetic requirement. Image cropping is a prevalent and criti-
cal operation in numerous photography-related applications
like image thumbnailing, view recommendation, and cam-
era view adjustment suggestion.
Many researchers [2, 4–7, 12, 21, 23, 36, 43, 46, 52, 54,
60, 62, 63] have studied automatic image cropping in the
past decades with the goal to reduce the workload of man-
ual cropping. Earlier works [2, 3, 12, 31, 43, 44] mainly
used saliency detection [49, 59] to detect salient objects
and crop around salient objects. Another group of meth-
ods [6, 12, 26, 33, 54, 62] designed hand-crafted features
to represent specific composition rules in photography.
With the construction of moderate-sized image cropping
datasets [4, 52, 54, 56], recently proposed image cropping
methods [4, 5, 7, 21, 23, 36, 52, 56, 57, 63] are usually data-
driven and directly learn how to crop visually ap-
*Corresponding author
Figure 1. Two examples of the spatial relationship between crops
(yellow bounding box) and aesthetic elements ( e.g., semantic
edges and salient objects). The first column shows the source im-
ages, and the second ( resp. , third) column shows their low-level
(resp. , high-level) feature maps extracted by a pre-trained Mo-
bileNetv2 [39] network with channel-wise max pooling. It can
be seen that low-level feature maps emphasize semantic edges and
high-level feature maps highlight salient objects.
pealing views from the labeled data. Although these ap-
proaches have achieved impressive improvement on image
cropping task, there still exist some drawbacks which will
be discussed below.
One problem is that when considering the spatial rela-
tionship between crops and aesthetic elements ( e.g., salient
objects, semantic edges), which is very critical for image
cropping, previous methods usually designed some intu-
itive rules. For example, the crop should enclose the salient
object [2, 43, 44], or should not cut through the semantic
edges [2, 54]. However, these hand-crafted rules did not
consider the spatial layout of all aesthetic elements as a
whole, and may not generalize well to various scenes be-
cause the rules designed for specific subjects can not cover
complex image cropping principles [10].
In this work, we explore learnable spatial-aware fea-
tures, which encode the spatial relationship between crops
and aesthetic elements. We observe that the feature map
obtained using channel-wise max pooling can emphasize
some aesthetic elements. In Figure 1, we show several
pooled feature maps from MobileNetv2 [39], from which
it can be seen that the low-level feature maps emphasize
semantic edges ( e.g., the outlines of semantic objects and
regions) and the high-level feature maps emphasize salient
objects ( e.g., bird, balloon). With concatenated feature
maps from different layers, we learn channel attention [16]
to select important layers. The weighted feature maps are
concatenated with candidate crop masks and sent to a light-
weighted encoder to produce spatial-aware features. The
extracted spatial-aware features encode the spatial relation-
ship between candidate crops and aesthetic elements with-
out being limited by any hand-crafted rules.
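A minimal sketch of such a spatial-aware feature extractor is shown below. The channel counts, the squeeze-and-excitation-style channel attention, and the light convolutional encoder are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialAwareFeature(nn.Module):
    """Reweight pooled multi-level feature maps with channel attention,
    stack a binary crop mask on top, and encode with a small CNN."""

    def __init__(self, in_channels=8, out_dim=64):
        super().__init__()
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_channels, in_channels), nn.Sigmoid())
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels + 1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, feat_maps, crop_mask):
        # feat_maps: (B, C, H, W) channel-wise pooled maps from several layers;
        # crop_mask: (B, 1, H, W) binary mask of one candidate crop.
        w = self.attn(feat_maps).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        x = torch.cat([feat_maps * w, crop_mask], dim=1)
        return self.encoder(x)                                 # (B, out_dim)
```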
Another problem is that the cost of crop annotation is
very high and the performance is limited by the scale of
the annotated training set. Therefore, some previous works
explored how to utilize unlabeled data to improve the crop-
ping performance. For example, VFN [5] collects unlabeled
professional photographs from public websites and perform
pairwise ranking based on the assumption that the entire
image has higher aesthetic quality than any of its crops.
However, such assumption does not always hold obviously.
VPN [52] used a pre-trained network VEN [52] to predict
aesthetic scores for the crops from unlabeled images, which
function as pseudo labels to supervise training a new net-
work. However, the predicted pseudo labels may be very
noisy and provide misleading guidance.
In this work, we explore transferring ranking knowl-
edge from labeled images to unlabeled images. Specifically,
given two annotated crops from a labeled image, we learn a
binary pairwise ranking classifier to judge which crop has
higher aesthetic quality, by sending the concatenation of
two crop features to a fully connected layer. We expect that
the knowledge of comparing the aesthetic quality of two
crops with similar content could be transferred to unlabeled
data. Given two unannotated crops from an unlabeled im-
age, we can obtain two types of ranks. On the one hand, we
can rank them according to the predicted crop-level scores.
On the other hand, we can employ the pairwise ranking clas-
sifier to get the rank. Then, we enforce two types of ranks
to be consistent.
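One simple way to express this consistency as a training loss is sketched below, using the pairwise classifier's decision as a pseudo-label for the ordering of the two crop-level scores; the paper's exact coupling of the two ranks may differ.

```python
import torch
import torch.nn.functional as F

def rank_consistency_loss(score1, score2, pair_logit):
    """Encourage the two ranks to agree on an unlabeled crop pair.

    score1, score2: (B,) crop-level aesthetic scores from the scoring branch.
    pair_logit: (B,) logit of the pairwise ranking classifier (trained on
    labeled data) for "crop 1 is better than crop 2".
    The classifier's decision is detached and used as a pseudo-label for the
    score ordering; this is one plausible instantiation, not the exact one.
    """
    pseudo = (pair_logit.detach() > 0).float()
    return F.binary_cross_entropy_with_logits(score1 - score2, pseudo)
```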
We conduct experiments on GAICD [57] and FCDB [4]
dataset. For unlabeled images, we use unlabeled test im-
ages, which falls into the scope of transductive learning.
Our major contributions can be summarized as:
• We design a novel spatial-aware feature to model the
spatial relationship between candidate crops and aes-
thetic elements.
• We propose to transfer ranking knowledge from la-
beled images to unlabeled images, and enforce ranking
consistency on unlabeled images.
• Our proposed method obtains the state-of-the-art per-
formance on benchmark datasets. |
Villa_PIVOT_Prompting_for_Video_Continual_Learning_CVPR_2023 | Abstract
Modern machine learning pipelines are limited due to
data availability, storage quotas, privacy regulations, and
expensive annotation processes. These constraints make it
difficult or impossible to train and update large-scale mod-
els on such dynamic annotated sets. Continual learning di-
rectly approaches this problem, with the ultimate goal of
devising methods where a deep neural network effectively
learns relevant patterns for new (unseen) classes, without
significantly altering its performance on previously learned
ones. In this paper, we address the problem of contin-
ual learning for video data. We introduce PIVOT, a novel
method that leverages extensive knowledge in pre-trained
models from the image domain, thereby reducing the num-
ber of trainable parameters and the associated forgetting.
Unlike previous methods, ours is the first approach that ef-
fectively uses prompting mechanisms for continual learning
without any in-domain pre-training. Our experiments show
that PIVOT improves state-of-the-art methods by a signifi-
cant 27% on the 20-task ActivityNet setup.
| 1. Introduction
Modern Deep Neural Networks (DNNs) are at the core
of many state-of-the-art methods in machine vision [10,
20, 21, 38, 44, 46] tasks. To achieve their remarkable
performance, most DNNs rely on large-scale pre-training
[11, 26, 36, 46], thereby enabling feature reuse in related
downstream tasks [4]. However, adapting and fine-tuning
a pre-trained DNN on a novel dataset commonly leads to
catastrophic forgetting [16]. This phenomenon explains
how the effectiveness of the fine-tuned DNNs drastically
reduces in the original training distribution, in favor of in-
creased performance on the downstream task.
This undesirable property has driven multiple research
efforts in the area of Continual Learning (CL) [7, 8, 18, 25,
Figure 1. Performance improvement by each PIVOT com-
ponent. We report the average accuracy on all tasks under the
10-task CIL on UCF101 and ActivityNet. We report the per-
formance of basic CLIP, and then gradually equip it with other
components: Spatial Prompting, Memory Buffer, Multi-modal
Contrastive Learning, Temporal Encoder, to finally reach our pro-
posed PIVOT method. The addition of each proposed component
generally boosts the performance on both datasets. Stars represent
the upper bound performance on each benchmark.
29], yielding techniques that enable fine-tuning a DNN on
a sequence of tasks while mitigating the performance drop
along the intermediate steps. One of the most challenging
scenarios for the study of CL is Class Incremental Learning
(CIL), where the labels and data are mutually exclusive be-
tween tasks, training data is available only for the current
task, and there are no task identifiers on the validation step
(i.etask boundaries are not available at test time). Such a
setup requires learning one model that, despite the continu-
ous adaptation to novel tasks, performs well on all the seen
classes. Recent advances in mitigating catastrophic forget-
ting rely on deploying episodic memory or regularization
techniques [2,8,25,37]. Nevertheless, most of this progress
has been directed toward analyzing the catastrophic forget-
ting of DNNs in the image domain. The works of [34,35,47]
introduced the CIL setup to the video domain, in particular
the action recognition task. Unlike its image counterpart,
video CIL requires careful modeling of the temporal infor-
mation, making it an even more challenging setup. Despite
the success in small-scale datasets (like UCF101), state-of-
the-art methods have shown limited effectiveness on more
challenging video test-beds built upon larger action tax-
onomies, such as Kinetics and ActivityNet [47].
Currently, state-of-the-art methods for video CIL rely on
temporal masking and feature distillation to mitigate catas-
trophic forgetting [34,47]. In this paper, we take inspiration
from recent advances in large-scale DNNs for zero-shot im-
age classification [36] and learnable prompts for continual
learning [50], and we propose a novel strategy for CIL in
the video domain. We show that a zero-shot baseline pre-
trained in the image domain already outperforms the best
CIL methods in the action recognition task *. Moreover,
we show that this baseline can be significantly improved
by enabling temporal reasoning and augmenting the modal-
ity encoders with a novel prompting strategy. As a con-
sequence, our proposed method, PIVOT, outperforms ev-
ery other baseline, setting a new state-of-the-art in all the
3 datasets included in the challenging vCLIMB benchmark
for video CIL [47]. Figure 1 summarizes the performance
improvements of our approach.
Notably and following the core ideas of the vCLIMB
[47] benchmark, PIVOT does not rely on any in-distribution
pre-training (a common feature of prompting methods for
CL [45,50]). Rather, it leverages the vast and general visual
knowledge contained in the CLIP visual encoder (trained
on massive amounts of paired static images and text) and
maps that knowledge into a feature space suitable for video
understanding in a continual learning setup.
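For reference, a zero-shot CLIP-style video baseline of the kind discussed above can be sketched as follows. The encoders are passed in as generic callables rather than a concrete CLIP API, and the simple frame averaging is exactly the part that PIVOT's temporal encoder and learned prompts are meant to improve on.

```python
import torch

@torch.no_grad()
def zero_shot_video_scores(frames, class_names, image_encoder, text_encoder):
    """Zero-shot action scores from a frozen image-text model.

    frames: (T, 3, H, W) preprocessed video frames; image_encoder and
    text_encoder are the frozen encoders (assumed callables for this sketch).
    The video embedding is a plain average of frame embeddings.
    """
    f = image_encoder(frames)                                  # (T, D) frame embeddings
    f = torch.nn.functional.normalize(f, dim=-1).mean(dim=0)
    f = torch.nn.functional.normalize(f, dim=-1)               # pooled video embedding
    t = torch.nn.functional.normalize(text_encoder(class_names), dim=-1)  # (K, D)
    return t @ f                                               # cosine similarity per class
```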
Contributions. This paper proposes PIVOT (Prompt-
Ing for Video cOnTinual learning), a novel strategy for con-
tinual learning in the video domains that leverages large-
scale pre-trained networks in the image domain. Our work
brings the following contributions: (i)We show that a multi-
modal classifier (Video-Text) mitigates catastrophic forget-
ting while greatly increasing the final average CIL accuracy.
(ii)We design the first prompt-based strategy for video CIL.
Our approach leverages image pre-training to significantly
mitigate forgetting when learning a sequence of video ac-
tion recognition tasks. (iii)We conduct extensive experi-
mental analysis to demonstrate the effectiveness of PIVOT.
Our results show that PIVOT outperforms state-of-the-art
methods in the challenging vCLIMB benchmark by 31%,
27%, and 17.2% in the 20-task setups of Kinetics, Activi-
tyNet, and UCF101, respectively.
|
Wang_Binary_Latent_Diffusion_CVPR_2023 | Abstract
In this paper, we show that a binary latent space can
be explored for compact yet expressive image representa-
tions. We model the bi-directional mappings between an
image and the corresponding latent binary representation
by training an auto-encoder with a Bernoulli encoding dis-
tribution. On the one hand, the binary latent space provides
a compact discrete image representation of which the distri-
bution can be modeled more efficiently than pixels or con-
tinuous latent representations. On the other hand, we now
represent each image patch as a binary vector instead of
an index of a learned cookbook as in discrete image repre-
sentations with vector quantization. In this way, we obtain
binary latent representations that allow for better image
quality and high-resolution image representations without
any multi-stage hierarchy in the latent space. In this binary
latent space, images can now be generated effectively us-
ing a binary latent diffusion model tailored specifically for
modeling the prior over the binary image representations.
We present both conditional and unconditional image gen-
eration experiments with multiple datasets, and show that
the proposed method performs comparably to state-of-the-
art methods while dramatically improving the sampling ef-
ficiency to as few as 16 steps without using any test-time
acceleration. The proposed framework can also be seam-
lessly scaled to 1024 ⇥1024 high-resolution image gener-
ation without resorting to latent hierarchy or multi-stage
refinements.
| 1. Introduction
The goal of modeling the image distribution that allows
the efficient generation of high-quality novel samples drives
the research of representation learning and generative mod-
els. Directly representing and generating images in the pixel
space stimulates various research such as generative adver-
sarial networks [ 2,7,18,29], flow models [ 11,34,43,47],
energy-based models [ 12,13,66,71], and diffusion mod-
els [24,41,54,55]. As the resolution grows, it becomes
increasingly difficult to accurately regress the pixel values.
And this challenge usually has to be addressed through hi-
Figure 1. Examples of generated images with different resolutions
using the proposed method. (Panels: FFHQ, CelebA-HQ, CelebA,
LSUN Churches, LSUN Bedrooms, ImageNet; 256x256 and 1024x1024.)
erarchical model architectures [ 29,72] or at a notably high
cost [ 24]. Moreover, while demonstrating outstanding gen-
erated image quality, GAN models suffer from issues in-
cluding insufficient mode coverage [ 38] and training insta-
bility [ 21].
Representing and generating images in a learned latent
space [ 33,42,49] provides a promising alternative. Latent
diffusion [ 49] performs denoising in latent feature space
with a lower dimension than the pixel space, therefore re-
ducing the cost of each denoising step. However, regress-
ing the real-value latent representations remains complex
and demands hundreds of diffusion steps. Variational auto-
encoders (V AEs) [ 23,33,48] generate images without any it-
erative steps. However, the static prior of the latent space re-
stricts the expressiveness, and can lead to posterior collapse.
To achieve higher flexibility of the latent distribution with-
out significantly increasing the modeling complexity, VQ-
V AE [ 61] introduces a vector-quantized latent space, where
each image is represented as a sequence of indexes, each of
which points to a vector in a learned codebook. The prior
over the vector-quantized representations is then modeled
by a trained sampler, which is usually parametrized as an
autoregressive model. The success of VQ-V AE stimulates a
series of works that model the discrete latent space of code-
book indexes with different models such as accelerated par-
allel autoregressive models [ 8] and multinomial diffusion
models [ 6,20]. VQ-based generative models demonstrate
surprising image synthesis performance and model cover-
age that is better than the more sophisticated methods like
GANs without suffering from issues like training instability.
However, the hard restriction of using one codebook index
to represent each image patch introduces a trade-off on the
codebook size, as a large enough codebook to cover more
image patterns will introduce an over-complex multinomial
latent distribution for the sampler to model.
In this research, we explore a compact yet expressive
representation of images in a binary latent space, where
each image patch is now represented as a binary vector, and
the prior over the discrete binary latent codes is effectively
modeled by our improved binary diffusion model tailored
for Bernoulli distribution. Specifically, the bi-directional
mappings between images and the binary representations
are modeled by a feed-forward autoencoder with a binary
latent space. Given an image, the encoder now outputs the
normalized parameters of a sequence of independently dis-
tributed Bernoulli variables, from which a binary represen-
tation of this image is sampled, and fed into the decoder to
reconstruct the image. The discrete sampling in Bernoulli
distribution does not naturally permit gradient propagation.
We find that a simple straight-through gradient copy [ 4,17]
is sufficient for high-quality image reconstruction while
maintaining high training efficiency.
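The straight-through trick mentioned here can be written in a few lines; this is a generic sketch of the estimator, not the paper's exact implementation.

```python
import torch

def bernoulli_straight_through(logits):
    """Sample a binary latent code but keep gradients flowing to the encoder.

    Forward pass: hard 0/1 samples from Bernoulli(sigmoid(logits)).
    Backward pass: gradients are copied through the Bernoulli probabilities,
    which is the simple straight-through gradient copy referred to above.
    """
    probs = torch.sigmoid(logits)
    hard = torch.bernoulli(probs)
    return hard + probs - probs.detach()   # hard values, soft gradients
```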
With images compactly represented in the binary latent
space, we then introduce how to generate novel samples
by modeling the prior over binary latent codes of images.
To overcome the shortcomings of many existing generative
models such as being uni-directional [ 46,61] and the non-
regrettable greedy sampling [ 6,8], we introduce binary la-
tent diffusion that generates the binary representations of
novel samples by a sequence of denoising starting from a
random Bernoulli distribution. Performing diffusion in a
binary latent space, modeled as Bernoulli distribution, re-
duces the need for precisely regressing the target values as
in Gaussian-based diffusion processes [ 24,49,55], and per-
mits sampling at a higher efficiency. We then introduce how
to progressively reparametrize the prediction targets at each
denoising step as the residual between the inputs and the de-
sired samples, and train the proposed binary latent diffusion
models to predict such ‘flipping probability’ for improved
training and sampling stability.
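A toy version of such a flip-probability sampler is sketched below; the model interface, the schedule, and any temperature or re-masking details are assumptions made for the sketch.

```python
import torch

@torch.no_grad()
def binary_denoise(x, model, timesteps):
    """Toy reverse process over binary codes.

    x: (B, N) random 0/1 codes; model(x, t) is assumed to return per-bit
    "flip" logits, i.e., how likely each bit of x should be flipped to move
    toward a clean sample (the residual reparameterization described above).
    """
    for t in reversed(range(timesteps)):
        flip_prob = torch.sigmoid(model(x, t))
        flips = torch.bernoulli(flip_prob)
        x = (x + flips) % 2          # XOR: apply the sampled bit flips
    return x
```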
We support our findings with both conditional and
unconditional image generation experiments on multiple
datasets. We show that our method can deliver remarkable
image generation quality and diversity with more compactlatent codes, larger image-to-latent resolution ratios, as well
as fewer sampling steps, and faster sampling speed. We
present some examples with different resolutions generated
by the proposed method in Figure 1.
We organize this paper as follows: Related works are
discussed in Section 2. In Section 3, we introduce binary
image representations by training an auto-encoder with a bi-
nary latent space. We then introduce in Section 4binary la-
tent diffusion, a diffusion model for multi-variate Bernoulli
distribution, and techniques tailored specifically for improv-
ing the training and sampling of binary latent diffusion. We
present both quantitative and qualitative experimental re-
sults in Section 5and conclude the paper in Section 6.
|
Wei_Physically_Adversarial_Infrared_Patches_With_Learnable_Shapes_and_Locations_CVPR_2023 | Abstract
Owing to the extensive application of infrared object de-
tectors in the safety-critical tasks, it is necessary to evaluate
their robustness against adversarial examples in the real
world. However, current few physical infrared attacks are
complicated to implement in practical application because
of their complex transformation from digital world to physi-
cal world. To address this issue, in this paper, we propose a
physically feasible infrared attack method called ”adversar-
ial infrared patches”. Considering the imaging mechanism
of infrared cameras by capturing objects’ thermal radiation,
adversarial infrared patches conduct attacks by attaching a
patch of thermal insulation materials on the target object to
manipulate its thermal distribution. To enhance adversarial
attacks, we present a novel aggregation regularization to
guide the simultaneous learning for the patch’ shape and
location on the target object. Thus, a simple gradient-based
optimization can be adapted to solve for them. We verify
adversarial infrared patches in different object detection
tasks with various object detectors. Experimental results
show that our method achieves more than 90% Attack Suc-
cess Rate (ASR) versus the pedestrian detector and vehicle
detector in the physical environment, where the objects are
captured in different angles, distances, postures, and scenes.
More importantly, adversarial infrared patch is easy to im-
plement, and it only needs 0.5 hours to be constructed in the
physical world, which verifies its effectiveness and efficiency.
| 1. Introduction
Deep Neural Networks (DNNs) have shown promising
performance in various vision tasks, including object de-
tection [14], classification [8], face recognition [16], and
autonomous driving [15]. However, it is typically known
that DNNs are vulnerable to adversarial examples [2, 5, 26],
i.e., the human-imperceptible perturbed inputs can fool the
DNNs-based system to give wrong predictions. Moreover,
*Corresponding author
Figure 1. The generation process of adversarial infrared patches.
We see the pedestrian cannot be detected after the infrared patches
are pasted on the pedestrian in the physical world.
these adversarial examples can be exploited in the physical
world. In such cases, a widely used technique is called ad-
versarial patches [1, 4, 24, 28], which have been successfully
applied to traffic sign detection by generating a carefully
designed sticker [4, 23], or face recognition by adding spe-
cific textures on eyeglass frames [17, 22]. The success of
adversarial patches raises the concerns because of their great
threat to the deployed DNN-based systems in the real world.
Nowadays, owing to its strong anti-interference ability
in severe environments, object detection in thermal
infrared images has been widely used in many safety-critical
tasks such as security surveillance [19], remote sensing [25],
etc. Consequently, it is necessary to evaluate the physical
adversarial robustness of infrared object detectors. However,
the aforementioned adversarial patches cannot work well in
the infrared images because they depend on the adversarial
perturbations generated from the view of RGB appearance.
These perturbations cannot be captured by infrared cameras,
which perform the imaging by encoding the objects’ ther-
mal radiation [20]. Although few recent works have been
proposed to address this issue, they have their own limita-
tions. For example, adversarial bulbs based on a cardboard
of alight small bulbs [30] are complicated to implement in
the real world, and are also not stealthy because they produce
heat source. Adversarial clothing [29] based on a large-scale
QR code improves the stealthiness by utilizing the thermal
insulation material to cover the object’s thermal radiation,
but still has complex transformations from digital to physical
world, making it not easy to implement in the real world.
Figure 2. The comparison between different infrared attacks.
In this paper, we propose a physically stealthy and easy-
to-implement infrared attack method called “adversarial in-
frared patches”. Considering the imaging mechanism of
infrared cameras by capturing objects’ thermal radiation,
we also attach the thermal insulation materials on the target
object to manipulate its thermal distribution to ensure the
stealthiness. But different from adversarial clothing [29]
via the complex QR code pattern, we utilize a simple patch
to crop the thermal insulation material, and then adjust the
patch’s shape and location on the target object to conduct
attacks. Compared with adversarial RGB perturbations, the
changes of shapes and locations of the thermal patch can
be accurately captured by infrared cameras, which helps
perform an effective attack.
However, as the shape and location are two different kinds of
variables, it is challenging to directly optimize them via
unified gradients. For that, we present a novel aggregation
regularization to guide the simultaneous learning for the
patch’ shape and location on the target object. Specifically,
an aggregation degree is proposed to quantify how close one
pixel’s neighbours are to being a clique. By combining this
metric with the attack goal, the object’s pixels needing to be
covered by the thermal insulation material will automatically
be gathered to form a valid shape on the available location of
the object. In this way, we can adapt a simple gradient-based
optimization to solve for the optimal shape and location of
the patch. An example of our adversarial infrared patch
against the pedestrian detector is shown in Figure 1, and
a comparison with the existing physical infrared attacks is
given in Figure 2. We see that adversarial infrared patch is
simpler than other methods in the digital world, and we just
need to crop the thermal insulation materials according to
the learned shape, and then paste the patch on the learned
location of the pedestrian, which is easier to implement
than other methods in the real world.
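To illustrate the flavor of an aggregation-style regularizer on a soft mask, here is a crude neighborhood-counting sketch. The paper's clique-based aggregation degree and its weighting against the attack loss are not specified here, so everything below is an assumption.

```python
import torch
import torch.nn.functional as F

def aggregation_degree(mask):
    """Crude aggregation measure for a soft patch mask.

    mask: (1, 1, H, W) values in [0, 1] over the target object. For every
    selected pixel we count how many of its 8 neighbours are also selected;
    maximizing this pushes scattered pixels to merge into a compact patch.
    """
    kernel = torch.ones(1, 1, 3, 3, device=mask.device)
    kernel[0, 0, 1, 1] = 0.0                        # exclude the pixel itself
    neighbour_sum = F.conv2d(mask, kernel, padding=1)
    return (mask * neighbour_sum / 8.0).sum() / mask.sum().clamp(min=1e-6)

# Hypothetical use as a penalty alongside the detector's attack loss, e.g.
#   total_loss = attack_loss - lambda_agg * aggregation_degree(mask)
```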
Our contributions can be summarized as follows:
•We propose the novel “adversarial infrared patches”,
a physically stealthy and easy-to-implement attack
method for infrared object detection. Instead of gen-
erating adversarial perturbations, we perform attacks
by learning the available shape and location on the tar-get object. Owing to this careful design, adversarial
infrared patches are easier to implement in the physical
world than the existing methods.
•We design a novel aggregation regularization to guide
the simultaneous learning for the patch’ shape and loca-
tion on the target object. Thus, a simple gradient-based
optimization can be adapted to solve for the optimal
shape and location of the patch.
•We verify the adversarial infrared patches in the pedes-
trian detection task from both the digital and physi-
cal world. Experiments show that adversarial infrared
patches can work well in various angles, distances, pos-
tures, and scenes, and achieve competitive attacking
performance with the SOTA infrared attack while only
costing five percent of their time to construct physical
adversarial examples. We also extend our method to
the vehicle detection task to verify its generalization.
|
Wang_Raw_Image_Reconstruction_With_Learned_Compact_Metadata_CVPR_2023 | Abstract
While raw images exhibit advantages over sRGB images
(e.g., linearity and fine-grained quantization level), they are
not widely used by common users due to the large stor-
age requirements. Very recent works propose to compress
raw images by designing the sampling masks in the raw
image pixel space, leading to suboptimal image represen-
tations and redundant metadata. In this paper, we propose
a novel framework to learn a compact representation in the
latent space serving as the metadata in an end-to-end man-
ner. Furthermore, we propose a novel sRGB-guided con-
text model with the improved entropy estimation strategies,
which leads to better reconstruction quality, smaller size
of metadata, and faster speed. We illustrate how the pro-
posed raw image compression scheme can adaptively al-
locate more bits to image regions that are important from
a global perspective. The experimental results show that
the proposed method can achieve superior raw image re-
construction results using a smaller size of the metadata
on both uncompressed sRGB images and JPEG images.
The code will be released at https://github.com/
wyf0912/R2LCM .
| 1. Introduction
As an unprocessed and uncompressed data format directly obtained from camera sensors, raw images have unique advantages for computer vision tasks in practice. For example, it is easier to model the distribution of real image noise in raw space, which enables generalized deep real denoising networks [1, 40]. As pixel values in raw images have a linear relationship with scene radiance, they make it easier to recover shadows and highlights without bringing in the grainy noise usually associated with high ISO [12, 16, 33, 35], which greatly contributes to low-light image enhancement. Besides, with richer colors, raw images offer more room for correction and artistic manipulation.
[Figure 1: (a) previous SOTA methods sample in the raw space [26, 30]; labels include UNet, ISP, sampling mask, sampled raw pixels, sRGB, raw decoder, and reconstructed raw. (b) The proposed method samples in the latent space; labels include encoder, learned compact feature, learned entropy coding, bitstream, decoder, sRGB, and reconstructed raw.]
Figure 1. The comparison between the previous SOTA methods (in blue) and our proposed method (in green). Different from the previous work, where the sampling strategy is hand-crafted or learned by a pre-defined sampling loss, we learn the sampling and reconstruction process in a unified end-to-end manner. In addition, the sampling of previous works is in the raw pixel space, which in fact still includes a large amount of spatial redundancy and precision redundancy. Instead, we conduct sampling in the feature space, and more compact metadata is obtained for pixels in the feature space via the adaptive allocation. The saved metadata is annotated in the dashed box.
Despite these merits, raw images are not widely
adopted by common users due to large file sizes. In addi-
tion, since raw images are unprocessed, additional post pro-
cessing steps, e.g., demosaicing and denoising, are always
needed before displaying them. For fast image rendering in
practice, a JPEG copy is usually saved along with its raw data [2]. To improve storage efficiency, the raw-image reconstruction problem has attracted more and more atten-
tion, i.e., how to minimize the amount of metadata required
for de-rendering sRGB images back to raw space. Classic
metadata-based raw image reconstruction methods model
the workflow of image signal processing (ISP) pipeline and
save the required parameters in ISP as metadata [27]. To
further reduce the storage and computational complexity to-
wards a lightweight and flexible reverse ISP reconstruction,
[Figure 2 panels: (a) simplified ISP adopted from [21], consisting of dynamic range clipping, quantization, and non-linear mapping (both globally and locally); (b) processed raw; (c) quantized sRGB; (d) quantization error map.]
Figure 2. An illustration of the information loss caused by the ISP. (a) A simplified ISP suffers from information loss caused by nonlinear transformations. (b) Raw image after processing, shown to better display the details. (c) Quantized sRGB image after the ISP, which suffers information loss, e.g., in the red bounding box area. (d) The quantization error map. As we can see from the above figures, the information loss caused by quantization is non-uniformly distributed in both over-exposed and normally-exposed areas.
very recent methods focus on sparse sampling of raw image
pixels [26, 30]. Specifically, in [30], a uniform sampling
strategy is proposed to combine with an interpolation al-
gorithm that solves systems of linear equations. The work
in [26] proposes a sampling network and approximates the
reconstruction process by deep learning to further improve
the sampling strategy.
Though lots of progress has been made, existing sparse
sampling based raw image reconstruction methods still face
limitations in terms of coding efficiency and image recon-
struction quality. Specifically, the bit allocation should be
adaptive and globally optimized for the image contents,
given the non-linear transformation and quantization steps
in ISP as shown in Fig. 2. For example, the smooth regions
of an image can be well reconstructed with much sparser
samples, compared to the texture-rich regions, which deserve denser sampling. In contrast, in existing practices,
even for the state-of-the-art method [26] where the sampling
is enforced to be locally non-uniform, it is still almost uni-
form from the global perspective, which causes metadata
redundancy and limits the reconstruction performance. In
addition, very recent works [26, 30] sample in a fixed sam-
pling space, i.e., raw image space, with a fixed bit depth of
sampled pixels, leading to limited representation ability and
precision redundancy.
To address the above issues, instead of adopting a pre-
defined sampling strategy or sampling loss, e.g., super-pixel
loss [37], we propose a novel end-to-end learned raw im-
age reconstruction framework based on encoded latent fea-
tures. Specifically, the latent features are obtained by mini-
mizing the reconstruction loss and its bitstream cost simul-
taneously. To further improve the rate-distortion perfor-
mance, we propose an sRGB-guided context model based
on a learnable order prediction network. Different from the
commonly used auto-regressive models [9, 24], which encode/decode the latent features pixel-by-pixel in a sequential way, the proposed sRGB-guided context requires much fewer steps (a reduction of more than 10^6-fold) with the aid of
a learned order mask, which makes the computational cost
feasible while maintaining comparable performance. Fig. 1
compares the proposed raw image reconstruction method
with the previous strategies [9, 24].
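As a rough illustration of the end-to-end objective described above (reconstruction loss plus bitstream cost minimized jointly), consider the sketch below; the layer sizes, the straight-through quantization, and the factorized Gaussian entropy model are simplifying assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMetadataCodec(nn.Module):
    """Toy end-to-end codec: encode the raw image into a compact latent (the metadata),
    estimate its bit cost with an entropy model, and decode the raw image back with
    the sRGB image as guidance."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(4, ch, 5, stride=4, padding=2), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1))
        self.decoder = nn.Sequential(nn.Conv2d(ch + 3, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, 4 * 16, 3, padding=1), nn.PixelShuffle(4))
        self.log_scale = nn.Parameter(torch.zeros(1, ch, 1, 1))   # per-channel prior scale

    def forward(self, raw, srgb):                  # raw: (B, 4, H, W) packed Bayer, srgb: (B, 3, H, W)
        y = self.encoder(raw)
        y_hat = y + (torch.round(y) - y).detach()  # straight-through quantization
        guide = F.interpolate(srgb, size=y_hat.shape[-2:], mode='bilinear', align_corners=False)
        recon = self.decoder(torch.cat([y_hat, guide], dim=1))
        prior = torch.distributions.Normal(0.0, self.log_scale.exp())
        p = (prior.cdf(y_hat + 0.5) - prior.cdf(y_hat - 0.5)).clamp(min=1e-9)
        bits = -torch.log2(p).sum(dim=(1, 2, 3))   # estimated metadata size per image
        return recon, bits

# Rate-distortion objective: distortion plus weighted bitstream cost, e.g.
# loss = F.l1_loss(recon, raw) + lam * bits.mean() / (raw.shape[-2] * raw.shape[-1])
```

The explicit bit estimate is what allows training to adaptively allocate more metadata to regions that are hard to reconstruct from the sRGB image alone.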
Our contributions are summarized as follows,
1. We propose the first end-to-end deep encoding frame-
work for raw image reconstruction, by fully optimizing
the use of stored metadata.
2. A novel sRGB-guided context model is proposed by
introducing two improved entropy estimation strate-
gies, which leads to better reconstruction quality,
smaller size of metadata, and faster speed.
3. We evaluate our method over popular raw image
datasets. The experimental results demonstrate that
we can achieve better reconstruction quality with less
metadata required, compared with SOTA methods.
|
Xiao_Level-S2fM_Structure_From_Motion_on_Neural_Level_Set_of_Implicit_CVPR_2023 | Abstract
This paper presents a neural incremental Structure-
from-Motion (SfM) approach, Level-S2fM, which estimates
the camera poses and scene geometry from a set of uncali-
brated images by learning coordinate MLPs for the implicit
surfaces and the radiance fields from the established key-
point correspondences. Our novel formulation poses some
new challenges due to inevitable two-view and few-view
configurations in the incremental SfM pipeline, which com-
plicates the optimization of coordinate MLPs for volumetric
neural rendering with unknown camera poses. Neverthe-
less, we demonstrate that the strong inductive basis convey-
ing in the 2D correspondences is promising to tackle those
challenges by exploiting the relationship between the ray
sampling schemes. Based on this, we revisit the pipeline
of incremental SfM and renew the key components, includ-
ing two-view geometry initialization, the camera poses reg-
istration, the 3D points triangulation, and Bundle Adjust-
ment, with a fresh perspective based on neural implicit sur-
faces. By unifying the scene geometry in small MLP net-
works through coordinate MLPs, our Level-S2fM treats the
zero-level set of the implicit surface as an informative top-
down regularization to manage the reconstructed 3D points,
reject the outliers in correspondences via querying SDF ,
and refine the estimated geometries by NBA (Neural BA).
Not only does our Level-S2fM lead to promising results on
camera pose estimation and scene geometry reconstruction,
but it also shows a promising way for neural implicit ren-
dering without knowing camera extrinsics beforehand.
| 1. Introduction
Structure-from-Motion (SfM) is a fundamental 3D vi-
sion problem that aims at reconstructing 3D scenes and
estimating the camera motions from a set of uncalibrated
images. As a long-standing problem, there have been a
tremendous of studies that are mostly established on the
keypoint correspondences across viewpoints and the theo-
retical findings of Multi-View Geometry (MVG) [11], and
[Figure 1 diagram: a neural network parameterizes level sets Φ(x, θ); incremental reconstruction is performed on the neural level sets (zero-level set; discrete points with feature tracks driving the optimization); SIFT matches + RANSAC provide correspondences; 3D outliers caused by mismatches are rejected by SDF projection.]
Figure 1. SfM calculations on neural level sets. We learn to perform geometric calculations, including triangulation, PnP, and Bundle Adjustment, on neural level sets, which readily helps to reject outliers in the matches, especially in texture-repeated scenes. Also, due to the continuous surface priors of neural level sets, we achieve better pose estimation accuracy, and our reconstructed points stick to the surface (painted with color in the figure), whereas many outlier 3D points reconstructed by COLMAP [32] are painted black.
have formed three representative pipelines of Incremental
SfM [32], Global SfM [4, 43], and Hybrid SfM [5].
In this paper, we focus on the incremental pipeline of
SfM and we will use SfM to refer to the incremental SfM.
Given an unordered image set, an SfM system initializes
the computation by a pair of images that are with well-
conditioned keypoint correspondences to yield an initial set
of feature tracks, then incrementally adds new views one
by one to estimate the camera pose from the 2D-3D point
correspondences and update the feature tracks with new
matches. Because the feature tracks are generated by group-
ing the putative 2D correspondences across viewpoints in
bottom-up manners, they would be ineffective or inaccurate
to represent holistic information of scenes. Accordingly,
Bundle Adjustment (BA) is necessary to jointly optimize
the camera poses and 3D points in a top-down manner. The
success of BA indicates that a global perspective is vital
for accurate 3D reconstruction; however, their input feature
tracks are the bottom-up cues without enough holistic con-
straints for the 3D scenes. To this end, we study to integrate
the top-down information into the SfM system by propos-
ing a novel Level-S2fM. Fig. 1 illustrates a representative
case for the classic SfM systems that generate more flying
3D scene points, which can be addressed by our method.
Our Level-S2fM is inspired by the recently-emerged
neural implicit surface that could manage all 3D scene
points as the zero-level set of the signed distance function
(SDF). Because the neural implicit surfaces can be param-
eterized by Multi-Layer Perceptrons (MLPs), it could be
viewed as a kind of top-down information of 3D scenes and
is of great potential for accurate 3D reconstruction. How-
ever, because both the 3D scene and camera poses are to be
determined, it poses a challenging problem:
How can we optimize a neural SDF (or other neu-
ral fields such as NeRF) from only a set of uncal-
ibrated images without any 3D information?
Most recently, the above problem was partially answered
in BARF [18] and NeRF-- [42] that relaxed the requirement
of optimizing Neural Radiance Fields [24] without know-
ing accurate camera poses, but they can only handle the un-
known pose configurations in small-scale face-forwarding
scenes. Moreover, when we confine the problem in the in-
cremental SfM pipelines, it would be more challenging as
we need to optimize the neural fields with only two over-
lapped images at the initialization stage. To this end, we
found that the optimization of neural SDF can be accom-
plished by the 2D matches at the initialization stage, and
facilitate the management of feature tracks by querying the
3D points and tracing the 2D keypoints in a holistic way.
As shown in Fig. 1, we define a neural network that pa-
rameterizes an SDF as the unified representation for the
underdetermined 3D scene and accomplishes the computa-
tions of PnP for camera pose intersection, the 3D points tri-
angulation as well as the geometry refinement on the param-
eterized SDF. In the initialization stage with a pair of over-
lapped images, Level-S2fM uses the differentiable sphere
tracing algorithm [19] to attain the corresponding 3d points
of the keypoints and calculate the reprojection error to drive
the joint optimization. For the traced 3d points with small
SDF values and 2D reprojection errors for its feature track,
they are added into a dynamic point set and take the point
set with feature tracks as the Lagrangian representation for
the level sets. Because the pose estimation and the scene
points reconstruction are sequentially estimated, the estima-
tion errors will be accumulated. To this end, we present an
NBA (i.e., Neural Bundle Adjustment) that plays a similar role to Bundle Adjustment, but it optimizes the implicit
surface and camera poses from the explicit flow of points
by the energy function of the reprojection errors, which can
be viewed as an evolutionary step between Lagrangian and
Eulerian representations as discussed in [23].
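A simplified sketch of the core computation, i.e., tracing keypoint rays onto the zero-level set and scoring them by reprojection error, is given below; the fixed-step tracer and the pinhole helpers are illustrative stand-ins for the differentiable sphere tracing used in the paper, and a world-to-camera pose convention is assumed.

```python
import torch

def pixels_to_rays(K, pose, kp):
    """Back-project pixel keypoints (N, 2) into world-space ray origins and directions."""
    R, t = pose[:3, :3], pose[:3, 3]                                  # world-to-camera
    pix = torch.cat([kp, torch.ones_like(kp[:, :1])], dim=1)
    dirs_cam = (torch.linalg.inv(K) @ pix.T).T
    dirs = torch.nn.functional.normalize(dirs_cam @ R, dim=-1)        # rotate into world frame
    origin = (-R.T @ t).expand(kp.shape[0], 3)                        # camera centre in world frame
    return origin, dirs

def project(K, pose, X):
    """Project world points (N, 3) into pixel coordinates (N, 2) of the given view."""
    Xc = (pose[:3, :3] @ X.T).T + pose[:3, 3]
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)

def sphere_trace(sdf, ray_o, ray_d, n_steps=32):
    """March each ray forward by the signed distance until it reaches the zero-level set."""
    t = torch.zeros(ray_o.shape[0], device=ray_o.device)
    for _ in range(n_steps):
        t = t + sdf(ray_o + t[:, None] * ray_d)
    return ray_o + t[:, None] * ray_d

def reprojection_loss(sdf, K, pose_i, pose_j, kp_i, kp_j):
    """Trace keypoints of view i onto the implicit surface and penalise the 2D error in view j."""
    ray_o, ray_d = pixels_to_rays(K, pose_i, kp_i)
    X = sphere_trace(sdf, ray_o, ray_d)
    return (project(K, pose_j, X) - kp_j).norm(dim=-1).mean()
```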
In the experiments, we evaluate our Level-S2fM on a va-
riety of scenes from the BlendedMVS [45], DTU [14], and
ETH3D [34] datasets. On the BlendedMVS dataset, our
proposed Level-S2fM clearly outperforms the state-of-the-
art COLMAP [32] by significant margins. On the DTU and
ETH3D datasets [14, 34], our method also obtains on-par
performance with COLMAP for both camera pose estima-
tion and dense surface reconstruction, which are all com-
puted in one stage.
The contributions of this paper are twofold:
• We present a novel neural SfM approach Level-S2fM,
which formulates to optimize the coordinate MLP net-
works for implicit surface and radiance field and esti-
mate the camera poses and scene geometry. To the best
of our knowledge, our Level-S2fM is the first implicit
neural SfM solution on the zero-level set of surfaces.
• From the perspective of neural implicit fields learning,
we show that the challenging problems of two-view
and few-view optimization of neural implicit fields can
be addressed by exploiting the inductive biases con-
veyed in the 2D correspondences. Besides, our method
presents a promising way for neural implicit rendering
without knowing camera extrinsics beforehand.
|
Xu_High-Fidelity_Generalized_Emotional_Talking_Face_Generation_With_Multi-Modal_Emotion_Space_CVPR_2023 | Abstract
Recently, emotional talking face generation has received
considerable attention. However, existing methods only
adopt one-hot coding, image, or audio as emotion condi-
tions, thus lacking flexible control in practical applications
and failing to handle unseen emotion styles due to limited
semantics. They either ignore the one-shot setting or the
quality of generated faces. In this paper, we propose a more
flexible and generalized framework. Specifically, we supple-
ment the emotion style in text prompts and use an Aligned
Multi-modal Emotion encoder to embed the text, image, and
audio emotion modality into a unified space, which inher-
its rich semantic prior from CLIP . Consequently, effective
multi-modal emotion space learning helps our method sup-
port arbitrary emotion modality during testing and could
generalize to unseen emotion styles. Besides, an Emotion-
aware Audio-to-3DMM Convertor is proposed to connect
the emotion condition and the audio sequence to struc-
tural representation. A followed style-based High-fidelity
Emotional Face generator is designed to generate arbitrary
high-resolution realistic identities. Our texture generator
hierarchically learns flow fields and animated faces in a
residual manner. Extensive experiments demonstrate the
flexibility and generalization of our method in emotion con-
trol and the effectiveness of high-quality face synthesis.
| 1. Introduction
Talking face generation [13,38,46,58] is the task of driv-
ing a static portrait with given audio. Recently, many works
have tried to solve the challenges of maintaining lip move-
ments synchronized with input speech contents and syn-
thesizing natural facial motion simultaneously. However,
most researchers ignore a more challenging task, emotional
audio-driven talking face generation, which is critical for
creating vivid talking faces.
Some works have achieved significant progress in solv-
ing the above task conditioned on emotion embedding.
However, three critical issues remain: 1)
How to explore a more semantic emotion embedding to
achieve better generalization for unseen emotions . Early
efforts [41,47,55] adopt the one-hot vector to indicate emo-
tion category, which could only cover the pre-defined la-
bel and lacks semantic cues. Subsequently, EVP [19] dis-
entangles emotion embedding from the audio, while GC-
AVT [23] and EAMM [18] drive emotion by visual im-
ages. However, tailored audio- and image-based emotion
encoders show limited semantics and also struggle to handle
unseen emotion styles. 2) Could we construct multi-modal
emotion sources into a unified feature space to allow a more
flexible and user-friendly emotion control. Existing meth-
ods only support one specific modality as the emotion con-
dition, while the desired modality is usually not available in
practical applications. 3) How to design a high-resolution
identity-generalized generator . Early works [19, 41, 47] are
in identity-specific design, while recent works [18,23] have
started to enable one-shot emotional talking face genera-
tion. However, as shown in Fig. 1(a), GC-AVT and EAMM
fail to produce high-resolution faces due to the inevitable
information loss in face embedding and the challenge of es-
timating accurate high-resolution flow fields, respectively.
To address the aforementioned challenges, we first sup-
plement the emotion styles in the text prompt inspired by
the zero-shot CLIP-guided image manipulation [29,39,43],
which could inherit rich semantic knowledge and conve-
nient interaction after being encoded. As shown in Fig. 1(b),
unseen emotions, e.g.,Satisfied , could be flexibly speci-
fied using the text description and precisely reflected on
the source face. Furthermore, to achieve alignment among
multi-modal emotion features, we introduce an Aligned
Multi-modal Emotion (AME) encoder to unify the text, im-
[Figure 1 comparison chart: GC-AVT (CVPR'22) is single-modal, identity-generalized, uses facial embeddings at 256 resolution; EAMM (TOG'22) is single-modal, identity-generalized, uses flow and facial structure at 256; Ours is multi-modal in the CLIP domain, identity-generalized, uses flow and facial structure at 512. Panel (b) shows output faces for a source input, emotion inputs (text, image, and audio, e.g., "Satisfied", Happy, Surprised), and two identities.]
Source Input
Output Face
(a) (b)Figure 1. (a) An illustrative comparison of GC-A VT [23], EAMM [18], and our approach. First, our method supports multi-modal emotion
cues as input. As shown in (b), given a source face, an audio sequence, and diverse emotion conditions, our results fulfill synchronized lip
movements with the speech content and emotional face with the desired style. Besides, benefiting from the effective multi-modal emotion
space and rich semantics of CLIP, our method could generalize to unseen style marked in Red. Second, the hierarchical style-based
generator with coarse-to-fine facial deformation learning helps us generalize to unseen faces in high resolution and provides more realistic
details and precise emotion than GC-A VT and EAMM. Images are from the official attached results or released codes for fair comparisons.
age, and audio emotion modality into the same domain, thus
supporting flexible emotion control by multi-modal inputs,
as depicted in Fig. 1(b). In particular, the fixed CLIP text
and image encoders are leveraged to extract their embed-
ding and a learned CLIP audio encoder guided by several
losses to find the proper emotion representation of the given
audio sequence in CLIP space.
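A minimal sketch of how an audio emotion embedding could be pulled into the frozen CLIP space is shown below; the module and loss are illustrative assumptions, not the paper's actual AME implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEmotionEncoder(nn.Module):
    """Toy audio branch that maps an audio feature sequence into the CLIP embedding space."""
    def __init__(self, in_dim=80, clip_dim=512):
        super().__init__()
        self.rnn = nn.GRU(in_dim, 256, batch_first=True)
        self.proj = nn.Linear(256, clip_dim)

    def forward(self, audio_feats):          # (B, T, in_dim), e.g. mel-spectrogram frames
        _, h = self.rnn(audio_feats)
        return F.normalize(self.proj(h[-1]), dim=-1)

def alignment_loss(audio_emb, clip_text_emb, clip_image_emb):
    """Pull the audio emotion embedding towards the frozen CLIP text and image
    embeddings of the same emotion (cosine distance)."""
    return (1 - F.cosine_similarity(audio_emb, clip_text_emb, dim=-1)).mean() + \
           (1 - F.cosine_similarity(audio_emb, clip_image_emb, dim=-1)).mean()
```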
To this end, we follow the previous talking face gener-
ation methods [34] that rely on intermediate structural in-
formation such as 3DMM, and propose an Emotion-aware
Audio-to-3DMM Convertor (EAC), to distill the rich emo-
tional semantics from AME and project them to the facial
structure. Specifically, we employ the Transformer [40]
to capture the longer-term audio context and sufficiently
learn correlated audio-emotion features for expression co-
efficient prediction, which involves precise facial emotion
and synchronized lip movement. Notably, a learned inten-
sity token is extended to control the emotion intensity con-
tinuously. Furthermore, to generate high-resolution realis-
tic faces, we propose a coarse-to-fine style-based identity-
generalized model, High-fidelity Emotional Face (HEF)
generator, which integrates appearance features, geometry
information, and a style code within an elegant design. As
shown in Fig. 1(a), unlike the EAMM that predicts the flow
field at a single resolution by an isolated process, we hier-
archically perform the flow estimation in a residual manner
and incorporate it with texture refinement for efficiency.
In summary, we make the following three contributions:
• We propose a novel AME that provides a unified multi-
modal semantic-rich emotion space, allowing flexibleemotion control and unseen emotion generalization,
which is the first attempt in this field.
• We propose a novel HEF to hierarchically learn the
facial deformation by sufficiently modeling the inter-
action among emotion, source appearance, and drive
geometry for the high-resolution one-shot generation.
• Abundant experiments are conducted to demonstrate
the superiority of our method for flexible and gener-
alized emotion control, and high-resolution one-shot
talking face animation over SOTA methods.
|
Wu_DropMAE_Masked_Autoencoders_With_Spatial-Attention_Dropout_for_Tracking_Tasks_CVPR_2023 | Abstract
In this paper, we study masked autoencoder (MAE) pre-
training on videos for matching-based downstream tasks,
including visual object tracking (VOT) and video object seg-
mentation (VOS). A simple extension of MAE is to randomly
mask out frame patches in videos and reconstruct the frame
pixels. However, we find that this simple baseline heav-
ily relies on spatial cues while ignoring temporal relations
for frame reconstruction, thus leading to sub-optimal tem-
poral matching representations for VOT and VOS. To al-
leviate this problem, we propose DropMAE, which adap-
tively performs spatial-attention dropout in the frame re-
construction to facilitate temporal correspondence learning
in videos. We show that our DropMAE is a strong and effi-
cient temporal matching learner, which achieves better fine-
tuning results on matching-based tasks than the ImageNet-
based MAE with 2×faster pre-training speed. Moreover,
we also find that motion diversity in pre-training videos
is more important than scene diversity for improving the
performance on VOT and VOS. Our pre-trained DropMAE
model can be directly loaded in existing ViT-based track-
ers for fine-tuning without further modifications. Notably,
DropMAE sets new state-of-the-art performance on 8 out
of 9 highly competitive video tracking and segmentation
datasets. Our code and pre-trained models are available at
https://github.com/jimmy-dq/DropMAE.git .
| 1. Introduction
Recently, transformers have achieved enormous success
in many research areas, such as natural language processing
(NLP) [6, 22], computer vision [97] and audio generation
[43, 73]. In NLP, masked autoencoding is commonly used
to train large-scale generalizable NLP transformers contain-
[Figure 1 plot: success rate (%) vs. average overlap (%) on GOT-10k for DropTrack (ours) and recent trackers such as OSTrack, SwinTrack, MixFormer, AiATrack, SimTrack, Stark, TransT, AutoMatch, SiamR-CNN, Ocean, and DiMP.]
SiamR-CNNOcean
DiMPFigure 1. State-of-the-art comparison on GOT-10k [39] following
the one-shot protocol. Our DropTrack with the proposed Drop-
MAE pre-training achieves state-of-the-art performance without
using complicated pipelines such as online updating.
ing billions of parameters. Inspired by the great success of
self-supervised learning in NLP, recent advances [36, 89]
in computer vision suggest that training large-scale vision
transformers may undertake a similar trajectory with NLP.
The seminal work MAE [36] reconstructs the input image
from a small portion of patches. The learned representa-
tion in this masked autoencoder has been demonstrated to
be effective in many computer vision tasks, such as image
classification, object detection and semantic segmentation.
In video object tracking (VOT), recently two works,
SimTrack [10] and OSTrack [97], explore using an MAE
pre-trained ViT model as the tracking backbone. Notably,
these two trackers achieve state-of-the-art performance on
existing tracking benchmarks without using complicated
tracking pipelines. The key to their success is the robust
pre-training weights learned by MAE on ImageNet [68]. In
addition, [10, 97] also demonstrate that, for VOT, MAE un-
supervised pre-training on ImageNet is more effective than
supervised pre-training using class labels – this is mainly
because MAE pre-training learns more fine-grained local
[Figure 2 panels: attention maps of the TwinMAE baseline and DropMAE at layers 4, 6, and 8 for an input frame pair; color scale from low to high attention.]
DropMAE(Layer-8)
Figure 2. Visualization of the attention maps of the TwinMAE
baseline and our DropMAE in the reconstruction of a random
masked patch, which is denoted as a red bounding box in the left
input frame. TwinMAE leverages the spatial cues (within the same
frame) more than temporal cues (between frames) for reconstruc-
tion. Our proposed DropMAE improves the baseline by effec-
tively alleviating co-adaptation between spatial cues in the recon-
struction, focusing more on temporal cues, thus achieving better
learning of temporal correspondences for VOT and VOS.
structures that are useful for accurate target localization re-
quired for VOT, whereas supervised training learns high-
level class-related features that are invariant over appear-
ance changes. Despite the promising performance achieved
by [10,97], the MAE pre-training on ImageNet could still be
sub-optimal for the tracking task due to the natural gap be-
tween images and videos, i.e., no prior temporal correspon-
dence information can be learned in static images. How-
ever, previous tracking methods [3, 42, 83] have shown that
temporal correspondence learning is the key in developing
a robust and discriminative tracker. Thus there is an oppor-
tunity to further develop the MAE framework specifically
for matching-base video tasks, such as VOT and VOS.
One simple way to extend MAE to videos is to ran-
domly mask out frame patches in a video clip (i.e., video
frame pairs) and then reconstruct the video clip. We de-
note this simple baseline as twin MAE (TwinMAE). Given
a masked patch query, as illustrated in Figs. 2 & 4, we
find that TwinMAE heavily relies on the spatially neigh-
bouring patches within the same frame to reconstruct the
masked patch, which implies a heavy co-adaptation of spa-
tial cues (within-frame tokens) for reconstruction and may
cause learning of sub-optimal temporal representations for
matching-based downstream tasks like video object track-
ing and segmentation.
To address this issue with the TwinMAE baseline, we
propose DropMAE specifically designed for pre-training a
masked autoencoder for matching-based video downstream
tasks (e.g., VOT and VOS). Our DropMAE adaptively per-
forms spatial-attention dropout to break up co-adaptation
between spatial cues (within-frame tokens) during the frame
reconstruction, thus encouraging more temporal interac-
tions and facilitating temporal correspondence learning in the pre-training stage. Interestingly, we obtain several im-
portant findings with DropMAE: 1) DropMAE is an effec-
tive and efficient temporal correspondence learner, which
achieves better fine-tuning results on matching-based tasks
than the ImageNet-based MAE with 2×faster pre-training
speed. 2) Motion diversity in pre-training videos is more
important than scene diversity for improving the perfor-
mance on VOT and VOS.
We conduct downstream task evaluation on 9 competi-
tive VOT and VOS benchmarks, achieving state-of-the-art
performance on these benchmarks. In particular, our track-
ers with DropMAE pre-training obtain 75.9% AO on GOT-
10k, 52.7% AUC on LaSOT ext, 56.9% AUC on TNL2K
and 92.1%/83.0% J&Fscores on DA VIS-16/17, w/o us-
ing complicated online updating or memory mechanisms.
In summary, the main contributions of our work are:
• To the best of our knowledge, we are the first to investi-
gate masked autoencoder video pre-training for tempo-
ral matching-based downstream tasks. Specifically, we
explore various video data sources for pre-training and
build a TwinMAE baseline to study its effectiveness on
temporal matching tasks. Since none exists, we further
build a ViT-based VOS baseline for fine-tuning.
• We propose DropMAE, which adaptively performs
spatial-attention dropout in the frame reconstruction
to facilitate effective temporal correspondence learn-
ing in videos.
• Our trackers with DropMAE pre-training sets new
state-of-the-art performance on 8 out of 9 highly com-
petitive video tracking and segmentation benchmarks
without complicated tracking pipelines.
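For intuition, a minimal sketch of spatial-attention dropout in a joint two-frame attention is given below; it drops within-frame attention entries with a uniform probability, whereas DropMAE performs this dropout adaptively, so the snippet only illustrates the mechanism.

```python
import torch
import torch.nn.functional as F

def spatial_attention_dropout(attn_logits, frame_ids, p=0.1):
    """Randomly suppress within-frame (spatial) attention so that masked-patch
    reconstruction must rely more on tokens from the other frame.

    attn_logits: (B, heads, N, N) pre-softmax attention scores over all tokens
                 of a two-frame clip.
    frame_ids:   (N,) tensor with 0/1 indicating which frame each token belongs to.
    p:           probability of dropping each within-frame attention entry.
    """
    same_frame = frame_ids[None, :] == frame_ids[:, None]            # (N, N) spatial pairs
    drop = (torch.rand_like(attn_logits) < p) & same_frame           # only drop spatial entries
    attn_logits = attn_logits.masked_fill(drop, float('-inf'))
    return F.softmax(attn_logits, dim=-1)
```

Because cross-frame entries are never dropped, every query still has valid attention targets, and the extra reliance on the other frame is what encourages temporal correspondence learning.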
|
Wang_InternImage_Exploring_Large-Scale_Vision_Foundation_Models_With_Deformable_Convolutions_CVPR_2023 | Abstract
Compared to the great progress of large-scale vision
transformers (ViTs) in recent years, large-scale models
based on convolutional neural networks (CNNs) are still
in an early state. This work presents a new large-scale
CNN-based foundation model, termed InternImage, which
can obtain the gain from increasing parameters and train-
ing data like ViTs. Different from the recent CNNs that focus
on large dense kernels, InternImage takes deformable con-
volution as the core operator, so that our model not only
has the large effective receptive field required for down-
stream tasks such as detection and segmentation, but also
has the adaptive spatial aggregation conditioned by input
and task information. As a result, the proposed InternIm-
age reduces the strict inductive bias of traditional CNNs
and makes it possible to learn stronger and more robust
patterns with large-scale parameters from massive data like
ViTs. The effectiveness of our model is proven on challeng-
ing benchmarks including ImageNet, COCO, and ADE20K.
It is worth mentioning that InternImage-H achieved a new
record 65.4 mAP on COCO test-dev and 62.9 mIoU on
ADE20K, outperforming current leading CNNs and ViTs.
| 1. Introduction
With the remarkable success of transformers in large-
scale language models [3–8], vision transformers (ViTs) [2,
9–15] have also swept the computer vision field and are
becoming the primary choice for the research and prac-
tice of large-scale vision foundation models. Some pio-
neers [16–20] have made attempts to extend ViTs to very
large models with over a billion parameters, beating convo-
lutional neural networks (CNNs) and significantly pushing
the performance bound for a wide range of computer vision
tasks, including basic classification, detection, and segmen-
[Figure 1 table: (a) global attention: long-range dependence ✓, adaptive spatial aggregation ✓, computation/memory efficient ✗; (b) local attention: ✗, ✓, ✓; (c) large kernel: ✓, ✗, ✓; (d) dynamic sparse kernel (ours): ✓, ✓, ✓. Response pixels with fixed vs. adaptive weights are illustrated around the query pixels.]
Figure 1. Comparisons of different core operators. (a) shows
the global aggregation of multi-head self-attention (MHSA) [1],
whose computational and memory costs are expensive in down-
stream tasks that require high-resolution inputs. (b) limits the
range of MHSA into a local window [2] to reduce the cost. (c)
is a depth-wise convolution with very large kernels to model long-
range dependencies. (d) is a deformable convolution, which shares
similar favorable properties with MHSA and is efficient enough
for large-scale models. We start from it to build a large-scale CNN.
tation. While these results suggest that CNNs are inferior
to ViTs in the era of massive parameters and data, we ar-
gue that CNN-based foundation models can also achieve
comparable or even better performance than ViTs when
equipped with similar operator-/architecture-level designs,
scaling-up parameters, and massive data .
To bridge the gap between CNNs and ViTs, we first
summarize their differences from two aspects: (1) From
the operator level [9, 21, 22], the multi-head self-attention
(MHSA) of ViTs has long-range dependencies and adap-
tive spatial aggregation (see Fig. 1(a)). Benefiting from the
flexible MHSA, ViTs can learn more powerful and robust
representations than CNNs from massive data. (2) From
the architecture view [9, 22, 23], besides MHSA, ViTs con-
tain a series of advanced components that are not included
in standard CNNs, such as Layer Normalization (LN) [24],
feed-forward network (FFN) [1], GELU [25], etc. Although
recent works [21,22] have made meaningful attempts to in-
troduce long-range dependencies into CNNs by using dense
convolutions with very large kernels ( e.g., 31×31) as shown
in Fig. 1 (c), there is still a considerable gap with the state-
of-the-art large-scale ViTs [16, 18–20, 26] in terms of per-
formance and model scale.
In this work, we concentrate on designing a CNN-based
foundation model that can efficiently extend to large-scale
parameters and data. Specifically, we start with a flexible
convolution variant—deformable convolution (DCN) [27,
28]. By combining it with a series of tailored block-
level and architecture-level designs similar to transformers,
we design a brand-new convolutional backbone network,
termed InternImage . As shown in Fig. 1, different from
recently improved CNNs with very large kernels such as
31×31 [22], the core operator of InternImage is a dynamic
sparse convolution with a common window size of 3 ×3, (1)
whose sampling offsets are flexible to dynamically learn ap-
propriate receptive fields (can be long- or short-range) from
given data; (2) the sampling offsets and modulation scalars
are adaptively adjusted according to the input data, which
can achieve adaptive spatial aggregation like ViTs, reduc-
ing the over-inductive bias of regular convolutions; and (3)
the convolution window is a common 3 ×3, avoiding the
optimization problems and expensive costs caused by large
dense kernels [22, 29].
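A minimal sketch of such a 3x3 dynamic sparse (deformable) convolution with input-conditioned offsets and modulation scalars, built on torchvision's deform_conv2d, is shown below; it is a simplified stand-in, not the DCNv3 operator actually used in InternImage.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class SimpleDeformableConv(nn.Module):
    """Simplified 3x3 deformable convolution: offsets and modulation scalars are
    predicted from the input, so the receptive field adapts to the data."""
    def __init__(self, channels, kernel=3):
        super().__init__()
        self.offset = nn.Conv2d(channels, 2 * kernel * kernel, 3, padding=1)  # (dx, dy) per sample
        self.mask = nn.Conv2d(channels, kernel * kernel, 3, padding=1)         # modulation scalars
        self.weight = nn.Parameter(torch.randn(channels, channels, kernel, kernel) * 0.02)

    def forward(self, x):
        offsets = self.offset(x)                       # learned sampling offsets
        masks = torch.sigmoid(self.mask(x))            # input-conditioned modulation in (0, 1)
        return deform_conv2d(x, offsets, self.weight, padding=1, mask=masks)
```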
With the aforementioned designs, the proposed Intern-
Image can efficiently scale to large parameter sizes and
learn stronger representations from large-scale training
data, achieving comparable or even better performance to
large-scale ViTs [2, 11, 19] on a wide range of vision tasks.
In summary, our main contributions are as follows:
(1) We present a new large-scale CNN-based founda-
tion model—InternImage. To our best knowledge, it is the
first CNN that effectively scales to over 1 billion parameters
and 400 million training images and achieves comparable or
even better performance than state-of-the-art ViTs, showing
that convolutional models are also a worth-exploring direc-
tion for large-scale model research.
(2) We successfully scale CNNs to large-scale settings
by introducing long-range dependencies and adaptive spa-
tial aggregation using an improved 3 ×3 DCN operator, and
explore the tailored basic block, stacking rules, and scaling
strategies centered on the operator. These designs make ef-
fective use of the operator, enabling our models to obtain
[Figure 2 plot: COCO box AP (%) vs. number of parameters (B) for InternImage-H, SwinV2, FD-SwinV2-G, BEiT-3, and Florence-CoSwin-H on test-dev, and for InternImage, Swin, and ConvNeXt on val2017.]
Figure 2. Performance comparison on COCO of different
backbones. The proposed InternImage-H achieves a new record
65.4 box AP on COCO test-dev, significantly outperforming state-
of-the-art CNNs and large-scale ViTs.
the gains from large-scale parameters and data.
(3) We evaluate the proposed model on representative
vision tasks including image classification, object detec-
tion, instance and semantic segmentation, and compared it
with state-of-the-art CNNs and large-scale ViTs by scal-
ing the model size ranging from 30 million to 1 billion,
the data ranging from 1 million to 400 million. Specifi-
cally, our model with different parameter sizes can consis-
tently outperform prior arts on ImageNet [30]. InternImage-
B achieves 84.9% top-1 accuracy trained only on the
ImageNet-1K dataset, outperforming CNN-based counter-
parts [21, 22] by at least 1.1 points. With large-scale pa-
rameters ( i.e., 1 billion) and training data ( i.e., 427 million),
the top-1 accuracy of InternImage-H is further boosted to
89.6%, which is close to well-engineering ViTs [2, 19] and
hybrid-ViTs [20]. In addition, on COCO [31], a challeng-
ing downstream benchmark, our best model InternImage-H
achieves state-of-the-art 65.4% box mAP with 2.18 billion
parameters, 2.3 points higher than SwinV2-G [16] (65.4 vs.
63.1) with 27% fewer parameters as shown in Fig. 2.
|
Wang_DSVT_Dynamic_Sparse_Voxel_Transformer_With_Rotated_Sets_CVPR_2023 | Abstract
Designing an efficient yet deployment-friendly 3D back-
bone to handle sparse point clouds is a fundamental problem
in 3D perception. Compared with the customized sparse
convolution, the attention mechanism in Transformers is
more appropriate for flexibly modeling long-range relation-
ships and is easier to be deployed in real-world applications.
However, due to the sparse characteristics of point clouds,
it is non-trivial to apply a standard transformer on sparse
points. In this paper, we present Dynamic Sparse Voxel
Transformer (DSVT), a single-stride window-based voxel
Transformer backbone for outdoor 3D perception. In order
to efficiently process sparse points in parallel, we propose
Dynamic Sparse Window Attention, which partitions a series
of local regions in each window according to its sparsity
and then computes the features of all regions in a fully par-
allel manner. To allow the cross-set connection, we design
a rotated set partitioning strategy that alternates between
two partitioning configurations in consecutive self-attention
layers. To support effective downsampling and better en-
code geometric information, we also propose an attention-
style 3D pooling module on sparse points, which is powerful
and deployment-friendly without utilizing any customized
CUDA operations. Our model achieves state-of-the-art per-
formance with a broad range of 3D perception tasks. More
importantly, DSVT can be easily deployed by TensorRT with
real-time inference speed (27Hz). Code will be available at
https://github.com/Haiyang-W/DSVT .
| 1. Introduction
3D perception is a crucial challenge in computer vision,
garnering increased attention thanks to its potential appli-
cations in various fields such as autonomous driving sys-
[Figure 1 plot: performance (mAPH/L2) vs. scenes per second (Hz) on Waymo, comparing DSVT-Pillar, DSVT-Pillar-TS, and DSVT-Pillar-RT (ours) with CenterPoint-Voxel, CenterPoint-Pillar, SECOND, PointPillars, SST, PV-RCNN, Part-A2, PV-RCNN++, and VoxSet.]
Figure 1. Detection performance (mAPH/L2) vs speed (Hz) of
different methods on Waymo [36] validation set. All the speeds are
evaluated on an NVIDIA A100 GPU with AMD EPYC 7513 CPU.
tems [2, 41] and modern robotics [44, 53]. In this paper, we
propose DSVT, a general-purpose and deployment-friendly
Transformer-only 3D backbone that can be easily applied to
various 3D perception tasks for point clouds processing.
Unlike the well-studied 2D community where the input
image is in a densely packed array, 3D point clouds are
sparsely and irregularly distributed in continuous space due
to the nature of 3D sensors, which makes it challenging to
directly apply techniques used for traditional regular data. To
support sparse feature learning from raw point clouds, pre-
vious methods mainly apply customized sparse operations,
such as PointNet++ [27, 28] and sparse convolution [12, 13].
PointNet based methods [27, 28, 39] use point-wise MLPs
with the ball-query and max-pooling operators to extract
features. Sparse convolution based methods [6, 12, 13] first
convert point clouds to regular grids and handle the sparse
volumes efficiently. Though impressive, they suffer from
either the intensive computation of sampling and group-
ing [28] or the limited representation capacity due to sub-
manifold dilation [13]. More importantly, these specifically-
designed operations generally can not be implemented with
well-optimized deep learning tools ( e.g., PyTorch and Tensor-
Flow) and require writing customized CUDA codes, which
greatly limits their deployment in real-world applications.
Witnessing the success of transformer [40] in the 2D do-
main, numerous attention-based 3D vision methods have
been investigated in point cloud processing. However, be-
cause of the sparse characteristic of 3D points, the number of
non-empty voxels in each local region can vary significantly,
which makes directly applying a standard Transformer non-
trivial. To efficiently process the attention on sparse data,
many approaches rebalance the token number by random
sampling [25,49] or group local regions with similar number
of tokens together [10, 37]. These methods are either insep-
arable from superfluous computations ( e.g., dummy token
padding and non-parallel attention) or noncompetitive per-
formance ( e.g., token random dropping). Alternatively, some
approaches [15, 24] try to solve these problems by writing
customized CUDA operations, which require heavy opti-
mization to be deployed on modern devices. Hence, building
an efficient and deployment-friendly 3D transformer back-
bone is the main challenge we aim to address in this paper.
In this paper, we seek to expand the applicability of Trans-
former such that it can serve as a powerful backbone for out-
door 3D perception, as it does in 2D vision. Our backbone
is efficient yet deployment-friendly without any customized
CUDA operations. To achieve this goal, we present two
major modules, one is the dynamic sparse window attention
to support efficient parallel computation of local windows
with diverse sparsity, and the other one is a novel learnable
3D pooling operation to downsample the feature map and
better encode geometric information.
Specifically, as illustrated in Figure 2, given the sparse
voxelized representations and window partition, we first di-
vide each window’s sparse voxels into some non-overlapped
subsets, where each subset is guaranteed to have the same
number of voxels for parallel computation. The partitioning
configuration of these subsets will be changed in consecu-
tive self-attention layers based on the rotating partition axis
between the X-Axis and Y-Axis. It bridges the subsets of
preceding layers for intra-window fusion, providing connec-
tions among them that significantly enhance modeling power
(see Table 6). To better process the inter-window fusion
and encode multi-scale information, we propose the hybrid
window partition, which alternates its window shape in suc-
cessive transformer blocks. It leads to a drop in computation
cost with even better performance (see Table 7). With the
above designs, we process all regions in a fully parallel man-
ner by calculating them in the same batch. This strategy is
efficient in regards to real-world latency: i) all the windows
are processed in parallel, which is nearly independent of the
voxel distribution, ii) using self-attention without key dupli-
cation, which facilitates memory access in hardware. Our
experiments show that the dynamic sparse window attention
approach has much lower latency than previous bucketing-
based strategies [10, 37] or vanilla window attention [20],
[Figure 2 diagram: voxels in sets 1-4 within a local window, shown under the X-axis and Y-axis partitions.]
Figure 2. A demonstration of dynamic sparse window attention
in our DSVT block. In the X-Axis DSVT layer, the sparse voxels
will be split into a series of window-bounded and size-equivalent
subsets in X-Axis main order, and self-attention is computed within
each set. In the next layer, the set partition is switched to Y-Axis,
providing connections among the previous sets.
yet is similar in modeling power (see Table 5).
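A minimal sketch of the rotated, size-equalized set partition is given below; the axis-major ordering and the padding-by-repetition are illustrative choices, not necessarily the exact scheme used in DSVT.

```python
import torch

def partition_sets(coords, set_size, axis=0):
    """Split the non-empty voxels of one window into equal-size sets for batched
    self-attention. Voxels are ordered along the chosen axis, so alternating the
    axis across consecutive layers connects different sets (rotated partition).

    coords: (N, 2) integer (x, y) coordinates of the non-empty voxels in the window.
    Returns an index tensor of shape (num_sets, set_size) into the window's voxels.
    """
    order = torch.argsort(coords[:, axis] * 10_000 + coords[:, 1 - axis])  # axis-major order
    num_sets = (coords.shape[0] + set_size - 1) // set_size
    # Pad by repeating the last voxel index so every set has exactly set_size members.
    pad = order[-1:].repeat(num_sets * set_size - order.shape[0])
    return torch.cat([order, pad]).view(num_sets, set_size)

# Layer l uses axis=0 (X-major sets) and layer l+1 uses axis=1 (Y-major sets), so
# voxels grouped apart in one layer attend to each other in the next.
```

Because every set holds the same number of voxels, self-attention over all sets can be run as one dense batched call, which is what makes the latency nearly independent of the voxel distribution.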
Secondly, we present a powerful yet deployment-friendly
3D sparse pooling operation to efficiently process downsam-
pling and encode better geometric representation. To tackle
the sparse characteristic of 3D point clouds, previous meth-
ods adopt some custom operations, e.g., customized scatter
function [11] or strided sparse convolution to generate down-
sampled feature volumes [46, 50]. The requirement of heavy
optimization for efficient deployment limits their real-world
applications. More importantly, we empirically find that
inserting some linear or max-pooling layers between our
transformer blocks also harms the network convergence and
the encoding of geometric information. To address the above
limitations, we first convert the sparse downsampling region
into dense and process an attention-style 3D pooling oper-
ation to automatically aggregate the local spatial features.
Our 3D pooling module is powerful and deployment-friendly
without any self-designed CUDA operations, and the perfor-
mance gains (see Table 8) demonstrate its effectiveness.
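To illustrate the flavour of such an attention-style pooling without custom CUDA kernels, here is a toy sketch in which a learned query aggregates the voxel features of each downsampling region; it assumes the regions have already been gathered into a dense, zero-padded layout and is not the paper's exact module.

```python
import torch
import torch.nn as nn

class AttentionPool3D(nn.Module):
    """Toy attention-style pooling: features of the voxels inside a downsampling
    region are aggregated by a learned query instead of max or average pooling."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, region_feats, pad_mask):
        # region_feats: (B, R, C) voxel features per pooling region (zero-padded),
        # pad_mask:     (B, R) True where a slot is padding; each region is assumed
        #               to contain at least one non-empty voxel.
        q = self.query.expand(region_feats.shape[0], -1, -1)
        pooled, _ = self.attn(q, region_feats, region_feats, key_padding_mask=pad_mask)
        return pooled.squeeze(1)        # one aggregated feature per region
```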
In a nutshell, our contributions are four-fold: 1) We pro-
pose Dynamic Sparse Window Attention, a novel window-
based attention strategy for efficiently handling sparse 3D
voxels in parallel. 2) We present a learnable 3D pooling
operation, which can effectively downsample sparse voxels
and encode geometric information better. 3) Based on the
above key designs, we introduce an efficient yet deployment-
friendly transformer 3D backbone without any customized
CUDA operations. It can be easily accelerated by NVIDIA
TensorRT to achieve real-time inference speed ( 27Hz ), as
shown in Figure 1. 4) Our approach outperforms previous
state-of-the-art methods on the large-scale Waymo [35] and
nuScenes [3] datasets across various 3D perception tasks.
|
Wang_Semantic_Scene_Completion_With_Cleaner_Self_CVPR_2023 | Abstract
Semantic Scene Completion (SSC) transforms an image
of single-view depth and/or RGB 2D pixels into 3D vox-
els, each of whose semantic labels are predicted. SSC is a
well-known ill-posed problem as the prediction model has
to “imagine” what is behind the visible surface, which is
usually represented by Truncated Signed Distance Func-
tion (TSDF). Due to the sensory imperfection of the depth
camera, most existing methods based on the noisy TSDF
estimated from depth values suffer from 1) incomplete vol-
umetric predictions and 2) confused semantic labels. To
this end, we use the ground-truth 3D voxels to generate a
perfect visible surface, called TSDF-CAD, and then train
a “cleaner” SSC model. As the model is noise-free, it
is expected to focus more on the “imagination” of un-
seen voxels. Then, we propose to distill the intermediate
“cleaner” knowledge into another model with noisy TSDF
input. In particular, we use the 3D occupancy feature and
the semantic relations of the “cleaner self” to supervise
the counterparts of the “noisy self” to respectively address
the above two incorrect predictions. Experimental results
validate that our method improves the noisy counterparts
with3.1%IoU and 2.2%mIoU for measuring scene com-
pletion and SSC, and also achieves new state-of-the-art ac-
curacy on the popular NYU dataset. The code is available
at https://github.com/fereenwong/CleanerS.
| 1. Introduction
3D scene understanding is an important visual task for
many practical applications, e.g., robotic navigation [16]
and augmented reality [55], where the scene geometry and
semantics are two key factors to the agent interaction with
the real world [24, 64]. However, visual sensors can only
perceive a partial world given their limited field of view
with sensory noises [49]. Therefore, an agent is expected to
leverage prior knowledge to estimate the complete geome-
leverage prior knowledge to estimate the complete geome-
Scene Completion (SSC) is designed for such an ability to
infer complete volumetric occupancy and semantic labels
for a scene from a single depth and/or RGB image [49, 52].
Based on an input 2D image, the 2D →3D projection is
a vital bond for mapping 2D perception to the correspond-
ing 3D spatial positions, which is determined by the depth
value [6]. After this, the model recovers the visible surface
in 3D space, which sheds light on completing and labeling
the occluded regions [31, 52], because the geometry of the
visible and occluded areas is tightly intertwined. For exam-
ple, you can easily infer the shapes and the semantic labels
when you see a part of a “chair” or “bed”. Thus, a high-
quality visible surface is crucial for the SSC task.
However, due to the inherent imperfection of the depth
camera, the depth information is quite noisy, what follows
is an imperfect visible surface that is usually represented by
Truncated Signed Distance Function (TSDF) [52]. In gen-
eral, the existing depth noises can be roughly categorized
into the following two basic types:
1) Zero Noise . This type of noise happens when a depth
sensor cannot confirm the depth value of some local regions,
it will fill these regions with zeroes [14,43]. Zero noise gen-
erally occurs on object surfaces with reflection or uneven-
ness [41]. Based on zero noise, the visible surface will be
incomplete after the 2D-3D projection via TSDF [49], so
the incomplete volumetric prediction problem may occur in
the final 3D voxels. For example, as shown in the upper-half
of Figure 1, for the input RGB “kitchen” image, the depth
value of some parts of the “cupboard” surface (marked with
the red dotted frames) (in (b)) is set to zero due to reflec-
tions. Based on this, both the visible surface (in (d)) and the
predicted 3D voxels (in (f)) appear incomplete in reflective
regions of this “cupboard”. Our method uses the perfect vis-
ible surface (in (e)) generated by the noise-free ground-truth
depth value (in (c)) as intermediate supervision in training ,
which helps the model to estimate “cupboard” 3D voxels in
inference even with the noisy depth value as input.
2) Delta Noise . This type of noise refers to the inevitable
deviation of the obtained depth value due to the inherent
[Figure 1 panels: (1) zero noise leads to incomplete volumetric predictions; (2) delta noise leads to confused semantic labels. Each block shows (a) RGB, (b) noisy depth, (c) Depth-CAD, (d) noisy TSDF, (e) TSDF-CAD, (f) prediction with (a, d), and (g) our prediction; class labels such as table, chair, and furniture are marked correct or wrong.]
Figure 1. The existing depth noises can be roughly categorized into: 1) zero noise and 2) delta noise. By zero noise, we mean that
when the depth camera cannot confirm the depth value of some local regions, it fills these regions with zeroes, leading to the problem
of incomplete volumetric predictions. By delta noise , we mean the inevitable deviation ( i.e.,∆d) of the obtained depth value due to the
inherent quality defects of the depth camera, which leads to the problem of confused semantic labels in the final 3D voxels. In the above
blocks, the pairwise subfigures (e.g., (d) and (e)) show the cases of “with noise” and “without noise” on the left and right, respectively.
quality defects of the depth camera [41], i.e., the obtained
depth value does not match the true depth value. Delta noise
shifts the 3D position of the visible surface, resulting in the
wrong semantic labels, such that the final 3D voxels will
suffer from the problem of confusing semantic labels [52].
A real delta noise case is shown in the bottom half part of
Figure 1. For the input RGB “classroom” image, the depth
camera mistakenly estimates the depth value of the “table”
as the depth value of “furniture” (in (b)). Therefore, the vis-
ible surface represented by TSDF shifts from the class of
“table” (marked with blue points) to the class of “furniture”
(marked with orange points in (d)). Based on this, the final
estimated 3D voxels (in (f)) also mistakenly estimate the
part of the “table” as the “furniture”. In comparison, when
our SSC model is trained on the visible surface in (e), which
is generated by the correct depth value in (c), as the interme-
diate supervision, semantic labels for both the “table” and
the “furniture” can be estimated correctly in (g).
In practice, these two types of noise are randomly mixed
together to form a more complex noise [14, 65]. To handle
these two noise types, although some recent SSC attempts
have been made by rendering the noise-free depth value
from 3D voxel ground-truth [12, 51], they are not of practi-cal use as the 3D voxels ground-truth is still needed in infer-
ence. However, they indeed validate the potential that more
accurate recognition performance can be achieved using the
noise-free depth value [4, 56, 66]. To the best of our knowl-
edge, no prior work focuses on mitigating the noisy depth
values in SSC without the use of ground-truth depth val-
ues in inference. Therefore, the crux is to transfer the clean
knowledge learned from ground-truth depth into the noisy-
depth pipeline only during training. So, in inference, we can
directly use this pipeline without the need for ground-truth.
In this paper, we propose a Cleaner Self (CleanerS)
framework to shield the harmful effects of the two depth
noises for SSC. CleanerS consists of two networks that
share the same network architecture (that is what “self”
means). The only difference between these two networks is
that the depth value of the teacher network is rendered from
ground-truth, while the depth value of the student network
is inherently noisy. Therefore, the depth value of the teacher
network is cleaner than the depth value of the student net-
work. In the training stage, we make the teacher network
provide intermediate supervision for the learning of the stu-
dent network via knowledge distillation (KD), such that the
student network can disentangle the clean visible surface
reconstruction and occluded region completion. To pre-
serve both the detailed information and the abstract seman-
tics of the teacher network, we adopt both feature-based
and logit-based KD strategies. In inference, only the stu-
dent network is used. Compared to the noisy self, as shown
in Figure 1, CleanerS achieves more accurate performance
with the help of ground-truth depth values in training but
not in testing.
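For illustration only (this is not the authors' implementation), a minimal PyTorch-style sketch of such a teacher-student objective is given below; the loss weights, the single distilled feature map, and the per-voxel KL term are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def cleaner_self_kd_loss(student_feat, teacher_feat,
                         student_logits, teacher_logits,
                         voxel_labels, w_feat=1.0, w_logit=1.0):
    """Task loss plus feature- and logit-based distillation.

    The student is fed noisy depth; the teacher (same architecture) is fed
    depth rendered from ground truth and is detached so only the student
    receives gradients.  Logits: (B, C, D, H, W), labels: (B, D, H, W).
    """
    task_loss = F.cross_entropy(student_logits, voxel_labels, ignore_index=255)

    # Feature-based KD: align an intermediate student feature map with the
    # cleaner teacher feature map.
    feat_kd = F.mse_loss(student_feat, teacher_feat.detach())

    # Logit-based KD: match per-voxel class distributions.
    logit_kd = F.kl_div(F.log_softmax(student_logits, dim=1),
                        F.softmax(teacher_logits.detach(), dim=1),
                        reduction="batchmean")
    return task_loss + w_feat * feat_kd + w_logit * logit_kd
```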
The main contributions of this work are summarized as
the following two aspects: 1) we propose a novel Clean-
erS framework for SSC, which can mitigate the negative
effects of the noisy depth value in training; 2) CleanerS
achieves the new state-of-the-art results on the challenging
NYU dataset with the input of noisy depth values.
|
Xu_Bias-Eliminating_Augmentation_Learning_for_Debiased_Federated_Learning_CVPR_2023 | Abstract
Learning models trained on biased datasets tend to ob-
serve correlations between categorical and undesirable fea-
tures, which result in degraded performances. Most exist-
ing debiased learning models are designed for centralized
machine learning, which cannot be directly applied to dis-
tributed settings like federated learning (FL), which col-
lects data at distinct clients with privacy preserved. To
tackle the challenging task of debiased federated learn-
ing, we present a novel FL framework of Bias-Eliminating
Augmentation Learning ( FedBEAL ), which learns to de-
ploy Bias-Eliminating Augmenters ( BEA ) for producing
client-specific bias-conflicting samples at each client. Since
the bias types or attributes are not known in advance, a
unique learning strategy is presented to jointly train BEA
with the proposed FL framework. Extensive image clas-
sification experiments on datasets with various bias types
confirm the effectiveness and applicability of our FedBEAL,
which performs favorably against state-of-the-art debiasing
and FL methods for debiased FL.
| 1. Introduction
Deep neural networks have shown promising progress
across different domains such as computer vision [14]
and natural language processing [8]. Their successes
are typically based on the collection of and training on
data that properly describe the inherent distribution of the
data of interest. However, in real-world scenarios, biased
data [24] are often observed during data collection. Biased
datasets [10, 22, 42] contain features that are highly cor-
related to class labels in the training dataset but not suffi-
ciently describing the inherent semantic meaning. Training
on such biased data thus results in degraded model general-
ization capability. Take Fig. 1 for example; when address-
ing the cat-dog classification task, training images collected
by users might contain only orange cats and black dogs.
Their color attributes are strongly correlated with the image
labels during training, but such attributes are not necessar-
ily relevant to the classification task during inference. As
Figure 1. Example of local data bias in FL. When deploying
FL to train a cat-dog classifier with image datasets collected by
multiple pet owners, most of the local images are obtained with
their pets with specific colors. Therefore, the models trained with
each local dataset are likely to establish decision rules on biased
attributes ( e.g., fur color), which prevents the aggregated model
from learning proper representation for classification.
pointed out in [10, 42], deep neural networks trained with
such biased data are more likely to make decisions based
on bias attributes instead of semantic attributes. As a re-
sult, during inference, performances of the learned models
would dramatically drop when observing bias-conflicting
samples ( i.e., data containing semantic and bias attributes
that are rarely correlated in the training set).
To tackle the data bias problem, several works have been
proposed to remove or alleviate data bias when training
deep learning models [6, 11, 18, 24, 27, 32, 36, 40]. For ex-
ample, Nam et al. [36] train an intentionally biased auxil-
iary model while enforcing the main model to go against
the prejudice of the biased network. Lee et al. [27] utilize
the aforementioned biased model to synthesize diverse bias-
conflicting hidden features for learning debiased represen-
tations. Nevertheless, the above techniques are designed for
centralized datasets. When performing distributed training
of learning models, such methods might fail to generalize.
For distributed learning, federated learning (FL) [35]
particularly considers data collection and training con-
ducted at each client, with data privacy needing to be pre-
served. When considering privately distributed datasets,
real-world FL applications are more likely to suffer data
heterogeneity issues [20, 28, 51], i.e., data collected by
different clients are not independent and identically dis-
tributed (IID). Recently, several works [19,21,29–31,34,47]
propose to alleviate performance degradation caused by
data heterogeneity. However, existing methods typically
consider data heterogeneity in terms of label distribution
skew [21, 29, 30, 34, 47] or domain discrepancy [19, 31]
among clients. These FL methods are not designed to tackle
potential data bias across different clients, leaving the debi-
ased FL a challenging task to tackle.
To mitigate the local bias in federated learning, we pro-
pose a novel FL scheme of Bias-Eliminating Augmentation
Learning ( FedBEAL ). In FedBEAL, we learn a Bias-
Eliminating Augmenter (BEA) for each client, with the goal
of producing bias-conflicting samples. To identify and in-
troduce the desirable semantic and bias attributes to the aug-
mented samples, our FedBEAL uniquely adopts the global
server model and each client model trained across iterations
without prior knowledge of bias type or annotation. With
the introduced augmenter and the produced bias-conflicting
samples, debiased local updates can be performed at each
client, followed by simple aggregation of such models for
deriving the server model.
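To make the overall flow concrete, the sketch below outlines one communication round in PyTorch-like code; the FedAvg-style unweighted averaging, the SGD settings, and the `make_augmenter` interface are illustrative assumptions rather than the actual FedBEAL training recipe.

```python
import copy
import torch
import torch.nn.functional as F

def fedbeal_round(global_model, client_loaders, make_augmenter,
                  local_epochs=1, lr=0.01):
    """One round: each client trains on real + bias-conflicting samples
    produced by its augmenter, then the server averages the local models."""
    local_states = []
    for loader in client_loaders:
        local_model = copy.deepcopy(global_model)
        # The augmenter is built from the global and local models; its
        # internals (and its own training) are abstracted away here.
        augmenter = make_augmenter(global_model, local_model)
        opt = torch.optim.SGD(local_model.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                x_cf, y_cf = augmenter(x, y)          # bias-conflicting samples
                inputs = torch.cat([x, x_cf], dim=0)
                targets = torch.cat([y, y_cf], dim=0)
                loss = F.cross_entropy(local_model(inputs), targets)
                opt.zero_grad(); loss.backward(); opt.step()
        local_states.append(local_model.state_dict())

    # Simple unweighted averaging of the client models.
    ref = local_states[0]
    avg = {k: torch.stack([s[k].float() for s in local_states]).mean(0).to(ref[k].dtype)
           for k in ref}
    global_model.load_state_dict(avg)
    return global_model
```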
We now summarize the contributions of this work below:
• To the best of our knowledge, we are among the first
to tackle the problem of debiased federated learning, in
which local yet distinct biases exist at the client level.
• We present FedBEAL for debiased FL, which intro-
duces Bias-Eliminating Augmenters (BEA) at each
client with the goal of generating bias-conflicting sam-
ples to eliminate local data biases.
• Learning of BEA can be realized by utilizing the global
server and local client models trained across iterations,
which allows us to identify and embed desirable se-
mantic and bias features for augmentation purposes.
|
Weder_Removing_Objects_From_Neural_Radiance_Fields_CVPR_2023 | Abstract
Neural Radiance Fields (NeRFs) are emerging as a ubiq-
uitous scene representation that allows for novel view syn-
thesis. Increasingly, NeRFs will be shareable with other
people. Before sharing a NeRF , though, it might be desir-
able to remove personal information or unsightly objects.
Such removal is not easily achieved with the current NeRF
editing frameworks. We propose a framework to remove
objects from a NeRF representation created from an RGB-
D sequence. Our NeRF inpainting method leverages re-
cent work in 2D image inpainting and is guided by a user-
provided mask. Our algorithm is underpinned by a confi-
dence based view selection procedure. It chooses which of
the individual 2D inpainted images to use in the creation of
the NeRF , so that the resulting inpainted NeRF is 3D consis-
tent. We show that our method for NeRF editing is effective
for synthesizing plausible inpaintings in a multi-view co-
herent manner, outperforming competing methods. We vali-
date our approach by proposing a new and still-challenging
dataset for the task of NeRF inpainting.
| 1. Introduction
Since the initial publication of Neural Radiance Fields
(NeRFs) [42], there has been an explosion of extensions
to the original framework, e.g., [3, 4, 8, 12, 25, 35, 39, 42].
NeRFs are being used beyond the initial task of novel view
synthesis. It is already appealing to get them into the hands
of non-expert users for novel applications, e.g., for NeRF
editing [80] or live capture and training [47], and these more
casual use cases are driving interesting new technical issues.
One of those issues is how to seamlessly remove parts
of the rendered scene. Removing parts of the scene can
be desirable for a variety of reasons. For example, a
house scan being shared on a property selling website may
need unappealing or personally identifiable objects to be re-
moved [68]. Similarly, objects could be removed so they
can be replaced in an augmented reality application, e.g.,
removing a chair from a scan to see how a new chair fits
Figure 1. Removal of unsightly objects. Our method allows for
objects to be plausibly removed from NeRF reconstructions, in-
painting missing regions whilst preserving multi-view coherence.
(Panels show the input images, novel views rendered by a NeRF,
and novel views rendered by our method with the object removed.)
in the environment [51]. Removing objects might also be
desirable when a NeRF is part of a traditional computer vi-
sion pipeline, e.g., removing parked cars from scans that are
going to be used for relocalization [44].
Some editing of NeRFs has already been explored. For
example, object-centric representations disentangle labeled
objects from the background, which allows editing of the
trained scene with user-guided transformations [74, 77],
while semantic decomposition allows selective editing and
transparency for certain semantic parts of the scene [26].
However, these previous approaches only augment informa-
tion from the input scan, limiting their generative capabil-
ities, i.e., the hallucination of elements that have not been
observed from any view.
With this work, we tackle the problem of removing ob-
jects from scenes, while realistically filling the resulting
holes, as shown in Fig. 1. Solving this problem requires: a)
exploiting multi-view information when parts of the scene
are observed in some frames but occluded in others and, b)
leveraging a generative process to fill areas that are never
observed. To this end, we pair the multi-view consistency
of NeRFs with the generative power of 2D inpainting mod-
els [69] that are trained on large scale 2D image datasets.
Such 2D inpaintings are not multi-view consistent by con-
struction, and may contain severe artefacts. Using these in-
paintings directly causes corrupted reconstructions, so we
design a new confidence-based view-selection scheme that
iteratively removes inconsistent inpaintings from the opti-
mization. We validate our approach on a new dataset and
show that we outperform existing approaches for novel view
synthesis on standard metrics of image quality, as well as
producing multi-view consistent results.
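The iterative selection idea can be summarized in a few lines; the concrete error measure, the re-training schedule, and the keep ratio below are placeholders and do not reproduce the paper's actual procedure.

```python
import numpy as np

def select_consistent_views(inpainted_views, train_nerf, per_view_error,
                            num_rounds=3, keep_ratio=0.8):
    """Iteratively drop 2D-inpainted views that disagree with the rest.

    train_nerf(views) fits a NeRF on the given views; per_view_error(model,
    views) returns a photometric error per view inside the inpainting mask.
    Both callables are assumed to be provided by the surrounding pipeline.
    """
    active = list(range(len(inpainted_views)))
    model = None
    for _ in range(num_rounds):
        current = [inpainted_views[i] for i in active]
        model = train_nerf(current)
        errors = np.asarray(per_view_error(model, current))
        # Keep the views whose inpainted regions agree best with the
        # current multi-view reconstruction.
        keep = np.argsort(errors)[: max(1, int(keep_ratio * len(active)))]
        active = [active[i] for i in keep]
    return model, active
```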
In summary, we make the following contributions:
1) We propose the first approach focusing on inpainting
NeRFs by leveraging the power of single image inpainting.
2) We introduce a novel view-selection mechanism that au-
tomatically removes inconsistent views from the optimiza-
tion. 3) We present a new dataset for evaluating object re-
moval and inpainting in indoor and outdoor scenes.
|
Wang_A_Practical_Upper_Bound_for_the_Worst-Case_Attribution_Deviations_CVPR_2023 | Abstract
Model attribution is a critical component of deep neural
networks (DNNs) for its interpretability to complex models.
Recent studies bring up attention to the security of attribu-
tion methods as they are vulnerable to attribution attacks
that generate similar images with dramatically different at-
tributions. Existing works have been investigating empir-
ically improving the robustness of DNNs against those at-
tacks; however, none of them explicitly quantifies the actual
deviations of attributions. In this work, for the first time,
a constrained optimization problem is formulated to derive
an upper bound that measures the largest dissimilarity of
attributions after the samples are perturbed by any noises
within a certain region while the classification results re-
main the same. Based on the formulation, different prac-
tical approaches are introduced to bound the attributions
above using Euclidean distance and cosine similarity un-
der both ℓ2andℓ∞-norm perturbations constraints. The
bounds developed by our theoretical study are validated on
various datasets and two different types of attacks (PGD at-
tack and IFIA attribution attack). Over 10 million attacks
in the experiments indicate that the proposed upper bounds
effectively quantify the robustness of models based on the
worst-case attribution dissimilarities.
| 1. Introduction
Attribution methods play an important role in deep learn-
ing applications as one of the subareas of explainable AI.
Practitioners use attribution methods to measure the rela-
tive importance among different features and to understand
the impacts of features contributing to the model outputs.
They have been widely used in a number of critical real-
world applications, such as risk management [2], medical
imaging [24, 29] and drug discovery [13]. In particular,
attributions are supposed to be secure and resistant to ex-
ternal manipulation such that proper explanations can be
applied to safety-sensitive applications. Regulations are also deployed in countries to enforce the interpretability
of deep learning models for a ‘right to explain’ [10]. Al-
though attribution methods have been extensively studied
[18,25,28,31,36,39], recent works reveal that they are vul-
nerable to visually imperceptible perturbations that drasti-
cally alter the attributions and keep the model outputs un-
changed [6, 8].
Prior works [3, 4, 12, 23, 30, 32, 33] investigate the attri-
bution robustness based on empirical and statistical estima-
tions over the entire dataset. However, current attribution ro-
bustness works are unable to evaluate how robust the model
is given any arbitrary test point, perturbed or unperturbed.
In this paper, we study the problem of finding the worst
attribution perturbation within certain predefined regions.
Specifically, given a trained model and an image sample,
we propose theoretical upper bounds of the attribution de-
viations from the unperturbed ones. As far as we know, this
is the first attempt to provide an upper bound of attribution
differences.
In this paper, the general upper bound for attribution de-
viation is first quantified as the maximum changes of attri-
butions after the samples are perturbed while classification
results remain the same. Two cases are analyzed, including
with and without label constraint, which refers to the classi-
fication labels being unchanged and changed, respectively,
after the original samples are attacked. For each case, two
mostly used perturbation constraints, ℓ2andℓ∞-norm, are
considered to compute the upper bound. For ℓ2-norm con-
straint, our approach is based on the first-order Taylor series
of model attribution, and a tight upper bound ignoring the
label constraint is computed from the singular value of the
attribution gradient. ℓ∞-norm constraint is more compli-
cated because the upper bound is a solution of a concave
quadratic programming with box constraints, which is an
NP-hard problem. Thus, two relaxation approaches are pro-
posed. Moreover, a more restricted bound constrained on
the unchanged label is also studied. In this study, Euclidean
distance and cosine distance, which are also employed in
the previous empirical studies [4, 30, 32], are used as dis-
similarity functions to measure attribution difference.
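As a rough illustration of the ℓ2 case only (the paper's ℓ∞ relaxations and the label-constrained bound are not reproduced), the first-order bound can be computed from the largest singular value of the attribution Jacobian; the dense Jacobian below is feasible only for small inputs and is an assumption of this sketch.

```python
import torch

def l2_attribution_bound(attribution_fn, x, eps):
    """First-order bound on attribution deviation under ||d||_2 <= eps,
    ignoring the label constraint: for the linearized attribution g,
    ||g(x + d) - g(x)||_2 <= sigma_max(J) * ||d||_2, where J is the
    Jacobian of g at x, so eps * sigma_max(J) bounds the worst-case
    Euclidean change of the attribution.
    """
    flat_attr = lambda z: attribution_fn(z).flatten()
    jac = torch.autograd.functional.jacobian(flat_attr, x)   # (m, *x.shape)
    jac = jac.reshape(jac.shape[0], -1)
    sigma_max = torch.linalg.svdvals(jac)[0]                 # sorted descending
    return eps * sigma_max
```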
|
Wei_Text-Guided_Unsupervised_Latent_Transformation_for_Multi-Attribute_Image_Manipulation_CVPR_2023 | Abstract
Great progress has been made in StyleGAN-based im-
age editing. To associate with preset attributes, most ex-
isting approaches focus on supervised learning for seman-
tically meaningful latent space traversal directions, and
each manipulation step is typically determined for an in-
dividual attribute. To address this limitation, we propose a
Text-guided Unsupervised StyleGAN Latent Transformation
(TUSLT) model, which adaptively infers a single transfor-
mation step in the latent space of StyleGAN to simultane-
ously manipulate multiple attributes on a given input image.
Specifically, we adopt a two-stage architecture for a latent
mapping network to break down the transformation process
into two manageable steps. Our network first learns a di-
verse set of semantic directions tailored to an input image,
and later nonlinearly fuses the ones associated with the tar-
get attributes to infer a residual vector. The resulting tightly
interlinked two-stage architecture delivers the flexibility to
handle diverse attribute combinations. By leveraging the
cross-modal text-image representation of CLIP , we can per-
form pseudo annotations based on the semantic similarity
between preset attribute text descriptions and training im-
ages, and further jointly train an auxiliary attribute clas-
sifier with the latent mapping network to provide semantic
guidance. We perform extensive experiments to demonstrate
that the adopted strategies contribute to the superior perfor-
mance of TUSLT.
| 1. Introduction
Visual attributes represent semantically meaningful fea-
tures inherent in images, and attribute manipulation has ex-
Figure 1. Visually comparing TUSLT with StyleFlow (supervised)
and StyleCLIP (text-driven) in precisely manipulating multiple at-
tributes and preserving irrelevant attributes.
perienced great improvements, due to the advent of Gen-
erative Adversarial Network [13] (GAN)-based generative
models, e.g. StyleGAN [21, 22] and StarGAN [7, 8]. Re-
cent works [15,37,43] have discovered that the latent space
of StyleGAN possesses semantic disentanglement proper-
ties, enabling a variety of image editing operations via la-
tent transformations.
StyleGAN-based methods for image attribute manipula-
tion typically involve a large number of manual annotations
or well-trained attribute classifiers. Furthermore, the dis-
covered semantic latent directions are associated with indi-
vidual attributes. The editing on a target attribute is car-
ried out by moving the latent code of an input image along
one of the directions. For K target attributes, these models
require K transformation steps to handle the translation.
As a result, they are not scalable to the increasing number
of target attributes in multi-attribute transformation tasks.
As shown in Figure 1, we test a state-of-the-art supervised
Figure 2. Overview of the proposed model, TUSLT, consisting of two learnable components: an auxiliary
attribute classifier A trained on the CLIP-based labeled data, and a latent mapping network. The mapping
network infers one latent direction per preset attribute and transforms the target-related directions, as
indicated by the mask M, into a residual vector to which the initial latent code is added. Precise
multi-attribute transfer is allowed by such a single transformation step, and the generator G synthesizes
a new image reflecting the target attributes under the guidance of A and the CLIP encoders.
model, StyleFlow [2], and find that multiple transformation
steps lead to undesired deviation from the input image on
irrelevant attributes. Compared to the state-of-the-art text-
driven model, StyleCLIP [33], we can also achieve a better
manipulation result by seeking a single latent transforma-
tion step for the task.
More specifically, we propose a Text-guided Unsuper-
vised StyleGAN Latent Transformation (TUSLT) model
that supports simultaneous manipulation on multiple at-
tributes. As shown in Figure 2, the key is to jointly learn
a mapping network to infer the latent transformation and an
auxiliary attribute classifier to assess manipulation quality.
We employ the Contrastive Language-Image Pre-training
(CLIP) model [34] to generate pseudo-labeled data by mea-
suring the semantic similarities between attribute text de-
scriptions and training images. Compared to CLIP, the
jointly trained classifier extracts domain-specific informa-
tion to better characterize the differences among attributes.
This benefits the mapping network to seek more suitable
transformations, such that the synthesized images reflec-
t target attributes. Further, we adopt a two-stage architec-
ture for the mapping network: the earlier stage employs a
prediction subnetwork to infer a set of semantic direction-
s, and the latter stage operates on the resulting directions
and nonlinearly fuses the target-related ones. The inter-
mediate semantic directions are associated with preset at-
tributes and tailored for the input image. This design allows
us to deal with a wide range of attribute combinations in
a single transformation step. We perform extensive experi-
ments and provide both qualitative and quantitative results
in diverse multi-attribute transformation tasks, showing the superiority of our model over the competing methods.
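A bare-bones version of such a two-stage mapper is sketched below; the MLP layer sizes, the binary target mask, and the simple summation before fusion are illustrative guesses, not the actual TUSLT architecture.

```python
import torch
import torch.nn as nn

class TwoStageLatentMapper(nn.Module):
    """Stage 1 predicts one latent direction per preset attribute for the
    given latent code; stage 2 nonlinearly fuses the target-related
    directions into a single residual added to the code."""
    def __init__(self, latent_dim=512, num_attrs=20, hidden=512):
        super().__init__()
        self.direction_net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, num_attrs * latent_dim),
        )
        self.fuse_net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, latent_dim),
        )
        self.num_attrs = num_attrs
        self.latent_dim = latent_dim

    def forward(self, w, target_mask):
        # w: (B, latent_dim) latent codes; target_mask: (B, num_attrs) in {0, 1}.
        dirs = self.direction_net(w).view(-1, self.num_attrs, self.latent_dim)
        selected = (dirs * target_mask.unsqueeze(-1)).sum(dim=1)
        residual = self.fuse_net(selected)
        return w + residual   # one transformation step for all target attributes
```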
In summary, the main contributions of this work are giv-
en as follows: (a) The existing image editing methods focus
on discovering semantic latent directions associated with
individual visual attributes, and a sequential manipulation
process is thus needed for multi-attribute manipulation. In
contrast, the proposed model infers a single step of latent s-
pace walk to simultaneously manipulate multiple attributes.
(b) Benefiting from the cross-modal text-image representa-
tion of CLIP, we jointly train a latent mapping network with
an auxiliary attribute classifier, which leads to more precise
attribute rendering without requiring additional manual an-
notations. (c) Due to the two-stage nature, our latent map-
ping network breaks down the challenging multi-attribute
manipulation task into sub-tasks: inferring diverse seman-
tic directions and integrating the target-related ones into a
single transformation vector. This design gives our model
interpretability and flexibility in dealing with a variety of
attribute combinations.
|
Wang_MoLo_Motion-Augmented_Long-Short_Contrastive_Learning_for_Few-Shot_Action_Recognition_CVPR_2023 | Abstract
Current state-of-the-art approaches for few-shot action
recognition achieve promising performance by conducting
frame-level matching on learned visual features. However,
they generally suffer from two limitations: i) the matching
procedure between local frames tends to be inaccurate due
to the lack of guidance to force long-range temporal percep-
tion; ii) explicit motion learning is usually ignored, leading
to partial information loss. To address these issues, we de-
velop a Motion-augmented Long-short Contrastive Learn-
ing (MoLo) method that contains two crucial components,
including a long-short contrastive objective and a motion
autodecoder. Specifically, the long-short contrastive objec-
tive is to endow local frame features with long-form tem-
poral awareness by maximizing their agreement with the
global token of videos belonging to the same class. The
motion autodecoder is a lightweight architecture to recon-
struct pixel motions from the differential features, which ex-
plicitly embeds the network with motion dynamics. By this
means, MoLo can simultaneously learn long-range tempo-
ral context and motion cues for comprehensive few-shot
matching. To demonstrate the effectiveness, we evaluate
MoLo on five standard benchmarks, and the results show
that MoLo favorably outperforms recent advanced meth-
ods. The source code is available at https://github.
com/alibaba-mmai-research/MoLo .
| 1. Introduction
Recently, action recognition has achieved remarkable
progress and shown broad prospects in many application
fields [1, 5, 8, 37, 67]. Despite this, these successes rely
heavily on large amounts of manual data annotation, which
greatly limits the scalability to unseen categories due to the
∗Intern at Alibaba DAMO Academy. †Corresponding authors.
(a) Failure case one: support video "Picking something up"; the query video is misclassified as
"Picking something up" although its real label is "Removing something, revealing something behind".
(b) Failure case two: support video "Pushing something from right to left"; the query video is
misclassified as "Pushing something from right to left" although its real label is "Tipping something over".
Figure 1. Illustration of our motivation. We show that most
existing metric-based local frame matching methods, such as
OTAM [4], can be easily perturbed by some similar co-existing
video frames due to the lack of forced global context awareness
during the support-query temporal alignment process. Example
videos come from the commonly used SSv2 dataset [15].
high cost of acquiring large-scale labeled samples. To alle-
viate the reliance on massive data, few-shot action recogni-
tion [88] is a promising direction, aiming to identify novel
classes with extremely limited labeled videos.
Most mainstream few-shot action recognition ap-
proaches [4,21,44,74] adopt the metric-based meta-learning
strategy [61] that learns to map videos into an appropriate
feature space and then performs alignment metrics to pre-
dict query labels. Typically, OTAM [4] leverages a deep
network to extract video features and explicitly estimates
an ordered temporal alignment path to match the frames of
two videos. HyRSM [74] proposes to explore task-specific
semantic correlations across videos and designs a bidirec-
tional Mean Hausdorff Metric (Bi-MHM) to align frames.
Though these works have obtained significant results, there
are still two limitations: first, existing standard metric-based
techniques mainly focus on local frame-level alignment and
are considered limited since the essential global informa-
tion is not explicitly involved. As shown in Figure 1, lo-
cal frame-level metrics can be easily affected by co-existing
similar video frames. We argue that it would be beneficial
to achieve accurate matching if the local frame features can
predict the global context in few-shot classification; sec-
ond, motion dynamics are widely regarded as a vital role
in the field of video understanding [6, 22, 25, 42, 50, 70],
while the existing few-shot methods do not explicitly ex-
plore the rich motion cues between frames for the matching
procedure, resulting in a sub-optimal performance. In the
literature [5, 67], traditional action recognition works intro-
duce motion information by feeding optical flow or frame
difference into an additional deep network, which leads to
non-negligible computational overhead. Therefore, an effi-
cient motion compensation method should be introduced to
achieve comprehensive few-shot matching.
Inspired by the above observations, we develop a
motion-augmented long-short contrastive learning (MoLo)
method to jointly model the global contextual information
and motion dynamics. More specifically, to explicitly in-
tegrate the global context into the local matching process,
we apply a long-short contrastive objective to enforce frame
features to predict the global context of the videos that be-
long to the same class. For motion compensation, we de-
sign a motion autodecoder to explicitly extract motion fea-
tures between frame representations by reconstructing pixel
motions, e.g., frame differences. In this way, our proposed
MoLo enables efficient and comprehensive exploitation of
temporal contextual dependencies and motion cues for ac-
curate few-shot action recognition. Experimental results
on multiple widely-used benchmarks demonstrate that our
MoLo outperforms other advanced few-shot techniques and
achieves state-of-the-art performance.
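The two training signals can be illustrated with the simplified losses below; the per-frame InfoNCE form, the temperature, and the use of raw frame differences as the reconstruction target are assumptions of this sketch and do not reproduce the exact MoLo objectives.

```python
import torch
import torch.nn.functional as F

def long_short_contrastive(frame_feats, global_tokens, labels, tau=0.07):
    """Pull each local frame feature toward the global (long-form) tokens of
    same-class videos, InfoNCE-style.  frame_feats: (N, T, D);
    global_tokens: (N, D); labels: (N,) episode class labels."""
    N, T, D = frame_feats.shape
    f = F.normalize(frame_feats.reshape(N * T, D), dim=-1)
    g = F.normalize(global_tokens, dim=-1)
    logits = f @ g.t() / tau                                   # (N*T, N)
    pos = labels.repeat_interleave(T).unsqueeze(1).eq(labels.unsqueeze(0)).float()
    log_prob = F.log_softmax(logits, dim=1)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def motion_reconstruction_loss(motion_decoder, differential_feats, frames):
    """Motion autodecoder objective: reconstruct pixel motions (here, plain
    frame differences) from differences of adjacent frame features."""
    target = frames[:, 1:] - frames[:, :-1]                    # (N, T-1, C, H, W)
    return F.mse_loss(motion_decoder(differential_feats), target)
```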
In summary, our contributions can be summarized as fol-
lows: (1) We propose a novel MoLo method for few-shot
action recognition, aiming to better leverage the global con-
text and motion dynamics. (2) We further design a long-
short contrastive objective to reinforce local frame features
to perceive comprehensive global information and a motion
autodecoder to explicitly extract motion cues. (3) We con-
duct extensive experiments across five widely-used bench-
marks to validate the effectiveness of the proposed MoLo.
The results demonstrate that MoLo significantly outper-
forms baselines and achieves state-of-the-art performance.
|
Xing_CodeTalker_Speech-Driven_3D_Facial_Animation_With_Discrete_Motion_Prior_CVPR_2023 | Abstract
Speech-driven 3D facial animation has been widely stud-
ied, yet there is still a gap to achieving realism and vividness
due to the highly ill-posed nature and scarcity of audio-
visual data. Existing works typically formulate the cross-
modal mapping into a regression task, which suffers from
the regression-to-mean problem leading to over-smoothed
facial motions. In this paper, we propose to cast speech-
driven facial animation as a code query task in a finite
proxy space of the learned codebook, which effectively pro-
motes the vividness of the generated motions by reducing
the cross-modal mapping uncertainty. The codebook is
learned by self-reconstruction over real facial motions and
thus embedded with realistic facial motion priors. Over the
discrete motion space, a temporal autoregressive model is
employed to sequentially synthesize facial motions from the
input speech signal, which guarantees lip-sync as well as
plausible facial expressions. We demonstrate that our ap-
proach outperforms current state-of-the-art methods both
qualitatively and quantitatively. Also, a user study fur-
ther justifies our superiority in perceptual quality. Code
and video demo are available at https://doubiiu.
github.io/projects/codetalker .
| 1. Introduction
3D facial animation has been an active research topic for
decades, owing to its broad applications in virtual re-
ality, film production, and games. The high correlation be-
tween speech and facial gestures (especially lip movements)
makes it possible to drive the facial animation with a speech
signal. Early attempts are mainly made to build the complex
mapping rules between phonemes and their visual counter-
part, which usually have limited performance [53,63]. With
the advances in deep learning, recent speech-driven facial
animation techniques push forward the state-of-the-art sig-
nificantly. However, it still remains challenging to generate
human-like motions.
*Corresponding Author.
As an ill-posed problem, speech-driven facial animation
generally has multiple plausible outputs for every input.
Such ambiguity tends to cause over-smoothed results. Even
so, person-specific approaches [29, 49] can usually ob-
tain decent facial motions because of the relatively consis-
tent talking style, but have low scalability to general ap-
plications. Recently, VOCA [10] extends these methods
to generalize across different identities, however, they gen-
erally exhibit mild or static upper face expressions. This
is because VOCA formulates the speech-to-motion map-
ping as a regression task, which encourages averaged mo-
tions, especially in the upper face that is only weakly or
even uncorrelated to the speech signal. To reduce the un-
certainty, FaceFormer [16] utilizes long-term audio context
through a transformer-based model and synthesizes the se-
quential motions in an autoregressive manner. Although
it gains important performance promotion, it still inherits
the weakness of one-to-one mapping formulation and suf-
fers from a lack of subtle high-frequency motions. Dif-
ferently, MeshTalk [50] models a categorical latent space
for facial animation that disentangles audio-correlated and
audio-uncorrelated information so that both aspects could
be well handled. However, the employed quantization and
categorical latent space representation are not well-suited
for motion prior learning, rendering the training tricky and
consequently hindering its performance.
We get inspiration from 3D Face Morphable Model
(3DMM) [35], where general facial expressions are rep-
resented in a low-dimensional space. Accordingly, we
propose to formulate speech-driven facial animation as a
code query task in a finite proxy space of the learned dis-
crete codebook prior. The codebook is learned by self-
reconstruction over real facial motions using a vector-
quantized autoencoder (VQ-V AE) [57], which along with
the decoder stores the realistic facial motion priors. In
contrast to the continuous linear space of 3DMM, com-
binations of codebook items form a discrete prior space
with only finite cardinality. Still, in the context of the de-
coder, the code representation possesses high expressive-
ness. Through mapping the speech to the finite proxy space,
the uncertainty of the speech-to-motion mapping is signif-
icantly attenuated, which promotes the quality of mo-
tion synthesis. Conceptually, the proxy space approximates
the facial motion space, where the learned codebook items
serve as discrete motion primitives.
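A minimal VQ-VAE-style quantization layer illustrates how such a discrete motion codebook can be realized; the codebook size, feature dimension, and commitment weight below are assumptions, not CodeTalker's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionCodebook(nn.Module):
    """Map encoded motion features to their nearest codebook items (the
    discrete motion primitives) with a straight-through gradient, as in a
    standard VQ-VAE."""
    def __init__(self, num_codes=256, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z):                      # z: (B, T, dim) motion features
        flat = z.reshape(-1, z.shape[-1])
        d = torch.cdist(flat, self.codebook.weight)            # (B*T, num_codes)
        idx = d.argmin(dim=1)
        z_q = self.codebook(idx).view_as(z)
        # Codebook + commitment losses; straight-through estimator for decoding.
        vq_loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()
        return z_q, idx.view(z.shape[:-1]), vq_loss
```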
Based on the learned discrete codebook, we pro-
pose a code-query-based temporal autoregressive model
for speech-conditioned facial motion synthesis, called
CodeTalker . Specifically, taking a speech signal as input,
our model predicts the motion feature tokens in a temporal
recursive manner. Then, the feature tokens are used to query
the code sequence in the discrete space, followed by facial
motion reconstruction. Thanks to the contextual modeling
over history motions and cross-modal alignment, the pro-
posed CodeTalker shows the advantages of achieving accu-
rate lip motions and natural expressions. Extensive experi-
ments show that the proposed CodeTalker demonstrates su-
perior performance on existing datasets. Systematic studies
and experiments are conducted to demonstrate the merits of
our method over previous works. The contributions of our
work are as follows:
• We model the facial motion space with discrete prim-
itives in a novel way, which offers advantages to pro-
mote motion synthesis realism against cross-modal un-
certainty.
• We propose a discrete motion prior based temporal au-
toregressive model for speech-driven facial animation,
which outperforms existing state-of-the-art methods.
|
Wang_Generalist_Decoupling_Natural_and_Robust_Generalization_CVPR_2023 | Abstract
Deep neural networks obtained by standard training have
been constantly plagued by adversarial examples. Although
adversarial training demonstrates its capability to defend
against adversarial examples, unfortunately, it leads to an
inevitable drop in the natural generalization. To address the
issue, we decouple the natural generalization and the robust
generalization from joint training and formulate different
training strategies for each one. Specifically, instead of min-
imizing a global loss on the expectation over these two gen-
eralization errors, we propose a bi-expert framework called
Generalist where we simultaneously train base learners with
task-aware strategies so that they can specialize in their own
fields. The parameters of base learners are collected and
combined to form a global learner at intervals during the
training process. The global learner is then distributed to the
base learners as initialized parameters for continued train-
ing. Theoretically, we prove that the risks of Generalist will
get lower once the base learners are well trained. Extensive
experiments verify the applicability of Generalist to achieve
high accuracy on natural examples while maintaining con-
siderable robustness to adversarial ones. Code is available
at https://github.com/PKU-ML/Generalist.
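Read literally, one mixing step of this bi-expert scheme can be sketched as follows; the equal mixing weight, the choice to hand identical weights back to both base learners, and the plain state-dict averaging are simplifying assumptions rather than the released implementation.

```python
import copy

def mix_and_redistribute(natural_learner, robust_learner, alpha=0.5):
    """Combine the two base learners into a global learner and redistribute
    its parameters as the new initialization for continued base training."""
    nat, rob = natural_learner.state_dict(), robust_learner.state_dict()
    mixed = {k: (alpha * nat[k].float() + (1 - alpha) * rob[k].float()).to(nat[k].dtype)
             for k in nat}
    natural_learner.load_state_dict(mixed)
    robust_learner.load_state_dict(mixed)
    return copy.deepcopy(natural_learner)   # serves as the global learner
```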
| 1. Introduction
Modern deep learning techniques have achieved remark-
able success in many fields, including computer vision
[14, 16], natural language processing [10, 31], and speech
recognition [28, 36]. Yet, deep neural networks (DNNs)
suffer catastrophic performance degradation under human-
imperceptible adversarial perturbations, where wrong predic-
tions are made with extremely high confidence [13, 29, 34].
The vulnerability of DNNs has led to the proposal of var-
ious defense approaches [3, 24, 25, 33, 40] for protecting
DNNs from adversarial attacks. One of those representa-
*Work was done during an internship at Peking University. Now, he is a
Ph.D. student at the University of Hong Kong.
†Corresponding Author: Yisen Wang (yisen.wang@pku.edu.cn)
(Plot: natural accuracy vs. robust accuracy (AA) for Madry, TRADES, FAT, and Generalist.)
Figure 1. Comparison with other advanced adversarial training
methods. Both clean accuracy and robust accuracy (against Au-
toAttack [9]) are given for a handy reference. It is noted that current
adversarial training methods achieve high clean accuracy by greatly
sacrificing robustness. That means it is hard to obtain sufficient
robustness but maintain high clean accuracy in the joint training
framework. Our Generalist attains excellent clean accuracy while
staying competitively robust. The improvement of Generalist is
notable since we only use the naive cross-entropy loss with negligi-
ble computational overhead and even without increasing the model
size.
tive techniques is adversarial training (AT) [20, 21, 37, 38],
which dynamically injects perturbed examples that deceive
the current model but preserve the right label into the train-
ing set. Adversarial training has been demonstrated to be
the most effective method to improve adversarially robust
generalization [2, 39].
Despite these successes, such attempts at adversarial
training have revealed a tradeoff between natural and robust
accuracy, i.e., there exists an undesirable increase in the er-
ror on unperturbed images when the error on the worst-case
perturbed images decreases, as illustrated in Figure 1. Prior
works [30, 43] even argue that natural and robust accuracy
are fundamentally at odds, which indicates that a robust clas-
sifier can be achieved only when compromising the natural
generalization. However, the following works found that
the tradeoff may be settled in a roundabout way, such as
incorporating additional labeled/unlabeled data [1, 8, 22, 26]
or relaxing the magnitude of perturbations to generate suit-
able adversarial examples for better optimization [18, 44].
These works all focus on the data used for training while we
propose to tackle the tradeoff problem from the perspective
of the |
Wang_BAD-NeRF_Bundle_Adjusted_Deblur_Neural_Radiance_Fields_CVPR_2023 | Abstract
Neural Radiance Fields (NeRF) have received consider-
able attention recently, due to their impressive capability in
photo-realistic 3D reconstruction and novel view synthesis,
given a set of posed camera images. Earlier work usually
assumes the input images are of good quality. However, im-
age degradation (e.g. image motion blur in low-light con-
ditions) can easily happen in real-world scenarios, which
would further affect the rendering quality of NeRF . In this
paper, we present a novel bundle adjusted deblur Neural
Radiance Fields (BAD-NeRF), which can be robust to se-
vere motion blurred images and inaccurate camera poses.
Our approach models the physical image formation process
of a motion blurred image, and jointly learns the parameters
of NeRF and recovers the camera motion trajectories dur-
ing exposure time. In experiments, we show that by directly
modeling the real physical image formation process, BAD-
NeRF achieves superior performance over prior works on
both synthetic and real datasets. Code and data are avail-
able at https://github.com/WU-CVGL/BAD-NeRF .
| 1. Introduction
Acquiring accurate 3D scene geometry and appearance
from a set of 2D images has been a long standing problem
†Corresponding author.in computer vision. As a fundamental block for many vi-
sion applications, such as novel view image synthesis and
robotic navigation, great progress has been made over the
last decades. Classic approaches usually represent the 3D
scene explicitly, in the form of 3D point cloud [8, 52], tri-
angular mesh [4, 5, 10] or volumetric grid [31, 45]. Recent
advancements in implicit 3D representation by using a deep
neural network, such as Neural Radiance Fields (NeRF)
[27], have enabled photo-realistic 3D reconstruction and
novel view image synthesis, given well posed multi-view
images.
NeRF takes a 5D vector (i.e., the spatial location and
viewing direction of the sampled 3D point) as input and
predicts its radiance and volume density via a multilayer
perceptron. The corresponding pixel intensity or depth
can then be computed by differentiable volume rendering
[19, 25]. While many methods have been proposed to fur-
ther improve NeRF’s performance, such as rendering effi-
ciency [11,28], training with inaccurate poses [20] etc., lim-
ited work has been proposed to address the issue of training
with motion blurred images. Motion blur is one of the most
common artifacts that degrades images in practical appli-
cation scenarios. It usually occurs in low-light conditions
where longer exposure times are necessary. Motion blurred
images would bring two main challenges to existing NeRF
training pipeline: a) NeRF usually assumes the rendered
image is sharp (i.e. infinitesimal exposure time), a motion
blurred image thus violates this assumption; b) accurate
camera poses are usually required to train NeRF, however,
it is difficult to obtain accurate poses from blurred images
only, since each of them usually encodes information of the
motion trajectory during exposure time. On the other hand,
it is also challenging to recover accurate poses (e.g.,
via COLMAP [41]) from a set of motion blurred images,
due to the difficulties of detecting and matching salient key-
points. Combining both factors would thus further degrade
NeRF’s performance if it is trained with motion blurred im-
ages.
In order to address those challenges, we propose to in-
tegrate the real physical image formation process of a mo-
tion blurred image into the training of NeRF. We also use
a linear motion model in the SE(3) space to represent the
camera motion trajectory within exposure time. During the
training stage, both the network weights of NeRF and the
camera motion trajectories are estimated jointly. In partic-
ular, we represent the motion trajectory of each image with
both poses at the start and end of the exposure time respec-
tively. The intermediate camera poses within exposure time
can be linearly interpolated in the SE(3) space. This as-
sumption holds in general since the exposure time is typ-
ically small. We can then follow the real physical image
formation model of a motion blurred image to synthesize
the blurry images. In particular, a sequence of sharp im-
ages along the motion trajectory within exposure time can
be rendered from NeRF. The corresponding motion blurred
image can then be synthesized by averaging those virtual
sharp images. Both NeRF and the camera motion trajec-
tories are estimated by minimizing the difference between
the synthesized blurred images and the real blurred images.
We refer to this modified model as BAD-NeRF, i.e. bundle
adjusted deblur NeRF.
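The image formation model can be written compactly; in the NumPy/SciPy sketch below, `render_fn` stands in for the NeRF renderer, the number of virtual frames is arbitrary, and taking the real part of the matrix logarithm is a simplification.

```python
import numpy as np
from scipy.linalg import expm, logm

def interpolate_se3(T_start, T_end, t):
    # Linear trajectory in SE(3): T(t) = T_start · exp(t · log(T_start^-1 · T_end)).
    rel = np.real(logm(np.linalg.inv(T_start) @ T_end))
    return T_start @ expm(t * rel)

def synthesize_blurred_image(render_fn, T_start, T_end, num_virtual=7):
    """Average several virtual sharp renders along the interpolated camera
    trajectory within the exposure time; minimizing the difference between
    this synthesis and the captured blurry image drives both the NeRF
    weights and the per-frame (T_start, T_end) poses."""
    ts = np.linspace(0.0, 1.0, num_virtual)
    renders = [render_fn(interpolate_se3(T_start, T_end, t)) for t in ts]
    return np.mean(np.stack(renders), axis=0)
```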
We evaluate BAD-NeRF with both synthetic and real
datasets. The experimental results demonstrate that BAD-
NeRF achieves superior performance compared to prior
state of the art works (e.g. as shown in Fig. 1), by explicitly
modeling the image formation process of the motion blurred
image. In summary, our contributions are as follows:
• We present a photo-metric bundle adjustment formula-
tion for motion blurred images under the framework of
NeRF, which can be potentially integrated with other
vision pipelines (e.g. a motion blur aware camera pose
tracker [21]) in future.
• We show how this formulation can be used to acquire
high quality 3D scene representation from a set of mo-
tion blurred images.
• We experimentally validate that our approach is able
to deblur severe motion blurred images and synthesize
high quality novel view images. |
Wang_Deep_Factorized_Metric_Learning_CVPR_2023 | Abstract
Learning a generalizable and comprehensive similarity
metric to depict the semantic discrepancies between images
is the foundation of many computer vision tasks. While ex-
isting methods approach this goal by learning an ensemble
of embeddings with diverse objectives, the backbone net-
work still receives a mix of all the training signals. Differ-
ently, we propose a deep factorized metric learning (DFML)
method to factorize the training signal and employ different
samples to train various components of the backbone net-
work. We factorize the network into different sub-blocks and
devise a learnable router to adaptively allocate the training
samples to each sub-block with the objective to capture the
most information. The metric model trained by DFML cap-
tures different characteristics with different sub-blocks and
constitutes a generalizable metric when using all the sub-
blocks. The proposed DFML achieves state-of-the-art per-
formance on all three benchmarks for deep metric learn-
ing including CUB-200-2011, Cars196, and Stanford On-
line Products. We also generalize DFML to the image clas-
sification task on ImageNet-1K and observe consistent im-
provement in accuracy/computation trade-off. Specifically,
we improve the performance of ViT-B on ImageNet (+0.2%
accuracy) with less computation load (-24% FLOPs).1
| 1. Introduction
Learning good representations for images has always
been the core of computer vision, yet measuring the sim-
ilarity between representations after obtaining them is an
equally important problem. Focusing on this, metric learn-
ing aims to learn a discriminative similarity metric un-
der which the interclass distances are large and the intra-
class distances are small. Using a properly learned simi-
larity metric can improve the performance of downstream
tasks and has been employed in many applications such
*Equal contribution.
†Corresponding author.
1Code is available at: https://github.com/wangck20/DFML .
(Diagram: an ensemble-based DML pipeline with a shared backbone and diverse objectives vs. DFML with factorized metrics.)
Figure 1. Comparisons between ensemble-based deep metric
learning methods and DFML. Ensemble-based DML learns an
ensemble of embeddings where diverse objectives are employed.
Differently, DFML factorizes the backbone and learns a certain
routine for each sample to achieve the diversity of features, which
further boosts the generalization ability of the model on unseen
classes. (Best viewed in color.)
as semantic instance segmentation [7, 21, 37], remote sens-
ing [5, 10, 31], and room layout estimation [77].
Modern metric learning methods [44, 55, 56, 78] usually
exploit deep neural networks to map an image to a single
embedding and use the Euclidean distance or cosine simi-
larity between embeddings to measure the similarity. As a
single embedding might not be able to fully characterize an
image, a number of methods [1, 43, 47, 49, 72, 79, 80] begin
to explore using an ensemble of embeddings to represent
an image, where each embedding describes one attribute
of the image. The key to ensemble-based methods lies in
how to enforce diversity in the ensemble of embeddings so
that they can capture more characteristics. They achieve
this by using a diversity loss [47, 49], selecting different
samples [53, 72, 80], and adopting various tasks [43, 79],
etc. Most existing methods adopt a shared backbone net-
work to extract a common feature and only apply a single
fully connected layer to obtain each specialized embedding.
However, the shared backbone limits the diversity of the en-
semble and hinders its ability to capture more generalizable
features. It still receives a mix of all the training signals and
can hardly produce diverse embeddings.
To address this, we propose a deep factorized metric
learning (DFML) method to adaptively factorize the train-
ing signals to learn more generalizable features, as shown
in Figure 1. We first factorize each block of the metric back-
bone model into a number of sub-blocks, where we make
the summed features of all the sub-blocks equal to
that of the full block. As different samples may possess
distinct characteristics [80], we devise a learnable router to
adaptively allocate the training samples to the correspond-
ing sub-blocks. We learn the router using a reconstruction
objective to encourage each sample to be processed by the
most consistent sub-block. We demonstrate the proposed
DFML framework is compatible with existing deep met-
ric learning methods with various loss functions and sam-
pling strategies and can be readily applied to them. Due to
the better modularity of vision transformers (ViTs) [15,61],
we mainly focus on factorizing ViTs and further bench-
mark various existing deep metric learning methods on
ViTs. Extensive experiments on the widely used CUB-200-
2011 [63], Cars196 [35], and Stanford Online Products [56]
datasets show consistent improvements of DFML over ex-
isting methods. We also provide an in-depth analysis of
the proposed DFML framework to verify its effectiveness.
Specifically, we show that backbone models trained by our
DFML achieve better accuracy/computation trade-off than
the original model on ImageNet-1K [52] and even improve
the performance of ViT-B (+0.2% accuracy) with less com-
putation load (-24% FLOPs).
|
Wang_FrustumFormer_Adaptive_Instance-Aware_Resampling_for_Multi-View_3D_Detection_CVPR_2023 | Abstract
The transformation of features from 2D perspective
space to 3D space is essential to multi-view 3D object de-
tection. Recent approaches mainly focus on the design of
view transformation, either pixel-wisely lifting perspective
view features into 3D space with estimated depth or grid-
wisely constructing BEV features via 3D projection, treat-
ing all pixels or grids equally. However, choosing what to
transform is also important but has rarely been discussed
before. The pixels of a moving car are more informative
than the pixels of the sky. To fully utilize the informa-
tion contained in images, the view transformation should
be able to adapt to different image regions according to
their contents. In this paper, we propose a novel framework
named FrustumFormer , which pays more attention to the
features in instance regions via adaptive instance-aware re-
sampling. Specifically, the model obtains instance frustums
on the bird’s eye view by leveraging image view object pro-
posals. An adaptive occupancy mask within the instance
frustum is learned to refine the instance location. More-
over, the temporal frustum intersection could further reduce
the localization uncertainty of objects. Comprehensive ex-
periments on the nuScenes dataset demonstrate the effec-
tiveness of FrustumFormer, and we achieve a new state-of-
the-art performance on the benchmark. Codes and mod-
els will be made available at https://github.com/
Robertwyq/Frustum .
| 1. Introduction
Perception in 3D space has gained increasing attention in
both academia and industry. Despite the success of LiDAR-
based methods [14, 33, 41, 44], camera-based 3D object de-
tection [19, 35, 36, 43] has earned a wide audience, due to
the low cost for deployment and advantages for long-range detection. Recently, multi-view 3D detection in Bird's-Eye-
View (BEV) has made fast progress. Due to the unified
representation in 3D space, multi-view features and tem-
poral information can be fused conveniently, which leads
to significant performance improvement over monocular
methods [5, 28, 35, 39].
Transforming perspective view features into the bird’s-
eye view is the key to the success of modern BEV 3D de-
tectors [12,18,19,22]. As shown in Fig. 1, we categorize the
existing methods into lifting-based ones like LSS [30] and
BEVDet [12] and query-based ones like BEVFormer [19]
and Ego3RT [25]. However, these methods mainly focus
on the design of view transformation strategies while over-
looking the significance of choosing the right features to
transform during view transformation. Regions containing
objects like vehicles and pedestrians are apparently more in-
formative than the empty background like sky and ground.
But all previous methods treat them with equal importance.
We suggest that the view transformation should be adaptive
with respect to the image content. Therefore, we propose
Adaptive Instance-aware Resampling (AIR) , an instance-
aware view transformation, as shown in Fig. 1c. The core
idea of AIR is to reduce instance localization uncertainty by
focusing on a selective part of BEV queries. Localizing in-
stance regions is difficult directly on the BEV plane but rel-
atively easy in the image view. Therefore, the instance frus-
tum, lifted from instance proposals in image views, gives
geometrical hints of the possible locations of objects in the
3D space. Though the instance frustum has provided initial
prior locations, it is still a large uncertain area. We propose
an occupancy mask predictor and a temporal frustum fusion
module to further reduce the localization uncertainty. Our
model learns an occupancy mask for frustum queries on the
BEV plane, predicting the possibility that a region might
contain objects. We also fuse instance frustums across dif-
ferent time steps, where the intersection area poses geomet-
(a) Grid Sampling in Image.
(b) Grid Sampling in BEV.
(c) Instance-aware Sampling in Frustum.
Figure 1. Comparison of different sampling strategies for the feature transformation from image view to bird’s eye view. (a)
represents the sampling in the image view and lifts features [12] to the BEV plane with pixel-wise depth estimation. (b) shows the grid sampling
in BEV and queries back [19] to obtain image features via cross-attention. (c) illustrates our proposed instance-aware sampling strategy in
the frustum, which adapts to the view content by focusing more attention on instance regions. This approach is designed to enhance the
learning of instance-aware BEV features.
ric constraints for actual locations of objects.
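The geometric step of lifting a 2D proposal into a frustum-shaped set of BEV cells can be sketched as follows; the BEV grid layout, depth range, and sampling densities are illustrative, and the learned occupancy mask and the temporal intersection module are not reproduced here.

```python
import numpy as np

def instance_frustum_bev_mask(box2d, K, cam_to_ego, bev_range=51.2, bev_size=128,
                              depth_range=(1.0, 60.0), num_depth=64, num_edge=16):
    """Mark the BEV cells covered by the frustum lifted from a 2D box proposal.

    box2d = (u1, v1, u2, v2) in pixels; K is the 3x3 camera intrinsics;
    cam_to_ego is a 4x4 camera-to-ego transform.  Intersecting such masks
    across frames (after ego-motion compensation) further narrows the
    possible object locations.
    """
    u1, v1, u2, v2 = box2d
    uu, vv = np.meshgrid(np.linspace(u1, u2, num_edge),
                         np.linspace(v1, v2, num_edge))
    pix = np.stack([uu.ravel(), vv.ravel(), np.ones(uu.size)], axis=0)   # (3, P)
    rays = np.linalg.inv(K) @ pix                                        # camera rays
    depths = np.linspace(depth_range[0], depth_range[1], num_depth)
    pts_cam = (rays[:, None, :] * depths[None, :, None]).reshape(3, -1)  # (3, P*D)
    pts_cam_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_ego = (cam_to_ego @ pts_cam_h)[:2]                               # x, y in ego frame

    mask = np.zeros((bev_size, bev_size), dtype=bool)
    cell = (2 * bev_range) / bev_size
    ij = np.floor((pts_ego + bev_range) / cell).astype(int)
    valid = (ij >= 0).all(axis=0) & (ij < bev_size).all(axis=0)
    mask[ij[1, valid], ij[0, valid]] = True
    return mask
```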
Based on the insights above, we propose FrustumFormer, a novel framework that effectively enhances the learning of instance-aware BEV features via Adaptive Instance-aware Resampling. FrustumFormer
utilizes the instance frustum to establish the connection between perspective-view and bird's-eye-view regions, and con-
tains two key designs: (1) A frustum encoder that enhances
instance-aware features via adaptive instance-aware resam-
pling. (2) A temporal frustum fusion module that aggre-
gates historical instance frustum features for accurate local-
ization and velocity prediction. In conclusion, the contribu-
tions of this work are as follows:
• We propose FrustumFormer , a novel framework that
exploits the geometric constraints between the perspective view and the bird's-eye view via the instance frustum.
• We argue that choosing what to transform is also im-
portant during view transformation. The view transfor-
mation should adapt to the view content. Instance re-
gions should gain more attention rather than be treated
equally. Therefore, we design Adaptive Instance-
aware Resampling (AIR) to focus more on the instance
regions, leveraging sparse instance queries to enhance
the learning of instance-aware BEV features.
• We evaluate the proposed FrustumFormer on the
nuScenes dataset. We achieve improved performance compared to prior art. FrustumFormer achieves 58.9 NDS and 51.6 mAP on the nuScenes test set without bells and whistles. |
Wang_Generalized_UAV_Object_Detection_via_Frequency_Domain_Disentanglement_CVPR_2023 | Abstract
When deploying an Unmanned Aerial Vehicle object detection (UAV-OD) network to complex and unseen real-world scenarios, its generalization ability is usually reduced due to domain shift. To address this issue, this
paper proposes a novel frequency domain disentanglement
method to improve the UAV-OD generalization. Specifi-
cally, we first verify that different spectral bands of the image have different effects on UAV-OD general-
ization. Based on this conclusion, we design two learnable
filters to extract domain-invariant spectrum and domain-
specific spectrum, respectively. The former can be used
to train the UAV-OD network and improve its capacity for
generalization. In addition, we design a new instance-level
contrastive loss to guide the network training. This loss
enables the network to concentrate on extracting domain-
invariant spectrum and domain-specific spectrum, so as
to achieve better disentangling results. Experimental re-
sults on three unseen target domains demonstrate that our
method has better generalization ability than both the base-
line method and state-of-the-art methods.
| 1. Introduction
Unmanned Aerial Vehicles (UAV) equipped with cameras have been exploited in a wide variety of applications, opening up a new frontier for computer vision [7, 11, 22, 28]. As one of the fundamental functions of UAV-based applications, UAV object detection (UAV-OD) has garnered considerable interest [23, 31, 38]. However,
the large mobility of UAV-mounted cameras leads to an unpredictable operating environment. The domain shift that occurs when a UAV-OD network trained on a given dataset (i.e., source domain) is applied to unseen real-world data (i.e., target domain) typically results in in-
adequate performance.
Figure 1. Detection results on unseen target domains; (a) baseline, (b) ours. The UAV-OD network is trained on daylight images and tested on images with various scene structures (1st row), diverse illumination conditions (2nd row), and adverse weather conditions (3rd row). Green rectangular boxes denote new correct detections beyond the baseline.
In particular, unseen real-world data
consists of unexpected and unknown samples, such as im-
ages taken in various scene structures, diverse illumination
conditions, and adverse weather conditions. Therefore, it is
crucial to improve the generalization ability of UAV-OD.
To alleviate the domain shift impact, existing methods
broadly come in two flavors: Domain Adaptation (DA)
[3, 5, 8, 16, 17, 26, 37] and Domain Generalization (DG)
[19,20,27,30,40]. In general, DA aims to tackle the domain
shift problem by learning domain-invariant/aligned features
between the source and target domains. However, DA meth-
ods cannot be readily employed when it is hard to guarantee
the accessibility of the target data. The requirement to ac-
cess both source and target data restricts the applicability of
DA approaches.
Recently, considerable attention has been drawn to the
field of DG. The goal of DG is to learn a model using
data from a single or multiple related but distinct source
domains so that the model can generalize well under distri-
bution shifts [43].

Reject band       | Various Scene    | Diverse Illumination | Adverse Weather  | Average
                  | AP50  AP75  AP   | AP50  AP75  AP       | AP50  AP75  AP   | AP50  AP75  AP
Null (full band)  | 66.0  37.6  36.7 | 11.1   3.4   4.8     | 42.3  14.9  19.6 | 39.8  18.6  20.4
α=0, β=0.01       | 60.0  30.2  32.6 |  6.4   1.9   2.75    | 39.5  16.1  19.0 | 35.3  16.1  18.1
α=0.01, β=0.1     | 61.4  30.3  32.8 | 39.1  15.9  19.8     | 42.6  18.6  20.8 | 47.7  21.6  24.5
α=0.1, β=1        | 70.2  35.1  37.1 | 29.4  10.2  13.6     | 38.2  10.6  16.7 | 45.9  18.6  22.5

Table 1. We conduct preliminary experiments to explore whether different spectral bands contribute equally to the UAV-OD network's generalization ability. The specified bands of the source domain images are filtered out during training according to the reject band. For testing, the generalization performance of the UAV-OD network is evaluated on three unseen target domains under the AP50, AP75, and AP protocols. "Average" refers to the average generalization performance across the three unseen target domains. Eliminating different bands has distinct effects on the generalization of the UAV-OD network to unseen target domains.

Most existing DG methods [19, 30, 40]
focus on decoupling object-related features from global
features via spatial vanilla convolution. However, unlike
generic object detection scenarios based on surveillance or
other ground-based cameras, the rapid movement of UA V-
mounted cameras leads to severe changes in the global ap-
pearance. For UA V-OD scenarios where the global appear-
ance changes, it is essential to explore global dependency
for better disentanglement. The spatial vanilla convolution,
which only emphasizes local pixel attention, cannot fully
explore global dependency, leading to suboptimal disentan-
glement and generalization results.
Inspired by the spectral theorem, under which the frequency domain naturally captures global structure, we propose to improve the UAV-OD generalization ability via frequency domain disentanglement. We first conduct preliminary experiments, i.e., exploring whether all spectral bands contribute equally to generalization in the UAV-OD task, to gain insight into how to implement this idea. If not, we can extract the spectrum that is conducive to generalization and use it to train the UAV-OD network to enhance its general-
ization. Specifically, we first convert each source domain image $x \in \mathbb{R}^{H \times W \times C}$ into frequency space through the Fast Fourier Transform (FFT) [24]:
$$F(x)(u, v) = \sum_{h=0}^{H-1} \sum_{w=0}^{W-1} x(h, w)\, e^{-j 2\pi \left( \frac{h}{H} u + \frac{w}{W} v \right)}. \tag{1}$$
The frequency space signal $F(x)$ can be further decomposed into an amplitude spectrum $A(x)$ and a phase spectrum $P(x)$, expressed as:
$$A(x)(u, v) = \left[ R^2(x)(u, v) + I^2(x)(u, v) \right]^{1/2}, \qquad P(x)(u, v) = \arctan\!\left( \frac{I(x)(u, v)}{R(x)(u, v)} \right), \tag{2}$$
where $R(x)$ and $I(x)$ represent the real and imaginary parts of $F(x)$. For each source image, we filter out the bands of the amplitude spectrum $A(x)$ between a certain upper threshold $\alpha$ and lower threshold $\beta$ ('Reject band' in Tab. 1) with a band reject filter $f_s \in \mathbb{R}^{H \times W \times C}$ and obtain the remaining amplitude spectrum $\hat{A}(x)$:
$$f_s(i, j) = \begin{cases} 1, & i \in \left[ \tfrac{\alpha H}{2}, \tfrac{\beta H}{2} \right] \cup \left[ \tfrac{(1-\beta) H}{2}, \tfrac{(1-\alpha) H}{2} \right] \text{ and } j \in \left[ \tfrac{\alpha W}{2}, \tfrac{\beta W}{2} \right] \cup \left[ \tfrac{(1-\beta) W}{2}, \tfrac{(1-\alpha) W}{2} \right], \\ 0, & \text{otherwise,} \end{cases} \tag{3}$$
$$\hat{A}(x) = A(x) \otimes f_s, \tag{4}$$
where $\otimes$ denotes element-wise multiplication. $\hat{A}(x)$ is then fed to the Inverse Fast Fourier Transform (IFFT) together with $P(x)$ to generate the remaining image $\hat{x}$, which is utilized to train the UAV-OD network. After training, we apply the UAV-OD network to three unseen target domains to evaluate its generalization ability. The experimental results are presented in Tab. 1. We observe that removing different bands has varying effects on generalization to the three unseen target domains. Therefore, we conclude that different bands contribute differently to UAV-OD generalization.
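As a concrete illustration of the band-filtering procedure in Eqs. (1)-(4), the following NumPy sketch performs the FFT, splits amplitude and phase, removes a marked frequency band from the amplitude spectrum, and reconstructs the image via the IFFT. The band layout follows Eq. (3) as reproduced above, and zeroing out the marked band (rather than keeping it) follows the textual description of a band-reject filter; the random stand-in image and all other details are assumptions, not the authors' implementation.

```python
# Minimal NumPy sketch of the preliminary band-filtering experiment around
# Eqs. (1)-(4). One plausible reading: Eq. (3) marks a frequency band, and the
# band-reject step removes that band from the amplitude spectrum.
import numpy as np

def band_mask(h, w, alpha, beta):
    """Binary mask marking the [alpha, beta] band of Eq. (3) on an h x w spectrum."""
    i = np.arange(h)[:, None].astype(np.float32)
    j = np.arange(w)[None, :].astype(np.float32)
    in_i = ((i >= alpha * h / 2) & (i <= beta * h / 2)) | \
           ((i >= (1 - beta) * h / 2) & (i <= (1 - alpha) * h / 2))
    in_j = ((j >= alpha * w / 2) & (j <= beta * w / 2)) | \
           ((j >= (1 - beta) * w / 2) & (j <= (1 - alpha) * w / 2))
    return (in_i & in_j).astype(np.float32)

def reject_band(image, alpha, beta):
    """Filter out the marked band from the amplitude spectrum of each channel."""
    h, w, c = image.shape
    keep = 1.0 - band_mask(h, w, alpha, beta)        # complement = band-reject filter
    out = np.empty_like(image, dtype=np.float32)
    for ch in range(c):
        F = np.fft.fft2(image[..., ch])              # Eq. (1)
        A, P = np.abs(F), np.angle(F)                # Eq. (2): amplitude and phase
        A_hat = A * keep                             # Eqs. (3)-(4): remaining amplitude
        out[..., ch] = np.real(np.fft.ifft2(A_hat * np.exp(1j * P)))  # IFFT
    return out

# Example corresponding to one row of Tab. 1 (alpha = 0.01, beta = 0.1).
img = np.random.rand(128, 128, 3).astype(np.float32)  # stand-in for a source image
filtered = reject_band(img, alpha=0.01, beta=0.1)
```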
Based on the above observation, we employ two learn-
able filters to identify and extract the domain-invariant and
domain-specific spectrums. The former contributes positively to generalization, while the latter hinders it. Furthermore, we design a new instance-level contrastive loss to aid in learning the filters, enabling them to concentrate on disentangling the two spectrums. By optimizing this loss, the instance features of the two branches are encouraged to capture the domain-invariant characteristics shared by target objects and the domain-specific characteristics shared within the source domain, respectively. In this way, the UAV-OD network can generalize well on unseen target domains. For the experimental setting, we focus on learning a single-domain generalized UAV-OD network, which is more challenging [30]. We fur-
ther validate the network on three unseen target domains, in-
cluding various scene structures, diverse illumination con-
ditions, and adverse weather conditions, demonstrating su-
perior generalization ability, as shown in Fig. 1.
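The text does not spell out how the two learnable filters or the instance-level contrastive loss are parameterized, so the following is only a hypothetical sketch of how such components are commonly built: sigmoid-gated masks applied to the amplitude spectrum, plus an InfoNCE-style loss over pooled instance features. The module names, shapes, and temperature are assumptions, not details from the paper.

```python
# Hypothetical parameterization of two learnable spectral filters and an
# InfoNCE-style instance-level contrastive loss. Shapes, the sigmoid gating,
# and the temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralFilters(nn.Module):
    def __init__(self, h, w):
        super().__init__()
        # One learnable gate per spectral location; sigmoid keeps values in (0, 1).
        self.invariant_logits = nn.Parameter(torch.zeros(h, w))
        self.specific_logits = nn.Parameter(torch.zeros(h, w))

    def forward(self, x):                              # x: (B, C, H, W) image batch
        spec = torch.fft.fft2(x)
        amp, phase = spec.abs(), spec.angle()
        def recompose(gate):
            a = amp * torch.sigmoid(gate)              # gated amplitude spectrum
            return torch.fft.ifft2(torch.polar(a, phase)).real
        return recompose(self.invariant_logits), recompose(self.specific_logits)

def instance_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE over pooled instance features: anchor (D,), positives/negatives (N, D)."""
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(positives, dim=-1)
    neg = F.normalize(negatives, dim=-1)
    pos_logits = pos @ anchor / tau                    # similarity to positive instances
    neg_logits = neg @ anchor / tau                    # similarity to negative instances
    logits = torch.cat([pos_logits, neg_logits])
    labels = torch.zeros_like(logits)
    labels[: pos_logits.numel()] = 1.0 / pos_logits.numel()   # soft multi-positive targets
    return -(labels * F.log_softmax(logits, dim=0)).sum()
```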
Our main contributions are highlighted as follows:
• We provide a new perspective to improve the general-
ization ability of the UAV-OD network on unseen target domains. To the best of our knowledge, this is the first attempt to learn generalized UAV-OD via frequency
domain disentanglement.
• Based on the frequency domain disentanglement, we
propose a new framework that utilizes two learnable
filters to extract the domain-invariant and domain-specific spectrums, and design an instance-level contrastive loss to guide the disentangling process.
• Extensive experiments on three unseen target domains
reveal that our method enables the UAV-OD network to
achieve superior generalization performance in com-
parison to the baseline and state-of-the-art methods.
|