title (stringlengths 28–135) | abstract (stringlengths 0–12k) | introduction (stringlengths 0–12k) |
---|---|---|
Li_SGLoc_Scene_Geometry_Encoding_for_Outdoor_LiDAR_Localization_CVPR_2023 | Abstract
LiDAR-based absolute pose regression estimates the
global pose through a deep network in an end-to-end manner,
achieving impressive results in learning-based localization.
However, the accuracy of existing methods still has room
to improve due to the difficulty of effectively encoding the
scene geometry and the unsatisfactory quality of the data.
In this work, we propose a novel LiDAR localization frame-
work, SGLoc, which decouples the pose estimation into point
cloud correspondence regression and pose estimation via
this correspondence. This decoupling effectively encodes
the scene geometry because the decoupled correspondence
regression step greatly preserves the scene geometry, lead-
ing to significant performance improvement. Apart from this
decoupling, we also design a tri-scale spatial feature aggre-
gation module and inter-geometric consistency constraint
loss to effectively capture scene geometry. Moreover, we
empirically find that the ground truth might be noisy due
to GPS/INS measuring errors, greatly reducing the pose
estimation performance. Thus, we propose a pose quality
evaluation and enhancement method to measure and cor-
rect the ground truth pose. Extensive experiments on the
Oxford Radar RobotCar and NCLT datasets demonstrate the
effectiveness of SGLoc, which outperforms state-of-the-art
regression-based localization methods by 68.5% and 67.6%
on position accuracy, respectively.
| 1. Introduction
Estimating the position and orientation of LiDAR from
point clouds is a fundamental component of many applica-
tions in computer vision, e.g., autonomous driving, virtual
reality, and augmented reality.
Figure 1. LiDAR Localization results of our method and PosePN++ [51] (state-of-the-art method) in urban (left) and school (right) scenes from Oxford Radar RobotCar [2] and NCLT [34] datasets. The star indicates the starting position.

Contemporary state-of-the-art LiDAR-based localization methods explicitly use maps, which match the query point cloud with a pre-built 3D map [18, 23, 27, 49]. However,
these methods usually require expensive 3D map storage
and communication. One alternative is the regression-
based approach, absolute pose regression (APR), which di-
rectly estimates the poses in the inference stage without
maps [8, 24,25,40,45]. APR methods typically use a CNN
to encode the scene feature and a multi-layer perceptron to
regress the pose. Compared to map-based methods, APR
does not need to store the pre-built maps, accordingly reduc-
ing communications. However, the accuracy of APR is still limited by (1) the difficulty of effectively encoding the scene geometry and (2) the unsatisfactory quality of the data.
For (1), APR networks learn highly abstract global scene
representations, which allow the network to classify the
scene effectively [25]. However, the global features usually
cannot encode detailed scene geometry, which is the key
to achieving an accurate pose estimation [10, 11,38,39].
Prior efforts have tried to minimize the relative pose or
photometric errors to add geometry constraints by pose
graph optimization (PGO) [4, 21] or novel view synthesis
(NVS) [10, 11]. However, this introduces additional computations, limiting their wide application. For (2), we empirically
find current large-scale outdoor datasets suffer from various
errors in the data due to GPS/INS measuring errors. It affects
the APR learning process and makes it difficult to evaluate
the localization results accurately. To our knowledge, the
impact of data quality on localization has not been carefully
investigated in the existing literature.
This paper proposes a novel framework, SGLoc, which
can (1) effectively capture the scene geometry. In addition,
we propose a data pre-processing method, Pose Quality Eval-
uation and Enhancement (PQEE), which can (2) improve
data quality. (1) Existing APR methods conduct end-to-
end regression from the point cloud in LiDAR coordinate
to pose. Unlike them, SGLoc decouples this process to
(a) regression from the point cloud in LiDAR coordinate
to world coordinate and (b) pose estimation via the point
cloud correspondence in LiDAR and world coordinate us-
ing RANSAC [17]. Importantly, step (a) can effectively
preserve the scene geometry, which is key for pose estima-
tion [10, 11,38,39]. To achieve high accuracy in step (a), we
design a Tri-scale Spatial Feature Aggregation (TSFA) mod-
ule and an Inter-Geometric Consistency Constraint (IGCC)
loss to effectively capture scene geometry. (2) We empir-
ically find that pose errors in the data greatly degrade the
pose estimation performance. For example, the ground truth
pose obtained by GPS/INS suffers from measuring errors.
To address this problem, we propose a PQEE method which
can measure the errors in the pose and correct them after-
ward. We conduct extensive experiments on Oxford Radar
RobotCar [2] and NCLT [34] datasets, and results show that
our method has great advantages over the state-of-the-art, as
demonstrated in Fig. 1.
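To make the decoupling concrete: once the network has regressed, for each LiDAR point, its coordinates in the world frame, step (b) reduces to rigid registration between the two point sets. The sketch below pairs RANSAC [17] with a closed-form Kabsch/SVD solver; the paper only states that RANSAC is used, so the minimal-sample size, threshold, and solver choice here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

def ransac_pose(pts_lidar, pts_world, iters=256, inlier_thresh=0.5, seed=0):
    """Estimate the LiDAR-to-world pose from predicted 3D-3D correspondences."""
    rng = np.random.default_rng(seed)
    best = (np.eye(3), np.zeros(3), 0)
    for _ in range(iters):
        idx = rng.choice(len(pts_lidar), size=3, replace=False)
        R, t = kabsch(pts_lidar[idx], pts_world[idx])
        err = np.linalg.norm(pts_lidar @ R.T + t - pts_world, axis=1)
        inliers = err < inlier_thresh
        if inliers.sum() >= 3 and inliers.sum() > best[2]:
            R, t = kabsch(pts_lidar[inliers], pts_world[inliers])   # refit on inliers
            best = (R, t, int(inliers.sum()))
    return best[0], best[1]
```

Because the pose is recovered geometrically from the regressed correspondences rather than regressed directly, individual erroneous points can be rejected as outliers in this step.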
Our contributions can be summarized as follows:
• SGLoc is the first work to decouple LiDAR localization
into point cloud correspondences regression and pose
estimation via predicted correspondences, which can ef-
fectively capture scene geometry, leading to significant
performance improvement.
• We propose a novel Tri-Scale Spatial Feature Aggre-
gation (TSFA) module and an Inter-Geometric Consis-
tency Constraint (IGCC) loss to further improve the
encoding of scene geometry.
• We propose a generalized pose quality evaluation and
enhancement (PQEE) method to measure and correct
the pose errors in the localization data, improving
34.2%/16.8% on position and orientation for existing
LiDAR localization methods.
• Extensive experiments demonstrate the effectiveness
of SGLoc, which outperforms state-of-the-art LiDAR
localization methods by 68.1% on position accuracy. In
addition, to our knowledge, we are the first to reduce the
error to the level of the sub-meter on some trajectories.
|
Li_One-to-Few_Label_Assignment_for_End-to-End_Dense_Detection_CVPR_2023 | Abstract
One-to-one (o2o) label assignment plays a key role for
transformer based end-to-end detection, and it has been re-
cently introduced in fully convolutional detectors for end-
to-end dense detection. However, o2o can degrade the fea-
ture learning efficiency due to the limited number of posi-
tive samples. Though extra positive samples are introduced
to mitigate this issue in recent DETRs, the computation of
self- and cross- attentions in the decoder limits its practi-
cal application to dense and fully convolutional detectors.
In this work, we propose a simple yet effective one-to-few
(o2f) label assignment strategy for end-to-end dense de-
tection. Apart from defining one positive and many neg-
ative anchors for each object, we define several soft an-
chors, which serve as positive and negative samples simul-
taneously. The positive and negative weights of these soft
anchors are dynamically adjusted during training so that
they can contribute more to “representation learning” in
the early training stage, and contribute more to “duplicated
prediction removal” in the later stage. The detector trained
in this way can not only learn a strong feature representa-
tion but also perform end-to-end dense detection. Exper-
iments on COCO and CrowdHuman datasets demonstrate
the effectiveness of the o2f scheme. Code is available at
https://github.com/strongwolf/o2f .
| 1. Introduction
Object detection [31, 36, 38, 48] is a fundamental com-
puter vision task, aiming to localize and recognize the ob-
jects of predefined categories in an image. Owing to the
rapid development of deep neural networks (DNN) [14–16,
41,44–46], the detection performance has been significantly
improved in the past decade. During the evolution of object
detectors, one important trend is to remove the hand-crafted
components to achieve end-to-end detection.
Figure 1. The positive and negative weights of different anchors (A, B, C and D) in the classification loss l = −t × log p − (1 − t) × log(1 − p) during early and later training stages, under the o2o, o2f and o2m schemes. Each anchor has a positive loss weight t (in orange color) and a negative loss weight 1 − t (in blue color). In our method, A is a fully positive anchor, D is a fully negative anchor, and B and C are ambiguous anchors. One can see that for o2o and o2m label assignment schemes, the weights for all anchors are fixed during the training process, while for our o2f scheme, the weights for ambiguous anchors are dynamically adjusted.

One hand-crafted component in object detection is the design of training samples. For decades, anchor boxes have
been dominantly used in modern object detectors such as
Faster RCNN [38], SSD [31] and RetinaNet [28]. How-
ever, the performance of anchor-based detectors is sensitive
to the shape and size of anchor boxes. To mitigate this is-
sue, anchor-free [19,48] and query-based [5,8,34,61] detec-
tors have been proposed to replace anchor boxes by anchor
points and learnable positional queries, respectively.
Another hand-crafted component is non-maximum sup-
pression (NMS) to remove duplicated predictions. The ne-
cessity of NMS comes from the one-to-many (o2m) label
assignment [4,13,18,25,26,58], which assigns multiple pos-
itive samples to each GT object during the training process.
This can result in duplicated predictions in inference and
impede the detection performance. Since NMS has hyper-
parameters to tune and introduces additional cost, NMS-
free end-to-end object detection is highly desired.
With a transformer architecture, DETR [5] achieves
competitive end-to-end detection performance. Subsequent
studies [43, 51] find that the one-to-one (o2o) label as-
signment in DETR plays a key role for its success. Con-
sequently, the o2o strategy has been introduced in fully
convolutional network (FCN) based dense detectors for
lightweight end-to-end detection. However, o2o can im-
pede the training efficiency due to the limited number of
positive samples. This issue becomes severe in dense de-
tectors, which usually have more than 10k anchors in an im-
age. What’s more, two semantically similar anchors can be
adversely defined as positive and negative anchors, respec-
tively. Such a ‘label conflicts’ problem further decreases
the discrimination of feature representation. As a result, the
performance of end-to-end dense detectors still lags behind
the ones with NMS. Recent studies [7, 17, 22] on DETR try
to overcome this shortcoming of o2o scheme by introducing
independent query groups to increase the number of positive
samples. The independency between different query groups
is ensured by the self-attention computed in the decoder,
which is however infeasible for FCN-based detectors.
In this paper, we aim to develop an efficient FCN-based
dense detector, which is NMS-free yet end-to-end trainable.
We observe that it is inappropriate to set the ambiguous an-
chors that are semantically similar to the positive sample as
fully negative ones in o2o. Instead, they can be used to com-
pute both positive and negative losses during training, with-
out influencing the end-to-end capacity if the loss weights
are carefully designed. Based on the above observation,
we propose to assign dynamic soft classification labels for
those ambiguous anchors. As shown in Fig. 1, unlike o2o
which sets an ambiguous anchor (anchor B or C) as a fully
negative sample, we label each ambiguous anchor as par-
tially positive and partially negative. The degrees of positive
and negative labels are adaptively adjusted during training
to keep a good balance between ‘representation learning’
and ‘duplicated prediction removal’. In particular, we be-
gin with a large positive degree and a small negative degree
in the early training stage so that the network can learn the
feature representation ability more efficiently, while in the
later training stage, we gradually increase the negative de-
grees of ambiguous anchors to supervise the network learn-
ing to remove duplicated predictions. We name our method
as a one-to-few (o2f) label assignment since one object can
have a few soft anchors. We instantiate the o2f LA into
dense detector FCOS, and our experiments on COCO [29]
and CrowdHuman [40] demonstrate that it achieves on-par or
even better performance than the detectors with NMS.
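The loss in Fig. 1 above, l = −t × log p − (1 − t) × log(1 − p), can be written directly as a soft binary cross-entropy over per-anchor scores. The sketch below is an illustration: the linear schedule for the ambiguous anchors' positive degree t and its endpoint values are placeholders, not the paper's actual choices.

```python
import torch

def o2f_classification_loss(p, t):
    """Soft binary cross-entropy from Fig. 1:
    l = -t * log(p) - (1 - t) * log(1 - p), summed over anchors."""
    eps = 1e-6
    p = p.clamp(eps, 1.0 - eps)
    return -(t * p.log() + (1.0 - t) * (1.0 - p).log()).sum()

def ambiguous_degree(epoch, max_epochs, t_start=0.8, t_end=0.2):
    """Positive degree for ambiguous anchors: large early (representation
    learning), small late (duplicated prediction removal). The linear schedule
    and the endpoint values here are assumptions for illustration."""
    alpha = epoch / max(max_epochs - 1, 1)
    return t_start + alpha * (t_end - t_start)

# example: anchors A (fully positive), B/C (ambiguous), D (fully negative)
p = torch.tensor([0.9, 0.6, 0.5, 0.1])          # predicted scores
t_amb = ambiguous_degree(epoch=0, max_epochs=12)
t = torch.tensor([1.0, t_amb, t_amb, 0.0])      # per-anchor positive degrees
loss = o2f_classification_loss(p, t)
```

Fully positive anchors keep t = 1 and fully negative ones keep t = 0 throughout training, so only the ambiguous anchors change their contribution over time.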
|
Luo_Leverage_Interactive_Affinity_for_Affordance_Learning_CVPR_2023 | Abstract
Perceiving potential “action possibilities” (i.e., affor-
dance) regions of images and learning interactive func-
tionalities of objects from human demonstration is a chal-
lenging task due to the diversity of human-object inter-
actions. Prevailing affordance learning algorithms often
adopt the label assignment paradigm and presume that
there is a unique relationship between functional region and
affordance label, yielding poor performance when adapt-
ing to unseen environments with large appearance varia-
tions. In this paper, we propose to leverage interactive affin-
ity for affordance learning, i.e.extracting interactive affinity
from human-object interaction and transferring it to non-
interactive objects. Interactive affinity, which represents the
contacts between different parts of the human body and lo-
cal regions of the target object, can provide inherent cues of
interconnectivity between humans and objects, thereby re-
ducing the ambiguity of the perceived action possibilities.
Specifically, we propose a pose-aided interactive affinity
learning framework that exploits human pose to guide the
network to learn the interactive affinity from human-object
interactions. Particularly, a keypoint heuristic perception
(KHP) scheme is devised to exploit the keypoint association
of human pose to alleviate the uncertainties due to interac-
tion diversities and contact occlusions. Besides, a contact-
driven affordance learning (CAL) dataset is constructed by
collecting and labeling over 5,000 images. Experimental
results demonstrate that our method outperforms the rep-
resentative models regarding objective metrics and visual
quality. Code and dataset: github.com/lhc1224/PIAL-Net.
| 1. Introduction
Figure 1. (a) Interaction affinity refers to the contact between different parts of the human body and the local regions of a target object. (b) The interactive affinity provides rich cues to guide the model to acquire invariant features of the object’s local regions interacting with the body part, thus counteracting the multiple possibilities caused by diverse interactions.

The objective of affordance learning is to locate the “action possibilities” regions of an object [15, 18]. For an intelligent agent, it is vital to perceive not only the object se-
mantics but also how to interact with various objects’ local
regions. Perceiving and reasoning about the object’s inter-
actable regions is a critical capability for embodied intelli-
gent systems to interact with the environment actively, dis-
tinct from passive perception systems [3, 38, 39, 44]. More-
over, affordance learning has a wide range of applications
in fields such as action recognition [13, 24, 43], scene un-
derstanding [9, 69], human-robot interaction [51, 63], au-
tonomous driving [7] and VR/AR [50, 53].
Affordance is a dynamic property closely related to hu-
mans and the environment [18]. Previous works [11,37,40,
46] focus on establishing mapping relationships between
appearances and labels for affordance learning. However,
they neglect the multiple possibilities of affordance brought
about by changes in the environment and actors, leading
to an incorrect perception. Recent studies [39, 48] uti-
lize reinforcement learning to allow intelligent agents to
perceive the environment through numerous interactions in
simulated/actual scenarios. Such approaches are mainly
limited by their high cost and struggle to generalize to unseen scenarios [58].

Figure 2. Motivation. (a) This paper explores the associations of interactable regions between diverse images by considering the context of contact regions with different body parts. (b) This paper considers leveraging the connection of human pose keypoints to alleviate the uncertainties due to interaction diversities and contact occlusions.

To this end, researchers consider
learning from human demonstration in an action-free man-
ner [14, 29, 30, 38]. Nonetheless, they only roughly seg-
ment the whole object/interaction regions in a general way,
which is still challenging to understand how the object is
used. The multiple possibilities due to different local re-
gions interacting with humans in various ways are not fully
resolved. In this paper, we propose to leverage interac-
tive affinity for affordance learning, i.e.extracting interac-
tive affinity from human-object interaction and transferring
it to non-interactive objects. The interactive affinity (Fig.
1 (a)) denotes the contacts between different human body
parts and objects’ local regions, which can provide inher-
ent cues of interconnectivity between humans and objects,
thereby reducing the ambiguity of the perceived action pos-
sibilities (Fig. 1 (b)).
However, it faces the challenges of interaction diversi-
ties and contact occlusions, leading to difficulties in acquir-
ing a good interactive affinity representation. The human
pose is independent of background, and the same interac-
tion corresponds to approximately similar poses. Thus, it
makes sense to use the association between pose keypoints
to overcome the difficulty of obtaining interactive affinity
representations. Moreover, it is challenging to transfer the
interactive affinity to non-interactive object images due to
variations in views, scales, and appearances. The context
between the different body part contact regions (Fig. 2 (a))
provides the model with the possibility to explore the as-
sociations between the interactable regions of the various
images to counteract transfer difficulties.
In this paper, we present a pose-aided interactive affin-
ity learning framework. First, an Interactive Feature
Enhancement ( IFE) module is introduced to explore the
connections between different interactable regions of the
images. Then, a Keypoint Heuristic Perception ( KHP )
scheme is devised to mine the interactive affinity repre-
sentation from interaction and transfer it to non-interactive objects. Specifically, the IFE module leverages the trans-
former to fully extract global contextual cues by exploiting
the common relationships between their local interactable
regions (Fig. 2 (a)). Then, they are used to establish asso-
ciations between the object interactable regions in different
images. Subsequently, the KHP scheme exploits the corre-
lation between the human body keypoints and the contact
region to guide the network to mine the object’s local in-
variant features interacting with the body parts (Fig. 2 (b)).
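As a very rough sketch of the transformer-based context sharing that the IFE description suggests (the module name, shapes, and hyper-parameters below are assumptions for illustration and are not taken from the released PIAL-Net code):

```python
import torch
import torch.nn as nn

class InteractiveFeatureEnhancement(nn.Module):
    """Toy stand-in for the IFE idea: treat pooled local-region features as
    tokens and let self-attention exchange global context across them."""
    def __init__(self, dim=256, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, region_feats):
        # region_feats: (batch, num_regions, dim) pooled local features
        return self.encoder(region_feats)

ife = InteractiveFeatureEnhancement()
tokens = torch.randn(2, 16, 256)   # 2 images, 16 candidate regions each
enhanced = ife(tokens)             # same shape, context-enriched
```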
Despite the numerous related datasets [9, 10, 19, 30,
36, 57, 67] that emerged during the development of affor-
dance learning, there is still a lack of relevant datasets suited
for leveraging interactive affinity. To carry out a thorough
study, this paper constructs a Contact-driven Affordance
Learning (CAL) dataset, consisting of 5,258 images from
23 affordance and 47 object categories. We conduct con-
trastive studies on the CAL dataset against six representa-
tive models in several related fields. Experimental results
validate the effectiveness of our method in solving the mul-
tiple possibilities of affordance.
Contributions: 1) We propose leveraging interactive
affinity for affordance learning and establishing a CAL
benchmark to facilitate the study of obtaining interactive
affinity to counteract the multiple possibilities of affor-
dance. 2) We propose a pose-aided interactive affinity learn-
ing framework that exploits pose data to guide the network
to mine the interactive affinity of body parts and object re-
gions from human-object interaction. 3) Experiments on
the CAL dataset demonstrate that our model outperforms
state-of-the-art methods and can serve as a strong baseline
for future affordance learning research.
|
Nauta_PIP-Net_Patch-Based_Intuitive_Prototypes_for_Interpretable_Image_Classification_CVPR_2023 | Abstract
Interpretable methods based on prototypical patches rec-
ognize various components in an image in order to explain
their reasoning to humans. However, existing prototype-
based methods can learn prototypes that are not in line with
human visual perception, i.e., the same prototype can refer
to different concepts in the real world, making interpretation
not intuitive. Driven by the principle of explainability-by-
design, we introduce PIP-Net (Patch-based Intuitive Proto-
types Network): an interpretable image classification model
that learns prototypical parts in a self-supervised fashion
which correlate better with human vision. PIP-Net can
be interpreted as a sparse scoring sheet where the pres-
ence of a prototypical part in an image adds evidence for a
class. The model can also abstain from a decision for out-of-
distribution data by saying “I haven’t seen this before”. We
only use image-level labels and do not rely on any part an-
notations. PIP-Net is globally interpretable since the set of
learned prototypes shows the entire reasoning of the model.
A smaller local explanation locates the relevant prototypes
in one image. We show that our prototypes correlate with
ground-truth object parts, indicating that PIP-Net closes
the “semantic gap” between latent space and pixel space.
Hence, our PIP-Net with interpretable prototypes enables
users to interpret the decision making process in an intuitive,
faithful and semantically meaningful way. Code is available
athttps://github.com/M-Nauta/PIPNet .
| 1. Introduction
Deep neural networks are dominant in computer vision,
but there is a high demand for understanding the reasoning of
such complex models [23,30]. Consequently, interpretability
and explainability have grown in importance. In contrast to
the common post-hoc explainability that reverse-engineers a black box, we argue that we should take interpretability
as a design starting point for in-model explainability. The
recognition-by-components theory [1] describes how hu-
mans recognize objects by segmenting them into multiple
components. We mimic this intuitive line of reasoning in
an intrinsically interpretable image classifier. Specifically,
our PIP-Net (Patch-based Intuitive Prototypes Network) au-
tomatically identifies semantically meaningful components,
while only having access to image-level class labels and
not relying on additional part annotations. The components
are “prototypical parts” (prototypes) visualized as image
patches, since exemplary natural images are more informa-
tive to humans than generated synthetic images [2]. PIP-Net
is globally interpretable and designed to be highly intuitive
as it uses simple scoring-sheet reasoning: the more relevant
prototypical parts for a specific class are present in an image,
the more evidence for that class is found, and the higher its
score. When no relevant prototypes are present in the image,
with e.g. out-of-distribution data, PIP-Net will abstain from a
decision. PIP-Net is therefore able to say “I haven’t seen this
before” (see Fig. 2). Additionally, following the principle
of isolation of functional properties for aligning human and
machine vision [5], the reasoning of PIP-Net is separated
into multiple steps. This simplifies human identification of
reasons for (mis)classification.
Recent interpretable part-prototype models are ProtoP-
Net [3], ProtoTree [24], ProtoPShare [29] and ProtoPool [28].
These part-prototype models are only designed for fine-
grained image recognition tasks (birds and car types) and
lack “semantic correspondence” [17] between learned proto-
types and human concepts. This “semantic gap” in prototype-
based methods between similarity in latent space and input
space was also found by others [9, 14]. We hypothesize that
the main cause of the semantic gap is the fact that exist-
ing part-prototype models only regularize interpretability on
class-level, since their underlying assumption is that (parts of) images from the same class have the same prototypes.

Figure 1. Toy dataset with two classes (left). Existing models can learn representations of prototypes that do not align with human visually perceived similarity (center). Our objective is to learn prototypes that represent concepts that also look similar to humans (right). (Panels: classes “Sun OR Dog” vs. “NOT (sun OR dog)”; center shows image patches similar to a single prototype in existing prototype-based models; right shows patches similar to prototype 1, “dog”, and prototype 2, “sun”, in our model.)
This assumption may however not hold, leading to similarity
in latent space which does not correspond to visually per-
ceived similarity. Consider the example in Fig. 1, where we
have re-labeled images from a clipart dataset [43] to create a
binary classification task: the two kids are happy when the
sun or dog is present, and sad when there is neither a sun
nor a dog. Hence, the classes are ‘ sun OR dog ’ and ‘ NOT
(sun OR dog) ’. Intuitively, an easy-to-interpret model should
learn two prototypes: one for the sun and one for the dog.
However, existing interpretable part-prototype models, such
as ProtoPNet [3] and ProtoTree [24], optimize images of the
same class to have the same prototypes. They could, there-
fore, learn a single prototype that represents both the sun
and the dog, especially when the model is optimized to have
few prototypes (see Fig. 1, center). The model’s perception
of patch similarity may thus not be in line with human visual
perception, leading to the perceived “semantic gap”.
To address the gap between latent and pixel space, we
present PIP-Net: an interpretable model that is designed to
be intuitive and optimized to correlate with human vision.
A sparse linear layer connects learned interpretable proto-
typical parts to classes. A user only needs to inspect the
prototypes and their relation to the classes in order to inter-
pret the model. We restrict the weights of the linear layer to
be non-negative, such that the presence of a class-relevant
prototype increases the evidence for a class. The linear layer
can be interpreted as a scoring sheet: the score for a class is
the sum of all present prototypes multiplied by their weights.
Alocal explanation (Fig. 2 and Fig. 3) explains a specific
prediction and shows which prototypes were found at which
locations in the image. The global explanation provides an
overall view of the model’s decision layer, consisting of the
sparse weights between classes and their relevant prototypes.
Because of this interpretable andpredictive linear layer, we
ensure a direct relation between the prototypes and the clas-
sification, and thereby prevent unfaithful explanations which
can arise with local or post-hoc XAI methods [16].
Our Contributions:
1. We present the Patch-based Intuitive Prototypes Network (PIP-Net): an intrinsically interpretable image classifier, driven by three explainability requirements: the model should be intuitive, compact and able to handle out-of-distribution data.
2. PIP-Net has a surprisingly simple architecture and is trained with novel regularization for learning prototype similarity that better correlates with human visual perception, thereby closing a perceived semantic gap.
3. PIP-Net acts as a scoring sheet and therefore can detect that an image does not belong to any class or that it belongs to multiple classes.
4. Instead of specifying the number of prototypes beforehand as in ProtoPNet [3], ProtoPool [28] and TesNet [35], PIP-Net only needs an upper bound on the number of prototypes and selects as few prototypes as possible for good classification accuracy with compact explanations, reaching sparsity ratios >99%.
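The scoring-sheet reasoning and the abstention behaviour described above can be illustrated with a small, assumed example (the abstention threshold and the toy weights are placeholders, not values from the paper):

```python
import numpy as np

def scoring_sheet(prototype_presence, weights, threshold=0.1):
    """prototype_presence: (P,) in [0, 1], one score per prototypical part.
    weights: (P, C) non-negative sparse layer; a class score is the sum of
    present prototypes times their weights. Abstain when no class gets
    meaningful evidence ("I haven't seen this before")."""
    assert (weights >= 0).all(), "PIP-Net restricts weights to be non-negative"
    class_scores = prototype_presence @ weights
    if class_scores.max() < threshold:
        return None, class_scores          # out-of-distribution: abstain
    return int(class_scores.argmax()), class_scores

# toy global explanation: 4 prototypes, 2 classes, sparse non-negative weights
W = np.array([[2.0, 0.0],
              [0.0, 1.5],
              [0.0, 0.0],     # unused prototype (pruned by sparsity)
              [0.5, 0.5]])
presence = np.array([0.9, 0.0, 0.0, 0.3])   # local explanation for one image
pred, scores = scoring_sheet(presence, W)
```

Since several classes may accumulate evidence above the threshold, the same scores can also be read as a multi-label decision, which is what contribution 3 refers to.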
|
Ma_DiGeo_Discriminative_Geometry-Aware_Learning_for_Generalized_Few-Shot_Object_Detection_CVPR_2023 | Abstract
Generalized few-shot object detection aims to achieve
precise detection on both base classes with abundant an-
notations and novel classes with limited training data. Ex-
isting approaches enhance few-shot generalization with the
sacrifice of base-class performance, or maintain high pre-
cision in base-class detection with limited improvement in
novel-class adaptation. In this paper, we point out the rea-
son is insufficient Discriminative feature learning for all of
the classes. As such, we propose a new training frame-
work, DiGeo, to learn Geometry-aware features of inter-
class separation and intra-class compactness. To guide
the separation of feature clusters, we derive an offline sim-
plex equiangular tight frame (ETF) classifier whose weights
serve as class centers and are maximally and equally sep-
arated. To tighten the cluster for each class, we include
adaptive class-specific margins into the classification loss
and encourage the features close to the class centers. Ex-
perimental studies on two few-shot benchmark datasets
(VOC, COCO) and one long-tail dataset (LVIS) demon-
strate that, with a single model, our method can effectively
improve generalization on novel classes without hurting the
detection of base classes. Our code can be found here.
| 1. Introduction
Recent years have witnessed the tremendous growth of
object detection through deep neural models and large-scale
training [2, 13–15, 40, 42, 45, 60, 65]. However, the success
of detection models heavily relies on the amount and qual-
ity of annotations, which requires expensive annotation cost
and time. In addition, traditional object detection models
perform worse on the classes with a limited number of an-
notations [11, 52, 56], while human are able to learn from
few observations. In order to close the gap between human
vision system and detection models, recent studies have
investigated how to generalize well on rare classes under the few-shot object detection (FSOD) setting.

Figure 1. Performance on few-shot object detection on Pascal VOC [3] (base-class bAP50 vs. novel-class nAP50 under the 3/5/10-shot settings; base-only pre-training marked). Previous transfer-learning approaches (blue) balance the training data by aggressively down-sampling the base set and may result in overfitting. Instead, we (red) use the full train set, aiming both to maintain precise base detection and to learn discriminative features from the limited annotations for few-shot classes.

Specifically,
given many-shot ( base) classes with plenty of training data
and few-shot ( novel ) classes with extremely limited training
data ( e.g., 5 annotated instances per class), FSOD expects
the model to detect the objects in the novel classes well.
To improve the generalization ability on novel-class de-
tection, recent studies [6, 44, 52] conduct transfer learning
in a two-step manner. In detail, the model is pre-trained
on the whole set of base classes, and then fine-tuned on the
union of the set of novel classes and an aggressively down-
sampled base subset. However, the efficient few-shot adap-
tation is often achieved at the expense of sacrificing preci-
sion on base detection (Fig. 1). Being aware of this limita-
tion, Fan et al. [6] proposed to evaluate the performance of
both base and novel classes in the generalized few-shot ob-
ject detection (GFSOD) setting. In addition, they proposed
a consistency regularization to emphasize the pre-trained
base knowledge during fine-tuning and employed an ensem-
bling strategy. However, they design different classifiers for
base and novel classes, and the adaptation on novel classes
is impeded due to a complex ensembling process.
In this paper, we point out that the devil is in in-
sufficient discriminative feature learning for few-shot ob-
ject detection, including inefficient knowledge adaptation to
novel classes and unexpected knowledge forgetting of base
classes. First, as the novel instances are extremely limited
during training, it is hard to capture the representative vi-
sual information of novel classes and adapt the knowledge
learned from base classes to novel classes. As a result, the
model cannot distinguish between the novel classes, which
weakens the few-shot adaptation. Secondly, balanced train-
ing strategies such as down-sampling fail to utilize the di-
verse training samples from base set. Thus, it is hard to pre-
serve the complete knowledge of base classes, which leads
to overfitting and further decreases the detection scores.
To tackle these challenges, we proposed a new training
framework, DiGeo , to make the best of both worlds for
generalized few-shot object detection, i.e., improving gen-
eralization on novel classes without hurting the detection
of base classes. Our motivation is to learn Discriminative
Geometry-aware features via inter-class separation and
intra-class compactness. For inter-class separation, we ex-
pect the class centers [53] to be well distinct from each
other. Motivated by the symmetric geometry of simplex
equiangular tight frame (ETF) [36], we proposed to use ETF
as classifier to guide the separation of features. To be spe-
cific, we derive an offline ETF whose weights are maxi-
mally & equivalently separated ( i.e., independent from the
training data distribution) and are assigned as fixed centers
for all classes. For intra-class compactness, we expect the
features to be closed to the class centers for a clear deci-
sion boundary. In practice, we add class-specific margins
to output logits during training to push the features close to
the class centers. The margins are based on instance dis-
tribution prior and are then adaptively adjusted though self-
distillation. Meanwhile, we consider the huge imbalance
between base set and novel set, and up-sample the novel set
to facilitate the feature extraction.
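The simplex ETF used as the fixed, offline classifier has a standard closed form: K unit-norm class centers whose pairwise cosine similarity is −1/(K−1), i.e., maximally and equally separated. The sketch below derives one; the QR-based orthogonalization is an implementation choice for illustration, not necessarily the authors'.

```python
import numpy as np

def simplex_etf(num_classes, feat_dim, seed=0):
    """Fixed classifier weights: K maximally and equally separated unit
    vectors in R^d forming a simplex equiangular tight frame
    (here we simply require feat_dim >= num_classes)."""
    assert feat_dim >= num_classes
    rng = np.random.default_rng(seed)
    # partial orthogonal matrix P in R^{d x K} with P^T P = I_K
    P, _ = np.linalg.qr(rng.standard_normal((feat_dim, num_classes)))
    K = num_classes
    centering = np.eye(K) - np.ones((K, K)) / K
    return np.sqrt(K / (K - 1)) * P @ centering     # columns are class centers

W = simplex_etf(num_classes=20, feat_dim=128)
cos = W.T @ W
# diagonal is 1, every off-diagonal entry is -1/(K-1): equally separated
assert np.allclose(np.diag(cos), 1.0)
assert np.allclose(cos - np.diag(np.diag(cos)),
                   (-1.0 / 19.0) * (1 - np.eye(20)), atol=1e-6)
```

Because the construction is independent of the training data, the centers can be fixed before training, matching the "offline" property stated above.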
We validate the effectiveness of DiGeo under the GF-
SOD setting on Pascal VOC [3, 4] and MS COCO [27].
Compared to existing methods, we can both achieve pre-
cise detection on base classes and sufficiently improve the
adaptation efficiency on novel classes using a single model.
Furthermore, our DiGeo can be intuitively extended to
long-tailed object detection. Experimental results on LVIS
datasets demonstrate the generalizibility of our approach.
Our contributions are summarized as follows:
• We revisit few-shot object detection from a perspective of discriminative feature learning, and point out that existing methods fail in knowledge adaptation to novel classes and suffer from knowledge forgetting of base classes.
• We propose DiGeo to pursue a desired feature geometry, i.e., inter-class separation and intra-class compactness, which consistently improves the performance on both base and novel classes.
• We conduct extensive experiments on three benchmark datasets for few-shot object detection and long-tailed object detection to verify the generalizability of DiGeo.
|
Qiao_Fuzzy_Positive_Learning_for_Semi-Supervised_Semantic_Segmentation_CVPR_2023 | Abstract
Semi-supervised learning (SSL) essentially pursues class
boundary exploration with less dependence on human an-
notations. Although typical attempts focus on ameliorat-
ing the inevitable error-prone pseudo-labeling, we think
differently and resort to exhausting informative semantics
from multiple probably correct candidate labels. In this pa-
per, we introduce Fuzzy Positive Learning (FPL) for accu-
rate SSL semantic segmentation in a plug-and-play fash-
ion, targeting adaptively encouraging fuzzy positive pre-
dictions and suppressing highly-probable negatives. Be-
ing conceptually simple yet practically effective, FPL can
remarkably alleviate interference from wrong pseudo la-
bels and progressively achieve clear pixel-level semantic
discrimination. Concretely, our FPL approach consists
of two main components, including fuzzy positive assign-
ment (FPA) to provide an adaptive number of labels for
each pixel and fuzzy positive regularization (FPR) to re-
strict the predictions of fuzzy positive categories to be
larger than the rest under different perturbations. Theo-
retical analysis and extensive experiments on Cityscapes
and VOC 2012 with consistent performance gain justify
the superiority of our approach. Codes are provided in
https://github.com/qpc1611094/FPL .
| 1. Introduction
Semantic segmentation models enable accurate scene
understanding [1, 29,45] with the help of fine pixel-level
annotations. Yet, collecting labeled segmentation datasets
is time-consuming and labor-costing [6]. Considering unla-
beled data are annotation-free and easily accessible, semi-
supervised learning (SSL) is introduced into semantic seg-
mentation [5, 34, 43, 49, 51, 53] to encourage the model to
generalize better on unseen data with less dependence on
artificial annotations.
Figure 1. (a) Existing methods using pseudo label to utilize un-
labeled data. (b) The proposed FPL that provides multiple fuzzy
positive labels for each pixel to utilize unlabeled data. The exam-
ple of ‘Truck’ shows that our method covers ground truth (GT)
more comprehensively than vanilla positive learning.
The semi-supervised segmentation task faces a scenario
where only a subset of training images are assigned seg-
mentation labels while the others remain unlabeled. Cur-
rent state-of-the-art (SOTA) methods utilize unlabeled data
via consistency regularization, which aims to obtain invari-
ant predictions for unlabeled pixels under various perturba-
tions [5, 34, 49, 53]. Their general paradigm is to use the
pseudo label generated under weak (or none) perturbations
as the learning target of predictions under strong perturba-
tions. Though achieving promising results, errors are in-
evitable in the pseudo label used in these methods, misguid-
ing the training of their models [24,33]. An intuitive exam-
ple is that some pixels may be confused in categories with
similar semantics. As shown in Fig. 1 (a), some pixels belonging to
‘Truck’ are wrongly classified into the ‘Car’ category (e.g.,
white boxed pixel). To mitigate this problem, typical meth-
ods focus on ameliorating the learning of pseudo labels by
filtering low-confidence pseudo labels out [14,21,38,51,53]
and generating pseudo labels more accurately [8,20,26,48].
However, the semantics of ground truth buried in other un-
selected labels are ignored in existing methods.
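For reference, the weak-to-strong pseudo-labeling paradigm discussed above can be sketched as follows (this is a generic illustration of the paradigm with an assumed confidence threshold, not the implementation of any particular cited method):

```python
import torch
import torch.nn.functional as F

def pseudo_label_consistency(model, weak_view, strong_view, conf_thresh=0.95):
    """Generic SSL segmentation baseline: the argmax prediction under the weak
    (or no) perturbation supervises the strongly perturbed view; low-confidence
    pixels are filtered out. The threshold value is an illustrative choice."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_view), dim=1)      # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)                      # (B, H, W)
    logits_strong = model(strong_view)
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * (conf >= conf_thresh).float()).mean()
```

Whenever the single pseudo label is wrong, this loss actively pushes the prediction away from the ground truth, which is the failure mode FPL targets.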
In this paper, we propose Fuzzy Positive Learning
(FPL), a new SSL segmentation method that exhausts in-
formative semantics from multiple probably correct candi-
date labels. We name these labels “fuzzy positive” labels
since each of them has the probability to be the ground
truth. As shown in Fig. 1 (b), our fuzzy positive labels
cover the ground truth more comprehensively, facilitating
our FPL to exploit the semantics of ground truth better. Ex-
tending learning from one pseudo label to learning from
multiple fuzzy positive labels is not a simple implemen-
tation, which contains two pending issues. One is how to
provide an adaptive number of labels for each pixel. And
the other one is how to exploit the possible GT semantics
from fuzzy positive labels. For these two issues, a fuzzy
positive assignment (FPA) algorithm is first proposed to se-
lect which labels should be appended to the fuzzy positive
label set of each pixel. Afterward, a fuzzy positive regular-
ization (FPR) is developed to regularize the predictions of
fuzzy positive categories to be larger than the predictions of
the rest negative categories under different perturbations.
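The exact FPA rule and FPR form are given later in the paper; purely as an illustration of the two components, the sketch below uses an assumed cumulative-probability rule to pick the fuzzy positive set and an assumed margin-ranking penalty to push fuzzy-positive predictions above the negatives.

```python
import torch

def fuzzy_positive_assignment(probs, mass=0.9):
    """probs: (C,) softmax prediction for one unlabeled pixel. Return a boolean
    mask over classes: the smallest set of top-ranked classes whose cumulative
    probability exceeds `mass` (an adaptive number of labels per pixel)."""
    sorted_p, order = probs.sort(descending=True)
    k = int((sorted_p.cumsum(0) < mass).sum().item()) + 1
    fuzzy = torch.zeros_like(probs, dtype=torch.bool)
    fuzzy[order[:k]] = True
    return fuzzy

def fuzzy_positive_regularization(logits, fuzzy, margin=0.0):
    """Encourage every fuzzy-positive logit (under a perturbed view) to be
    larger than every negative logit by at least `margin`."""
    pos = logits[fuzzy].unsqueeze(1)          # (P, 1)
    neg = logits[~fuzzy].unsqueeze(0)         # (1, N)
    return torch.relu(neg - pos + margin).mean()

probs = torch.softmax(torch.tensor([2.0, 1.8, 0.1, -1.0]), dim=0)
fuzzy = fuzzy_positive_assignment(probs)          # e.g. classes 0 and 1
perturbed_logits = torch.tensor([1.5, 2.1, 0.3, -0.5])
loss = fuzzy_positive_regularization(perturbed_logits, fuzzy)
```

The point of the two pieces is that the possibly correct labels all stay on the positive side of the constraint, so a wrong top-1 pseudo label no longer suppresses the true class.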
Our FPL achieves consistent performance gain on
Cityscapes and Pascal VOC 2012 datasets using CPS [5]
and AEL [14] as baselines. Moreover, we theoretically and
empirically analyze that the superiority of FPL lies in re-
vising the gradient of learning ground truth when pseudo-
labels are wrongly-assigned. Our main contributions are:
• FPL provides a new perspective for SSL segmentation,
that is, learning informative semantics from multiple
fuzzy positive labels instead of only one pseudo label.
• A fuzzy positive assignment is proposed to provide an
adaptive number of labels for each pixel. Besides, a
fuzzy positive regularization is developed to learn the
semantics of ground truth from fuzzy positive labels.
• FPL is easy to implement and could bring stable per-
formance gains on existing SSL segmentation methods
in a plug-and-play fashion.
|
Li_Spatially_Adaptive_Self-Supervised_Learning_for_Real-World_Image_Denoising_CVPR_2023 | Abstract
Significant progress has been made in self-supervised image denoising (SSID) in the recent few years. However, most methods focus on dealing with spatially independent noise, and they have little practicality on real-world sRGB images with spatially correlated noise. Although pixel-shuffle downsampling has been suggested for breaking the noise correlation, it breaks the original information of images, which limits the denoising performance. In this paper, we propose a novel perspective to solve this problem, i.e., seeking for spatially adaptive supervision for real-world sRGB image denoising. Specifically, we take into account the respective characteristics of flat and textured regions in noisy images, and construct supervisions for them separately. For flat areas, the supervision can be safely derived from non-adjacent pixels, which are far from the current pixel for excluding the influence of the noise-correlated ones. And we extend the blind-spot network to a blind-neighborhood network (BNN) for providing supervision on flat areas. For textured regions, the supervision has to be closely related to the content of adjacent pixels. And we present a locally aware network (LAN) to meet the requirement, while LAN itself is selectively supervised with the output of BNN. Combining these two supervisions, a denoising network (e.g., U-Net) can be well-trained. Extensive experiments show that our method performs favorably against state-of-the-art SSID methods on real-world sRGB photographs. The code is available at https://github.com/nagejacob/SpatiallyAdaptiveSSID.
| 1. Introduction
Image denoising aims to restore clean images from noisy
observations [ 5,11,14], and it has achieved noticeable im-
provement with the advances in deep networks [ 2,10,21,
29–31,33,38,40,41,46–48,50,51]. However, the mod-
els trained with synthetic noise usually perform poorly in
real-world scenarios in which noise is complex and changeable.

Figure 1. Visual comparison between self-supervised denoising methods on the DND dataset [37]: (a) Noisy Input, (b) CVF-SID [35], (c) AP-BSN+R3 [27], (d) Ours. PSNR (dB) and SSIM with respect to the ground-truth are marked on the result for quantitative comparison. Our method performs better in removing spatially correlated noise from real-world sRGB photographs.

A feasible solution is to collect real-world clean-
noisy image pairs [ 1,37] and take them for model train-
ing [ 2,15,21,46,47]. But building such datasets generally
requires strictly controlled environment as well as compli-
cated photographing and post-processing, which is time-consuming and labor-intensive. Moreover, the noise statis-
tics vary under different cameras and illuminating condi-
tions [ 43,54], and it is impractical to capture pairs for every
device and scenario.
To circumvent the limitations of noisy-clean pairs
collection, self-supervised image denoising (SSID) ap-
proaches [ 3,17,19,20,24,26–28,34–36,42,45] have been
proposed, which can be trained merely on noisy im-ages. However, the noise model assumptions of a largeamount of SSID methods do not match the characteristicsof real-world noise in sRGB space. For instance, HQ-
SSL [ 26] improves the denoising performance with poste-
rior inference, but requires explicit noise probability den-sity. Noise2Score [ 20] and its extension [ 19] propose a
closed-form image denoising schema with score matchingfollowed by noise model and noise level estimation, butthe noise is bounded to Tweedie distribution. Althoughsome methods [ 17,42] are designed for distribution agnostic
noise, they can only deal with spatially independent noise.
Recently, a few attempts have been explored to remove
spatially correlated noise in a self-supervised manner. CVF-
SID [ 35] disentangles the image and noise components
from noisy images, but the difficulty of optimization lim-
its its performance. Some methods [ 44,57] break the spa-
tial noise correlation with pixel-shuffle downsampling (PD),
then utilize spatially independent denoisers ( e.g., blind-spot
network [ 6,26,44]) to remove the uncorrelated noise. How-
ever, PD breaks the original information of the images andleads to aliasing artifacts, which largely degrade the imagequality. AP-BSN [ 27] applies asymmetric PD factors and
post-refinement processing to seek for a better trade-off be-tween noise removal and aliasing artifacts, but it is time-
consuming during inference.
In this paper, we present a novel perspective for SSID
by considering the respective characteristics of flat and tex-tured regions in noisy images, resulting in a spatially adap-
tive SSID method for real-world sRGB images. Insteadof utilizing pixel-shuffle downsampling and blind-spot net-work to learn denoising results directly, we seek for spa-
tially adaptive supervision for a denoising network ( e.g., U-
Net [ 39]). Concretely, for flat areas, the supervision can be
safely derived from non-adjacent pixels, which are much far
from the current pixel for excluding the influence of noisecorrelation. We achieve it by extending the blind-spot net-
work (BSN) [ 26] to a blind-neighborhood network (BNN).
BNN modifies the architecture of BSN to expand the sizeof blind region, and takes the same self-supervised train-ing schema as BSN. Note that it is difficult to determine
whether an area is flat or not from the noisy images, so we
directly apply BNN to the whole image and it has little ef-
fect on the handling of flat areas. Moreover, such an op-
eration can give us a chance to detect textured areas fromthe output of BNN, whose variance is usually higher. For
textured areas, neighboring pixels are essential for predict-ing the details and they can not be ignored. To this end,
we present a locally aware network (LAN), which focuses
on recovering the texture details solely from adjacent pix-
els. LAN is supervised by flat areas of BNN output. Whentraining is done, LAN will be applied to textured areas togenerate supervision information for these areas.
Combining the learned supervisions for flat and textured
areas, a denoising network can be readily trained. Dur-ing inference, BNN and LAN can be detached, only theultimate denoising network is used to restore clean im-ages. Extensive experiments are conducted on SIDD [ 1] and
DND [ 37] datasets. The results demonstrate our method is
not only effective but also efficient. In comparison to state-of-the-art self-supervised denoising methods, our method
behaves favorably in terms of both quantitative metrics and
perceptual quality. The contributions of this paper can be
summarized as follows:
• We propose a novel perspective for self-supervised
real-world image denoising, i.e., learning spatially
adaptive supervision for a denoising network accord-
ing to the image characteristics.
• For flat areas, we extend the blind-spot network to a
blind-neighborhood network (BNN) for providing su-
pervision information. For texture areas, we present alocally aware network (LAN) to learn that from neigh-
boring pixels.
• Extensive experiments show our method has superior
performance and inference efficiency against state-of-the-art SSID methods on real-world sRGB noise re-
moval.
|
Muller_DiffRF_Rendering-Guided_3D_Radiance_Field_Diffusion_CVPR_2023 | Abstract
We introduce DiffRF , a novel approach for 3D radiance
field synthesis based on denoising diffusion probabilistic
models. While existing diffusion-based methods operate on
images, latent codes, or point cloud data, we are the first
to directly generate volumetric radiance fields. To this end,
we propose a 3D denoising model which directly operates
on an explicit voxel grid representation. However, as radi-
ance fields generated from a set of posed images can be am-
biguous and contain artifacts, obtaining ground truth radi-
ance field samples is non-trivial. We address this challenge
by pairing the denoising formulation with a rendering loss,
enabling our model to learn a deviated prior that favours
good image quality instead of trying to replicate fitting er-rors like floating artifacts. In contrast to 2D-diffusion mod-
els, our model learns multi-view consistent priors, enabling
free-view synthesis and accurate shape generation. Com-
pared to 3D GANs, our diffusion-based approach naturally
enables conditional generation such as masked completion
or single-view 3D synthesis at inference time. Project page: https://sirwyver.github.io/DiffRF/.
| 1. Introduction
In recent years, Neural Radiance Fields (NeRFs) [37]
have emerged as a powerful representation for fitting indi-
vidual 3D scenes from posed 2D input images. The ability
to photo-realistically synthesize novel views from arbitrary
viewpoints while respecting the underlying 3D scene geom-
etry has the potential to disrupt and transform applications
like AR/VR, gaming, mapping, navigation, etc. A num-
ber of recent works have introduced extensions for making
NeRFs more sophisticated, by e.g., showing how to incor-
porate scene semantics [18, 30], training models from het-
erogeneous data sources [35], or scaling them up to repre-
sent large-scale scenes [62, 64]. These advances are testa-
ment to the versatility of ML-based scene representations;
however, they still fit to specific, individual scenes rather
than generalizing beyond their input training data.
In contrast, neural field representations that general-
ize to multiple object categories or learn priors for scenes
across datasets appear much more limited to date, despite
enabling applications like single-image 3D object gener-
ation [7, 39, 49, 66, 72] and unconstrained scene explo-
ration [15]. These methods explore ways to disentangle ob-
ject priors into shape and appearance-based components, or
to decompose radiance fields into several small and locally-
conditioned radiance fields to improve scene generation
quality; however, their results still leave significant gaps
w.r.t. photorealism and geometric accuracy.
Directions involving generative adversarial networks
(GANs) that have been extended from the 2D domain to 3D-
aware neural fields generation are demonstrating impressive
synthesis results [8]. Like regular 2D GANs, the training
objective is based on discriminating 2D images, which are
obtained by rendering synthesized 3D radiance fields.
At the same time, diffusion-based models [52] have re-
cently taken the computer vision research community by
storm, performing on-par or even surpassing GANs on
multiple 2D benchmarks, and are producing photo-realistic
images that are almost indistinguishable from real pho-
tographs. For multi-modal or conditional settings such
as text-to-image synthesis, we currently observe unprece-
dented output quality and diversity from diffusion-based ap-
proaches. While several works address purely geometric
representations [33, 75], lifting the denoising-diffusion for-
mulation directly to 3D volumetric radiance fields remains
challenging. The main reason lies in the nature of diffu-
sion models, which require a one-to-one mapping between
the noise vector and the corresponding ground truth data
samples. In the context of radiance fields, such volumetric
ground truth data is practically infeasible to obtain, since
even running a costly per-sample NeRF optimization results
in incomplete and imperfect radiance field reconstructions.
In this work, we present the first diffusion-based gen-
erative model that directly synthesizes 3D radiance fields,
thus unlocking high-quality 3D asset generation for both
shape and appearance. Our goal is to learn such a gener-
ative model trained across objects, where each sample is
given by a set of posed RGB images.
To this end, we propose a 3D denoising model directly
operating on an explicit voxel grid representation (Fig. 1,left) producing high-frequency noise estimates. To address
the ambiguous and imperfect radiance field representation
for each training sample, we propose to bias the noise pre-
diction formulation from Denoising Diffusion Probabilistic
Models (DDPMs) towards synthesizing higher image qual-
ity by an additional volumetric rendering loss on the esti-
mates. This enables our method to learn radiance field pri-
ors less prone to fitting artifacts or noise accumulation dur-
ing the sampling process. We show that our formulation
leads to diverse and geometrically-accurate radiance field
synthesis producing efficient, realistic, and view-consistent
renderings. Our learned diffusion prior can be applied in an
unconditional setting where 3D object synthesis is obtained
in a multi-view consistent way, generating highly-accurate
3D shapes and allowing for free-view synthesis. We further
introduce the new task of conditional masked completion –
analog to shape completion – for radiance field completion
at inference time. In this setting, we allow for realistic 3D
completion of partially-masked objects without the need for
task-specific model adaptation or training (see Fig. 1, right).
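A schematic training step showing how the rendering term biases the standard DDPM noise objective (the 3D denoiser, the differentiable renderer, the camera sampling, and the loss weight are placeholders; only the DDPM algebra is standard):

```python
import torch

def diffrf_training_step(x0, denoiser, render, gt_images, alphas_bar, lam=0.1):
    """x0: (B, C, D, H, W) radiance-field voxel grids fitted from posed images.
    denoiser: 3D U-Net predicting noise. render: differentiable volume renderer
    producing images from a grid (placeholder). alphas_bar: (T,) cumulative
    DDPM schedule, assumed to live on the same device as x0."""
    B = x0.shape[0]
    t = torch.randint(0, len(alphas_bar), (B,), device=x0.device)
    a_bar = alphas_bar[t].view(B, 1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps       # forward diffusion
    eps_hat = denoiser(x_t, t)
    # denoised estimate of the clean grid, recovered from the noise estimate
    x0_hat = (x_t - (1 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()
    loss_ddpm = torch.mean((eps - eps_hat) ** 2)
    # rendering loss on the estimate biases the prior toward grids that render
    # well instead of replicating fitting artifacts in the noisy ground truth
    loss_render = torch.mean((render(x0_hat) - gt_images) ** 2)
    return loss_ddpm + lam * loss_render
```

The key design point is that supervision in image space tolerates imperfect fitted grids, which is why a one-to-one mapping to clean volumetric ground truth is not required.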
We summarize our contributions as follows:
• To the best of our knowledge, we introduce the first
diffusion model to operate directly on 3D radiance
fields, enabling high-quality, truthful 3D geometry and
image synthesis.
• We introduce the novel application of 3D radiance field
masked completion, which can be interpreted as a nat-
ural extension of image inpainting to the volumetric
domain.
• We show compelling results in unconditional and con-
ditional settings, e.g., by improving over GAN-based
approaches on image quality (from 16.54 to 15.95 in
FID) and geometry synthesis (improving MMD from
5.62 to 4.42), on the challenging PhotoShape Chairs
dataset [44].
|
Peng_Representing_Volumetric_Videos_As_Dynamic_MLP_Maps_CVPR_2023 | Abstract
This paper introduces a novel representation of volumet-
ric videos for real-time view synthesis of dynamic scenes.
Recent advances in neural scene representations demon-
strate their remarkable capability to model and render com-
plex static scenes, but extending them to represent dynamic
scenes is not straightforward due to their slow rendering
speed or high storage cost. To solve this problem, our key
idea is to represent the radiance field of each frame as a
set of shallow MLP networks whose parameters are stored
in 2D grids, called MLP maps, and dynamically predicted
by a 2D CNN decoder shared by all frames. Represent-
ing 3D scenes with shallow MLPs significantly improves
the rendering speed, while dynamically predicting MLP pa-
rameters with a shared 2D CNN instead of explicitly stor-
ing them leads to low storage cost. Experiments show that
the proposed approach achieves state-of-the-art rendering
quality on the NHR and ZJU-MoCap datasets, while being
efficient for real-time rendering with a speed of 41.7 fps for
512×512 images on an RTX 3090 GPU. The code is avail-
able at https://zju3dv.github.io/mlp_maps/.
| 1. Introduction
V olumetric video captures a dynamic scene in 3D which
allows users to watch from arbitrary viewpoints with im-
mersive experience. It is a cornerstone for the next gener-
ation media and has many important applications such as
video conferencing, sport broadcasting, and remote learn-
ing. The same as 2D video, volumetric video should be ca-
pable of high-quality and real-time rendering as well as be-
ing compressed for efficient storage and transmission. De-
signing a proper representation for volumetric video to sat-
isfy these requirements remains an open problem.
Traditional image-based rendering methods [1,12,25,74]
build free-viewpoint video systems based on dense camera
arrays. They record dynamic scenes with many cameras
and then synthesize novel views by interpolation from in-
put nearby views. For these methods, the underlying scene
Figure 1. The basic idea of dynamic MLP maps. Instead of modeling the volumetric video with a big MLP network [26], we exploit a 2D convolutional neural network to dynamically generate 2D MLP maps at each video frame, where each pixel stores the parameter vector of a small MLP network. This enables us to represent volumetric videos with a set of small MLP networks, thus significantly improving the rendering speed.
representation is the original multi-view video. While there
have been many multi-view video coding techniques, the
storage and transmission cost is still huge which cannot
satisfy real-time video applications. Another line of work
[11, 13] utilizes RGB-D sensors to reconstruct textured
meshes as the scene representation. With mesh compression
techniques, this representation can be very compact and en-
able streamable volumetric videos, but these methods can
only capture humans and objects in constrained environ-
ments as reconstructing a high-quality renderable mesh for
general dynamic scenes is still a very challenging problem.
Recent advances in neural scene representations [26, 33,
61] provide a promising solution for this problem. They
represent 3D scenes with neural networks, which can be ef-
fectively learned from multi-view images through differen-
tiable renderers. For instance, Neural Volumes [33] rep-
resents volumetric videos with a set of RGB-density vol-
umes predicted by 3D CNNs. Since the volume prediction
easily consumes a large amount of GPU memory, it strug-
gles to model high-resolution 3D scenes. NeRF [37] in-
stead represents 3D scenes with MLP networks regressing
density and color for any 3D point, thereby enabling it to
synthesize high-resolution images. DyNeRF [26] extends
NeRF to model volumetric videos by introducing a tempo-
ral latent code as additional input of the MLP network. A
major issue of NeRF models is that their rendering is gen-
erally quite slow due to the costly network evaluation. To
increase the rendering speed, some methods [17,61,75] uti-
lize caching techniques to pre-compute a discrete radiance
volume. This strategy typically leads to high storage cost,
which is acceptable for a static scene, but not scalable to
render a volumetric video of dynamic scenes.
In this paper, we propose a novel representation of volu-
metric video, named dynamic MLP maps, for efficient view
synthesis of dynamic scenes. The basic idea is illustrated in
Figure 1. Instead of modeling a volumetric video with a sin-
gle MLP network, we represent each video frame as a set of
small MLP networks whose parameters are predicted by a
per-scene trained 2D CNN decoder with a per-frame latent
code. Specifically, given a multi-view video, we choose a
subset of views and feed them into a CNN encoder to obtain
a latent code for each frame. Then, a 2D CNN decoder re-
gresses from the latent code to 2D maps, where each pixel
in the maps stores a vector of MLP parameters. We call
these 2D maps MLP maps. To model a 3D scene with
the MLP maps, we project a query point in 3D space onto
the MLP maps and use the corresponding MLP networks to
infer its density and color values.
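To make the query step concrete, a rough sketch is given below (PyTorch). The orthographic projection axis, the two-layer MLP size, and the parameter packing order are illustrative assumptions; the actual parameterization is defined by the decoder described in the paper.

```python
import torch

def query_mlp_map(mlp_map, points, drop_axis=2, in_dim=3, hidden=32, out_dim=4):
    """Evaluate a tiny per-pixel MLP read from an MLP map (illustrative sketch).

    mlp_map: (H, W, P) parameter vectors; points: (N, 3) queries in [-1, 1]^3.
    """
    # project each 3D point onto the 2D map (here: orthographic, drop one axis)
    uv = points[:, [a for a in range(3) if a != drop_axis]]                  # (N, 2)
    H, W, _ = mlp_map.shape
    scale = torch.tensor([H - 1, W - 1], dtype=points.dtype, device=points.device)
    ij = ((uv + 1) / 2 * scale).long()
    params = mlp_map[ij[:, 0].clamp(0, H - 1), ij[:, 1].clamp(0, W - 1)]     # (N, P)

    # unpack a 2-layer MLP: P = in_dim*hidden + hidden + hidden*out_dim + out_dim
    s1, s2 = in_dim * hidden, hidden
    w1 = params[:, :s1].view(-1, hidden, in_dim)
    b1 = params[:, s1:s1 + s2]
    w2 = params[:, s1 + s2:s1 + s2 + hidden * out_dim].view(-1, out_dim, hidden)
    b2 = params[:, s1 + s2 + hidden * out_dim:]
    h = torch.relu(torch.einsum('nhi,ni->nh', w1, points) + b1)
    return torch.einsum('noh,nh->no', w2, h) + b2                            # (N, 4): density + RGB
```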
Representing 3D scenes with many small MLP networks
decreases the cost of network evaluation and increases the
rendering speed. This strategy has been proposed in pre-
vious works [48, 49], but their networks need to be stored
for each static scene, which easily consumes a lot of stor-
age to represent a dynamic scene. In contrast to them, we
use a shared 2D CNN encoder and decoder to predict MLP
parameters on the fly for each video frame, thereby effec-
tively compressing the storage along the temporal domain.
Another advantage of the proposed representation is that
MLP maps represent 3D scenes with 2D maps, enabling us
to adopt 2D CNNs as the decoder instead of 3D CNNs in
Neural Volumes [33]. This strategy leverages the fast infer-
ence speed of 2D CNNs and further decreases the memory
requirement.
We evaluate our approach on the NHR and ZJU-MoCap
datasets, which present dynamic scenes with complex mo-
tions. Across all datasets, our approach exhibits state-of-
the-art performance in terms of rendering quality and speed,
while taking up low storage. Experiments demonstrate that
our approach is over 100 times faster than DyNeRF [26].
In summary, this work has the following contributions:
• A novel representation of volumetric video named dy-
namic MLP maps, which achieves compact represen-
tation and fast inference.
• A new pipeline for real-time rendering of dynamic
scenes based on dynamic MLP maps.
• State-of-the-art performance in terms of the render-
ing quality, speed, and storage on the NHR and ZJU-
MoCap datasets. |
Miao_FedSeg_Class-Heterogeneous_Federated_Learning_for_Semantic_Segmentation_CVPR_2023 | Abstract
Federated Learning (FL) is a distributed learning
paradigm that collaboratively learns a global model by
multiple clients with data privacy-preserving. Although
many FL algorithms have been proposed for classification
tasks, few works focus on more challenging semantic seg-
mentation tasks, especially in the class-heterogeneous FL
situation. Compared with classification, the issues from het-
erogeneous FL for semantic segmentation are more severe:
(1) Due to the non-IID distribution, different clients may
contain inconsistent foreground-background classes, result-
ing in divergent local updates. (2) Class-heterogeneity for
complex dense prediction tasks makes the local optimum
of clients farther from the global optimum. In this work,
we propose FedSeg, a basic federated learning approach
for class-heterogeneous semantic segmentation. We first
propose a simple but strong modified cross-entropy loss to
correct the local optimization and address the foreground-
background inconsistency problem. Based on it, we intro-
duce pixel-level contrastive learning to enforce local pixel
embeddings belonging to the global semantic space. Ex-
tensive experiments on four semantic segmentation bench-
marks (Cityscapes, CamVID, PascalVOC and ADE20k)
demonstrate the effectiveness of our FedSeg. We hope this
work will attract more attention from the FL community to
the challenging semantic segmentation federated learning.
| 1. Introduction
Semantic segmentation is the task of assigning a unique
semantic label to every pixel in a given image, which is
a fundamental research topic in computer vision and has
many potential applications, such as autonomous driving,
image editing and robotics [30]. Training a semantic seg-
mentation model usually needs vast of data with pixel-level
annotations, which is extremely hard to acquire. Collabo-
rative training on multiple clients is a feasible way to solve
Figure 1. (a) The foreground-background inconsistency for class-heterogeneous semantic segmentation. (b) Local optimization divergence problem for the heterogeneous dense prediction task.
the problem. However, collaborative training has the risk
of leaking sensitive information. For example, for the au-
tonomous driving task, the training images may include pri-
vate information such as where the user arrived, where the
user lives and what the user’s house looks like. Thus, a
privacy-preserving collaborative training method is requi-
site for semantic segmentation.
Federated Learning (FL) [31] is an emerging distributed
machine learning paradigm that jointly trains a shared
global model by multiple clients without exchanging their
raw data. FedAvg [31] is a basic FL algorithm that learns
local models with raw data on clients separately while ag-
gregating weights to a global model on a server. One key
problem of FL is the statistical heterogeneity of data dis-
tribution among different clients. Many recent FL algo-
rithms [1, 21, 22, 26, 32] are proposed to tackle the prob-
lem. However, most of them evaluate their methods on
classification, while few works focus on more challeng-
ing semantic segmentation. Although some federated learn-
ing approaches [17, 29, 46, 52] for medical image segmen-
tation have been proposed, they mainly address the sim-
ple foreground-background segmentation and cannot solve
the class-heterogeneous problem for semantic segmentation
with a variety of object classes. A recent FL approach, Fed-
Drive [14], evaluates FL methods on an autonomous driv-
ing semantic segmentation dataset, Cityscapes [9]. How-
ever, FedDrive [14] focuses on domain heterogeneity (im-
ages from different cities) while ignoring the more challeng-
ing class-heterogeneous problem.
In this paper, we focus on class-heterogeneous feder-
ated learning for semantic segmentation, which has spe-
cific and more severe issues compared with classifica-
tion. First, images for semantic segmentation are more
complex, and pixel-level annotation is extremely time-
consuming. Clients usually annotate the objects of fre-
quent classes and ignore the rare ones. Due to the non-
IID (non-Independent Identically Distribution) data distri-
bution of different clients, classes ignored by one client
may be foreground classes in another client. For exam-
ple, in Fig. 1 (a), the ignored class “person” in Client 1 is
annotated in Client 2. The foreground-background incon-
sistency across clients leads to divisive optimization direc-
tions and degrades the capability of the aggregated global
model. Second, as shown in Fig. 1 (b), even if there is
no foreground-background inconsistency, for non-IID dis-
tribution, complex dense prediction makes the local opti-
mization direction diverging farther to the global optimum
compared with classification tasks, resulting in poor conver-
gence. From the perspective of the pixel embedding space,
the local update in each client cannot learn the relative posi-
tions of different semantic classes in the pixel embedding
space, leading to the confounded embedding space after
global aggregation.
In this paper, we propose a new federated learning
method for semantic segmentation, FedSeg, to address the
above issues. A standard objective function for semantic
segmentation is the cross-entropy (CE) loss which takes
effect on foreground pixels and ignores the background
pixels. For FL with non-IID data distribution, it makes
the learned local optimum away from the global optimum.
Thus, we propose a simple but strong baseline, a modified
cross-entropy loss, by aggregating the probabilities of back-
ground classes. The modified loss corrects “client drift” in
local updates and alleviates the foreground-background in-
consistency problem. Then we further introduce a local-
to-global pixel-level contrastive learning loss to enforce the
local pixel embedding space close to the global semantic
space, improving the convergence of the global model.
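A sketch of the background-aggregating cross-entropy is given below (PyTorch). The exact aggregation and normalization used by FedSeg may differ; here a pixel's "background" probability is simply the summed probability of every class the client does not annotate, and bg_label/local_classes are assumed conventions.

```python
import torch
import torch.nn.functional as F

def modified_ce_loss(logits, labels, local_classes, bg_label=0):
    """Sketch of a background-aggregating CE. logits: (N, C, H, W); labels: (N, H, W),
    where pixels of any class not annotated on this client carry bg_label."""
    probs = F.softmax(logits, dim=1).clamp_min(1e-8)
    is_local = torch.zeros(probs.size(1), dtype=torch.bool, device=probs.device)
    is_local[torch.as_tensor(local_classes, device=probs.device)] = True

    # probability mass of every class this client does NOT annotate
    bg_prob = probs[:, ~is_local].sum(dim=1).clamp_min(1e-8)
    fg_mask = labels != bg_label

    fg_loss = -torch.log(probs.gather(1, labels.unsqueeze(1)).squeeze(1))[fg_mask]
    bg_loss = -torch.log(bg_prob)[~fg_mask]
    return torch.cat([fg_loss, bg_loss]).mean()
```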
Extensive experiments on four semantic segmentation
datasets (Cityscapes [9], CamVID [3], PascalVOC [13] and
ADE20k [63]) are conducted to evaluate the effectiveness
of our FedSeg. Experimental results show that the simple modified cross-entropy loss significantly improves the
segmentation quality. Based on it, our proposed local-to-
global pixel contrastive learning consistently improves the
segmentation performance compared with previous FL al-
gorithms [1, 22, 26, 31].
To summarize, the contributions of this paper are as fol-
lows:
•We systematically investigate federated learning for
the semantic segmentation task with a variety of classes,
particularly the class-heterogeneous problem.
•We propose a strong baseline with a simple modified
CE loss and a local-to-global metric learning method to
alleviate the class distribution drift problem across clients.
•We provide benchmarks on four semantic segmenta-
tion datasets to evaluate our FedSeg for the semantic seg-
mentation FL problem. We hope this work will motivate
the FL community to further study the federated learning
problem for challenging semantic segmentation tasks.
|
Li_RiDDLE_Reversible_and_Diversified_De-Identification_With_Latent_Encryptor_CVPR_2023 | Abstract
This work presents RiDDLE, short for Reversible and
Diversified De-identification with Latent Encryptor, to pro-
tect the identity information of people from being misused.
Built upon a pre-learned StyleGAN2 generator, RiDDLE
manages to encrypt and decrypt the facial identity within
the latent space. The design of RiDDLE has three appealing
properties. First, the encryption process is cipher-guided
and hence allows diverse anonymization using different
passwords. Second, the true identity can only be de-
crypted with the correct password, otherwise the system
will produce another de-identified face to maintain the
privacy. Third, both encryption and decryption share
an efficient implementation, benefiting from a carefully
tailored lightweight encryptor. Comparisons with existing
alternatives confirm that our approach accomplishes the
de-identification task with better quality, higher diversity,
and stronger reversibility. We further demonstrate the
effectiveness of RiDDLE in anonymizing videos. Code is
available in https://github.com/ldz666666/RiDDLE.
| 1. Introduction
Recent advances in deep learning and computer vision
technology bring convenience together with security con-
cerns. Personal images shared on social media can be
collected and abused by unauthorized software and mali-
cious attackers, posing a threat to the privacy of individuals.
Comparing with all other biometrics, face has unique im-
portance because of its extensive application scenarios and
abundant personal information. Face de-identification aims
to hide the identity in the face image or video stream for
privacy protection. Traditional de-identification methods
such as blurring and mosaicing can effectively obfuscate
the identity information, but the protected image is often
Figure 1. Encryption and decryption on images in the wild. The first column shows the original people. The second and third columns show different encrypted faces according to different passwords. The fourth column is the correctly decrypted faces, and the last two columns are the incorrectly decrypted faces.
severely damaged and loses its utility.
Current de-identification methods can generate a similar-
looking person with a changed identity and have improved
the image quality and utility by a large margin. How-
ever, many existing works [6, 10, 25] tend to generate
faces with homogeneous appearances for different people,
which makes it easy for a hacker to realize that these faces are
anonymized. In some cases, e.g. online conferences [17],
the de-identified faces of participants should be different
from each other. For reliable identity protection, it is
also necessary to consider variations in ethnicity, age,
gender and other facial features. Therefore, diversity is
important to face identification. Meanwhile, it is vital for
the anonymous faces to keep superior image quality and
utility. The former brings better photo-realism and stronger
safety, while the latter makes it possible to perform identity-
agnostic task on an anonymous face image in a privacy-
preserving manner.
Moreover, most of the previous works only pay atten-
tion to the process of de-identification, and neglect the
importance of identity recovery. Reversibility of the de-
identification system is also crucial. For instance, family
members are more willing to see the real face rather than
the de-identified one and people may want to share data that
only certain authorized parties can interpret.
Generally, a well-developed de-identification model
should hold the following properties. a) Maintaining high
quality and the utility of the anonymous faces, as well as the
identity independent attributes. b) Generating diversified
virtual identities for safer privacy protection. c) Recovering
the original faces when security conditions are satisfied.
To achieve the above goals, we propose RiDDLE, which
is short for Reversible and Diversified De-identification
with Latent Encryptor. The main features of our framework
are shown below.
Better Quality. First, we project the face image onto the
latent space of a strong generator, StyleGAN2 [13]. Due
to the decoupling characteristics of the face manifold, the
identity independent attributes can be largely preserved. At
the same time, high-quality virtual faces can be synthesized.
De-identification and recovery results on images in the wild
are shown in Figure 1.
Higher Diversity. After obtaining the inverted la-
tent code, we perform encryption and decryption with a
lightweight latent encryptor together with several randomly
sampled passwords. In the encryption phase, each password
is associated with a unique identity. Discrepancy between
different anonymous faces is maximized, resulting in high
diversity.
Stronger Reversibility. In the decryption phase, when
the password is correct, the true identity can be restored.
Otherwise, a new de-identified face with photo-realism is
returned. Different from opponents [7] and [4] which can
achieve identity recovery to some extent, RiDDLE is free
from manually designed encryption rules, does not need to
be retrained for different passwords, and brings fewer visual
artifacts.
Another advantage of performing latent encryption is
that when face datasets are unavailable due to privacy
reasons, randomly sampled latents can be used to train
the encryptor. We evaluate RiDDLE on both face image
and video de-identification tasks. Sufficient experiments
on public datasets and in the wild data have proven the
superiority of RiDDLE over previous works.
|
Qi_Real-Time_6K_Image_Rescaling_With_Rate-Distortion_Optimization_CVPR_2023 | Abstract
Contemporary image rescaling aims at embedding a
high-resolution (HR) image into a low-resolution (LR)
thumbnail image that contains embedded information for
HR image reconstruction. Unlike traditional image super-
resolution, this enables high-fidelity HR image restoration
faithful to the original one, given the embedded informa-
tion in the LR thumbnail. However, state-of-the-art image
rescaling methods do not optimize the LR image file size for
efficient sharing and fall short of real-time performance for
ultra-high-resolution ( e.g., 6K) image reconstruction. To
address these two challenges, we propose a novel frame-
work (HyperThumbnail) for real-time 6K rate-distortion-
aware image rescaling. Our framework first embeds an HR
image into a JPEG LR thumbnail by an encoder with our
proposed quantization prediction module, which minimizes
the file size of the embedding LR JPEG thumbnail while
maximizing HR reconstruction quality. Then, an efficient
frequency-aware decoder reconstructs a high-fidelity HR
image from the LR one in real time. Extensive experiments
demonstrate that our framework outperforms previous im-
age rescaling baselines in rate-distortion performance and
can perform 6K image reconstruction in real time.
| 1. Introduction
With an increasing number of high-resolution (HR) im-
ages being produced and shared by users on the internet,
a new challenge has arisen: how can we store and transfer
HR images efficiently? Storing HR images on the cloud,
such as iCloud, is becoming a widely adopted solution that
saves storage on a user’s mobile device ( e.g., smartphones)
as only their low-resolution (LR) counterparts are stored on
the mobile device for an instant preview. However, when
a user wants to obtain the full-resolution image, the entire
HR image must be downloaded on the fly from the cloud,
which can result in a poor user experience when the internet
connection is unstable or not available.
Real-time image rescaling can serve as a competitive so-
lution to improving the user experience of cloud photo stor-
Figure 1. The application of 6K image rescaling in the context of cloud photo storage on smartphones (e.g., iCloud). As more high-resolution (HR) images are uploaded to cloud storage nowadays, challenges are brought to cloud service providers (CSPs) in fulfilling latency-sensitive image reading requests (e.g., zoom-in) through the internet. To facilitate faster transmission and high-quality visual content, our HyperThumbnail framework helps CSPs to encode an HR image into an LR JPEG thumbnail, which users could cache locally. When the internet is unstable or unavailable, our method can still reconstruct a high-fidelity HR image from the JPEG thumbnail in real time. (In the diagram, a 5760×3240, 48.3 MB HR image is encoded into a 1440×810, 1.37 MB thumbnail.)
age, as shown in Fig. 1. Such a solution can first embed an
HR image (on the cloud) into an LR JPEG thumbnail (on
the mobile device) by an encoder, and the thumbnail pro-
vides an instant preview with little storage. When the user
wants to zoom in on the thumbnail, the HR image with fine
details can be reconstructed locally in real time. In addition,
image rescaling has other applications in image sharing, as
it can “bypass” the resolution limitation of some platforms
(e.g., WhatsApp) to reconstruct a high-quality HR image
from an LR one [57]. While modern smartphones and cam-
eras can capture ultra-high-resolution images in 4K (iPhone
13) or even 6K (Blackmagic camera), we are interested in
designing a real-time image rescaling framework for ultra-
high-resolution images ( e.g., 4K or 6K), which minimizes
LR file size while maximizing HR and LR image quality.
However, existing image rescaling methods have their
own flaws in practice, as shown in Table 1 where we
compare different image rescaling methods in terms of
their properties. One potential solution is to upsample the
downsampled thumbnail with super-resolution (SR) meth-
Method                          (a) Downsampled JPEG +       (b) Flow-based           (c) Ours
                                    super-resolution [37]        rescaling [36, 55]
Reconstruction fidelity                    ✗                           ✓                   ✓
Rate-distortion optimization               ✗                           ✗                   ✓
Real-time 6K reconstruction                –                           ✗                   ✓
Table 1. The comparison of different methods related to image rescaling. (a) Super-resolution from downsampled JPEG does not optimize rate-distortion performance and can hardly maintain high fidelity due to information lost in downsampling. (b) SOTA flow-based image rescaling methods also ignore the file size constraints and are not real-time for 6K reconstruction due to the limited speed of invertible networks. (c) Our framework optimizes rate-distortion performance while maintaining high-fidelity and real-time 6K image rescaling.
ods [14, 17, 35, 37, 61, 62] (Table 1(a)). However, such a
framework applies a simple downsampling strategy ( e.g.,
Bilinear, Bicubic) to the HR image so that high-frequency
details are basically lost in the LR thumbnail. Also, SR
methods only focus on HR reconstruction, which leads to
a sub-optimal image rescaling performance. Instead, ded-
icated image rescaling approaches aim to embed informa-
tion into a visually pleasing LR image and then recon-
struct the HR image with an upsampling module. Recently,
state-of-the-art image rescaling works utilize normalizing
flow [25,36,55,57] show impressive image embedding and
reconstruction capability that outperforms SR approaches,
in terms of the reconstructed HR image quality. However,
there are still some great challenges to apply these flow-
based rescaling frameworks in real-world applications, as
shown in Table 1(b). First, the file size of the LR thumbnail
is not optimized. Second, the reconstruction stage of these
image rescaling methods is computationally expensive due
to their invertible network architecture with extensive use
of dense blocks [23]: IRN [55] costs about a second to re-
construct a 4K image with 4x rescaling on a modern GPU,
which is far from real time (Table 2).
In this work, we propose the HyperThumbnail, a rate-
distortion-aware framework for 6K real-time image rescal-
ing, as shown in Table 1(c). In this framework, we embed
an HR image into a low-bitrate JPEG thumbnail by an en-
coder and a quantization table predictor, as JPEG is a dom-
inant image compression format today [3]. Then the JPEG
thumbnail can be upscaled to its high-fidelity HR counter-
part with our efficient decoder in real time. We leverage
an asymmetric encoder-decoder architecture, where most
computation is put in the encoder to keep the decoder
lightweight. This makes it possible for our decoder to up-
scale a thumbnail to 6K in real time, significantly faster than
previous flow-based image rescaling methods [36, 55].
Meanwhile, the Rate-Distortion (RD) performance is an
important and practical metric rarely studied in prior rescal-
ing works. In this paper, we define the rate as the ratio
between the thumbnail file size and the number of pixelsin the HR image, also known as the bits-per-pixel (bpp).
The distortion consists of two parts: the perceptual qual-
ity of the thumbnail (LR distortion) and the fidelity of the
restored HR image (HR distortion). The rate-distortion per-
formance evaluates an image rescaling framework in both
storage cost and visual quality. Without explicit RD con-
straints, recent works in image rescaling [36, 55] do not
consider RD performance in their models. While some
works [26,47,53,54,58] leverage the rate constraint by em-
bedding extra information in JPEG, they simply utilize a
fixed differentiable JPEG module, which we argue is sub-
optimal for image rescaling. Because such a process dete-
riorates the information in the embedding images without
considering their local distribution. Moreover, the quanti-
zation process of JPEG introduces noise in the frequency
domain and introduces well-known JPEG artifacts, which
brings great challenges to information restoration.
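Written compactly, a rate-distortion-aware training objective consistent with this definition takes the form below; the specific distortion measures, weights, and the bicubic LR reference are assumptions rather than the paper's exact losses.

```latex
\mathcal{L} \;=\; \underbrace{R}_{\text{rate (bpp)}}
\;+\; \lambda_{\mathrm{LR}}\, d\!\left(x_{\mathrm{LR}},\, \mathrm{Bicubic}(x_{\mathrm{HR}})\right)
\;+\; \lambda_{\mathrm{HR}}\, d\!\left(\hat{x}_{\mathrm{HR}},\, x_{\mathrm{HR}}\right),
\qquad
R \;=\; \frac{\text{file size of the JPEG thumbnail in bits}}{\#\text{pixels of } x_{\mathrm{HR}}} .
```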
To remedy these issues, our image rescaling framework
is designed to jointly optimize image quality and bpp with
entropy models. Instead of using fixed quantization ta-
bles in conventional JPEG (Sec. 3.1), we propose a novel
quantization prediction module (QPM) that predicts image-
adaptive quantization tables, which can optimize RD perfor-
mance. We further adopt a frequency-aware decoder which
alleviates JPEG artifacts in the thumbnails and improves
HR reconstruction. Moreover, our asymmetric encoder-
decoder framework can be extended to optimization-based
compression.
Our contributions are summarized as follows:
• We propose a 6K real-time rescaling framework with
an asymmetric encoder-decoder architecture, named
HyperThumbnail, which embeds a high-resolution im-
age into a JPEG thumbnail that can be viewed in pop-
ular browsers. The decoder utilizes both spatial and
frequency information to reconstruct high-fidelity im-
ages in real time for 6K image upsampling.
• We introduce a new quantization prediction module
(QPM) that improves the RD performance in the en-
coding stage of our framework. Furthermore, we adopt
rate-distortion-aware loss functions along with QPM
to optimize the RD performance.
• Experiments show that our framework outperforms
state-of-the-art image rescaling methods with higher
LR and HR image quality and faster reconstruction
speed at similar file size.
|
Olson_Cross-GAN_Auditing_Unsupervised_Identification_of_Attribute_Level_Similarities_and_Differences_CVPR_2023 | Abstract
Generative Adversarial Networks (GANs) are notori-
ously difficult to train especially for complex distributions
and with limited data. This has driven the need for tools
to audit trained networks in human intelligible format, for
example, to identify biases or ensure fairness. Existing
GAN audit tools are restricted to coarse-grained, model-
data comparisons based on summary statistics such as FID
or recall. In this paper, we propose an alternative ap-
proach that compares a newly developed GAN against a
prior baseline. To this end, we introduce Cross-GAN Audit-
ing(xGA) that, given an established “reference” GAN and
a newly proposed “client” GAN, jointly identifies intelligi-
ble attributes that are either common across both GANs,
novel to the client GAN, or missing from the client GAN. This provides both users and model developers an intuitive
assessment of similarity and differences between GANs. We
introduce novel metrics to evaluate attribute-based GAN
auditing approaches and use these metrics to demonstrate
quantitatively that xGA outperforms baseline approaches.
We also include qualitative results that illustrate the com-
mon, novel and missing attributes identified by xGA from
GANs trained on a variety of image datasets1.
| 1. Introduction
Generative Adversarial Networks (GANs) [12, 19–21]
have become ubiquitous in a range of high impact commer-
cial and scientific applications [5, 7–9, 13]. With this pro-
1 Source code is available at https://github.com/mattolson93/cross_gan_auditing
lific use comes a growing need for investigative tools that
are able to evaluate, characterize and differentiate one GAN
model from another, especially since such differences can
arise from a wide range of factors – biases in training data,
model architectures and hyper parameters used in training
etc. In practice, this has been mostly restricted to compar-
ing two or more GAN models against the dataset they were
trained on using summary metrics such as Fréchet Inception
Distance (FID) [16] and precision/recall [20] scores.
However, in many real world scenarios, different models
may not even be trained on the same dataset, thereby mak-
ing such summary metrics incomparable. More formally, if
we define the model comparison problem as one being be-
tween a known – and presumably well vetted – reference
GAN and a newly developed client GAN. For example,
the reference GANs can correspond to models purchased
from public market places such as AWS [2], Azure [3], or
GCP [11], or to community-wide standards. Furthermore,
there is a critical need for more fine-grained, interpretable,
investigative tools in the context of fairness and account-
ability. Broadly, this class of methods can be studied un-
der the umbrella of AI model auditing [1, 6, 32]. Here, the
interpretability is used in the context to indicate that the
proposed auditing result will involve human intelligi-
ble attributes, rather than summary statistics that do not have
explicit association with meaningful semantics.
While auditing classifiers has received much attention in
the past [32], GAN auditing is still a relatively new research
problem with existing efforts focusing on model-data com-
parisons, such as identifying how faithfully a GAN recovers
the original data distribution [1]. In contrast, we are inter-
ested in developing a more general framework that enables
a user to visually audit a “client” GAN model with respect
to the “reference”. This framework is expected to support
different kinds of auditing tasks: (i) comparing different
GAN models trained on the same dataset (e.g. StyleGAN3-
Rotation and StyleGAN3-Translate on FFHQ); (ii) compar-
ing models trained on datasets with different biases (e.g.,
StyleGAN with race imbalance vs StyleGAN with age im-
balance); and finally (iii) comparing models trained using
datasets that contain challenging distribution shifts (e.g.,
CelebA vs Toons). Since these tools are primarily intended
for human experts and auditors, interpretability is critical.
Hence, it is natural to perform auditing in terms of human
intelligible attributes. Though there has been encouraging
progress in automatically discovering such attributes from a
single GAN in the recent years [14, 28, 39, 40, 43] they are
not applicable to our setting with multiple GANs.
Proposed work We introduce cross-GAN auditing (xGA),
an unsupervised approach for identifying attribute similar-
ities and differences between client GANs and reference
models (which could be pre-trained and potentially unre-
lated). Since the GANs are trained independently, their la-tent spaces are disparate and encode different attributes, and
thus they are not directly comparable. Consequently, dis-
covering attributes is only one part of the solution; we also
need to ‘align’ humanly meaningful and commonly occur-
ring attributes across the individual latent spaces.
Our audit identifies three distinct sets of attributes:
(a) common: attributes that exist in both client and refer-
ence models; (b) novel: attributes encoded only in the client
model; (c) missing: attributes present only in the reference.
In order to identify common attributes, xGA exploits the
fact that shared attributes should induce similar changes in
the resulting images across both the models. On the other
hand, to discover novel/missing attributes, xGA leverages
the key insight that attribute manipulations unique to one
GAN can be viewed as out of distribution (OOD) to the
other GAN. Using empirical studies with a variety of Style-
GAN models and benchmark datasets, we demonstrate that
xGA is effective in providing a fine-grained characterization
of generative models.
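A minimal sketch of the common-attribute criterion is given below (PyTorch). The generators, the external robust feature extractor phi, and the step size alpha are assumed inputs, and the actual alignment objective and OOD-based novel/missing criterion follow the paper rather than this code.

```python
import torch
import torch.nn.functional as F

def feature_change(G, phi, z, d, alpha=3.0):
    """Change induced in an external robust feature space when latent z moves along direction d."""
    return (phi(G(z + alpha * d)) - phi(G(z))).flatten(1)     # (B, D)

def attribute_alignment(G_a, G_b, phi, d_a, d_b, z_a, z_b):
    # shared attributes should induce similar feature-space changes in both GANs
    delta_a = feature_change(G_a, phi, z_a, d_a).mean(0)
    delta_b = feature_change(G_b, phi, z_b, d_b).mean(0)
    return F.cosine_similarity(delta_a, delta_b, dim=0)       # high -> candidate common attribute
```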
Contributions (i) We present the first cross-GAN audit-
ing framework that uses a unified, attribute-centric method
to automatically discover common, novel, and missing at-
tributes from two or more GANs; (ii) Using an external,
robust feature space for optimization, xGA produces high-
quality attributes and achieves effective alignment even
across challenging distribution shifts; (iii) We introduce
novel metrics to evaluate attribute-based GAN auditing ap-
proaches; and (iv) We evaluate xGA using StyleGANs
trained on CelebA, AFHQ, FFHQ, Toons, Disney and Met-
Faces, and also provide a suite of controlled experiments to
evaluate cross-GAN auditing methods.
|
Li_Towards_High-Quality_and_Efficient_Video_Super-Resolution_via_Spatial-Temporal_Data_Overfitting_CVPR_2023 | Abstract
As deep convolutional neural networks (DNNs) are
widely used in various fields of computer vision, leveraging
the overfitting ability of the DNN to achieve video resolu-
tion upscaling has become a new trend in the modern video
delivery system. By dividing videos into chunks and over-
fitting each chunk with a super-resolution model, the server
encodes videos before transmitting them to the clients, thus
achieving better video quality and transmission efficiency.
However, a large number of chunks are expected to ensure
good overfitting quality, which substantially increases the
storage and consumes more bandwidth resources for data
transmission. On the other hand, decreasing the number of
chunks through training optimization techniques usually re-
quires high model capacity, which significantly slows down
execution speed. To reconcile such, we propose a novel
method for high-quality and efficient video resolution up-
scaling tasks, which leverages the spatial-temporal infor-
mation to accurately divide video into chunks, thus keep-
ing the number of chunks as well as the model size to min-
imum. Additionally, we advance our method into a sin-
gle overfitting model by a data-aware joint training tech-
nique, which further reduces the storage requirement with
negligible quality drop. We deploy our models on an off-
the-shelf mobile phone, and experimental results show that
our method achieves real-time video super-resolution with
high video quality. Compared with the state-of-the-art, our
method achieves 28 fps streaming speed with 41.6 PSNR,
which is 14× faster and 2.29 dB better in the live video
resolution upscaling tasks. Code available in https://
github.com/coulsonlee/STDO-CVPR2023.git .
| 1. Introduction
Being praised by its high image quality performance
and wide application scenarios, deep learning-based super-
resolution (SR) becomes the core enabler of many incred-
Figure 1. Patch PSNR heatmap of two frames (frame 250/375 and frame 350/375) in a 15s video when super-resolved by a general WDSR model. A clear boundary shows that PSNR is strongly related to video content.
ible, cutting-edge applications in the field of image/video
reparation [10, 11, 39, 40], surveillance system enhance-
ment [9], medical image processing [35], and high-quality
video live streaming [20]. Distinct from the traditional
methods that adopt classic interpolation algorithms [15, 45]
to improve the image/video quality, the deep learning-based
approaches [10, 11, 21, 24, 28, 40, 44, 47, 57, 60] exploit
the advantages of learning a mapping function from low-
resolution (LR) to high-resolution (HR) using external data,
thus achieving better performance due to better generaliza-
tion ability when meeting new data.
Such benefits have driven numerous interests in design-
ing new methods [5, 17, 50] to deliver high-quality video
stream to users in the real-time fashion, especially in the
context of massive online video and live streaming avail-
able. Among this huge family, an emerging representa-
tive [13,16,31,38] studies the prospect of utilizing SR model
to upscale the resolution of the LR video in lieu of transmit-
ting the HR video directly, which in many cases, consumes
tremendous bandwidth between servers and clients [19].
One practical method is to deploy a pretrained SR model
on the devices of the end users [25, 54], and perform res-
olution upscaling for the transmitted LR videos, thus ob-
taining HR videos without causing bandwidth congestion.
However, the deployed SR model that is trained with lim-
ited data usually suffers from limited generalization abil-
Figure 2. Overview of the proposed STDO method. Each video frame is sliced into patches, and all patches across the time dimension are divided and grouped into chunks (e.g., a chunk of low-PSNR patches and a chunk of high-PSNR patches). Here we set the number of chunks to 2 for clear illustration. Then each chunk is overfitted by independent SR models, and delivered to the end-user for video super-resolution.
ity, and may not achieve good performance at the presence
of new data distribution [55]. To overcome this limitation,
new approaches [4, 8, 20, 30, 51, 53, 55] exploit the overfit-
ting property of DNN by training an SR model for each
video chunk (i.e., a fragment of the video), and deliver-
ing the video alongside the corresponding SR models to
the clients. This trade-off between model expressive power
and the storage efficiency significantly improves the quality
of the resolution upscaled videos. However, to obtain bet-
ter overfitting quality, more video segments are expected,
which notably increase the data volume as well as system
overhead when processing the LR videos [55]. While ad-
vanced training techniques are proposed to reduce the num-
ber of SR models [30], it still requires overparameterized
SR backbones (e.g., EDSR [28]) and handcrafted modules
to ensure sufficient model capacity for the learning tasks,
which degrades the execution speed at user-end when the
device is resource-constrained.
In this work, we present a novel approach towards high-
quality and efficient video resolution upscaling via Spatial-
Temporal Data Overfitting, namely STDO, which for the
first time, utilizes the spatial-temporal information to accu-
rately divide video into chunks. Inspired by the work pro-
posed in [1, 14, 23, 46, 58] that images may have different
levels of intra- and inter-image (i.e., within one image or
between different images) information density due to var-
ied texture complexity, we argue that the unbalanced infor-
mation density within or between frames of the video uni-
versally exists, and should be properly managed for data
overfitting. Our preliminary experiment in Figure 1 showsthat the PSNR values at different locations in a video frame
forms certain pattern regarding the video content, and ex-
hibits different patterns along the timeline. Specifically, at
the server end, each frame of the video is evenly divided
into patches, and then we split all the patches into multi-
ple chunks by PSNR regarding all frames. Independent SR
models will be used to overfit the video chunks, and then de-
livered to the clients. Figure 2 demonstrates the overview of
our proposed method. By using spatial-temporal informa-
tion for data overfitting, we reduce the number of chunks
as well as the overfitting models since they are bounded
by the nature of the content, which means our method can
keep a minimum number of chunks regardless the dura-
tion of videos. In addition, since each chunk has similar
data patches, we can actually use smaller SR model without
handcrafted modules for the overfitting task, which reduces
the computation burden for devices of the end-user. Our
experimental results demonstrate that our method achieves
real-time video resolution upscaling from 270p to 1080p on
an off-the-shelf mobile phone with high PSNR.
Note that STDO encodes different video chunks with
independent SR models, we further improve it by a Joint
training technique ( JSTDO ) that results in one single SR
model for all chunks, which further reduces the storage
requirement. We design a novel data-aware joint training
technique, which trains a single SR model with more data
from higher information density chunks and less data from
their counterparts. The underlying rationale is consistent
with the discovery in [46, 58], that more informative data
contributes majorly to the model training. We summarize
10260
our contributions as follows:
• We discover the unbalanced information density within
video frames, and it universally exists and constantly
changes along the video timeline.
• By leveraging the unbalanced information density in
the video, we propose a spatial-temporal data overfit-
ting method STDO for video resolution upscaling, which
achieves outperforming video quality as well as real-time
execution speed.
• We propose an advanced data-aware joint training tech-
nique which takes different chunk information density
into consideration, and reduces the number of SR mod-
els to a single model with negligible quality degradation.
• We deploy our models on an off-the-shelf mobile phone,
and achieve real-time super-resolution performance.
|
Miangoleh_Realistic_Saliency_Guided_Image_Enhancement_CVPR_2023 | Abstract
Common editing operations performed by profes-
sional photographers include the cleanup operations: de-
emphasizing distracting elements and enhancing subjects.
These edits are challenging, requiring a delicate balance
between manipulating the viewer’s attention while main-
taining photo realism. While recent approaches can boast
successful examples of attention attenuation or amplifica-
tion, most of them also suffer from frequent unrealistic ed-
its. We propose a realism loss for saliency-guided image en-
hancement to maintain high realism across varying image
types, while attenuating distractors and amplifying objects
of interest. Evaluations with professional photographers
confirm that we achieve the dual objective of realism and ef-
fectiveness, and outperform the recent approaches on their
own datasets, while requiring a smaller memory footprint
and runtime. We thus offer a viable solution for automating
image enhancement and photo cleanup operations. | 1. Introduction
In everyday photography, the composition of a photo
typically encompasses subjects on which the photographer
intends to focus our attention, rather than other distracting
things. When distracting things cannot be avoided, photog-
raphers routinely edit their photos to de-emphasize them.
Conversely, when the subjects are not sufficiently visible,
photographers routinely emphasize them. Among the most
common emphasis and de-emphasis operations performed
by professionals are the elementary ones: changing the sat-
uration, exposure, or the color of each element. Although
conceptually simple, these operations are challenging to ap-
ply because they must delicately balance the effects on the
viewer attention with photo realism.
To automate this editing process, recent works use
saliency models as a guide [1,2,4,8,16,17]. These saliency
models [3, 7, 10, 14, 19] aim to predict the regions in the
image that catch the viewer’s attention, and saliency-guided
image editing methods are optimized to increase or decrease
the predicted saliency of a selected region. Optimizing
solely based on the predicted saliency, however, often re-
sults in unrealistic edits, as illustrated in Fig. 1. This issue
results from the instability of saliency models under the im-
age editing operations, as saliency models are trained on
unedited images. Unrealistic edits can have low predicted
saliency even when they are highly noticeable to human ob-
servers, or vice versa. This was also noted by Aberman et
al. [1], and is illustrated in Fig. 2.
Previous methods tried to enforce realism using adver-
sarial setups [2,4,8,17], GAN priors [1,8], or cycle consis-
tency [2] but with limited success (Fig. 1). Finding the exact
point when an image edit stops looking realistic is challeng-
ing. Rather than focusing on the entire image, in this work,
we propose a method for measuring the realism of a local
edit. To train our network, we generate realistic image ed-
its by subtle perturbations to exposure, saturation, color or
white balance, as well as very unrealistic edits by apply-
ing extreme adjustments. Although our network is trained
with only positive and negative examples at the extremes,
we successfully learn a continuous measure of realism for a
variety of editing operations as shown in Fig. 3.
We apply our realism metric to saliency-guided image
editing by training the system to optimize the saliency of
a selected region while being penalized for deviations from
realism. We show that a combined loss allows us to enhance
or suppress a selected region successfully while maintaining
high realism. Our method can also be applied to multiple
regions in a photograph as shown in Fig. 1.
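The combined objective can be sketched as below (PyTorch); the saliency and realism network interfaces, the masked averaging, and the weighting are illustrative assumptions.

```python
import torch

def guided_edit_loss(saliency_net, realism_net, edited, mask, boost=True, w_realism=1.0):
    """edited: (N, 3, H, W) edited photos; mask: (N, 1, H, W) selected region."""
    sal = saliency_net(edited)                                          # (N, 1, H, W)
    region_sal = (sal * mask).sum(dim=(1, 2, 3)) / mask.sum(dim=(1, 2, 3)).clamp_min(1.0)
    attention_term = -region_sal if boost else region_sal               # amplify or attenuate
    realism_term = -realism_net(edited, mask)                           # higher = more realistic edit
    return (attention_term + w_realism * realism_term).mean()
```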
Evaluations with professional photographers and photo
editors confirm our claim that we maintain high realism and
succeed at redirecting attention in the edited photo. Further,
our results are robust to different types of images including
human faces, and are stable across different permutations
of edit parameters. Taken together with our model size of
26Mb and run-time of 8ms, these results demonstrate that
we have a more viable solution for broader use than the ap-
proaches that are available for these tasks to date.
|
Long_PointClustering_Unsupervised_Point_Cloud_Pre-Training_Using_Transformation_Invariance_in_Clustering_CVPR_2023 | Abstract
Feature invariance under different data transformations,
i.e., transformation invariance, can be regarded as a type
of self-supervision for representation learning. In this pa-
per, we present PointClustering, a new unsupervised rep-
resentation learning scheme that leverages transformation
invariance for point cloud pre-training. PointClustering
formulates the pretext task as deep clustering and employs
transformation invariance as an inductive bias, following
the philosophy that common point cloud transformation will
not change the geometric properties and semantics. Techni-
cally, PointClustering iteratively optimizes the feature clus-
ters and backbone, and delves into the transformation in-
variance as learning regularization from two perspectives:
point level and instance level. Point-level invariance learn-
ing maintains local geometric properties through gathering
point features of one instance across transformations, while
instance-level invariance learning further measures cluster-
s over the entire dataset to explore semantics of instances.
Our PointClustering is architecture-agnostic and readily
applicable to MLP-based, CNN-based and Transformer-
based backbones. We empirically demonstrate that the
models pre-learnt on the ScanNet dataset by PointClus-
tering provide superior performances on six benchmark-
s, across downstream tasks of classification and segmen-
tation. More remarkably, PointClustering achieves an ac-
curacy of 94.5% on ModelNet40 with Transformer back-
bone. Source code is available at https://github.
com/FuchenUSTC/PointClustering .
| 1. Introduction
3D point cloud analysis has seen tremendous progress
and made great success in industrial applications, e.g., au-
tonomous driving, augmented reality and robotics. The
achievements heavily rely on large quantities of human an-
Figure 1. Illustration of (a) clustering learning on point cloud by using feature invariance at (b) point level and (c) instance level.
notations for supervised learning. However, acquiring and
manual labeling 3D point cloud data is very expensive and
time-consuming, while the underlying rich data structure is
also not yet fully leveraged. In contrast, unsupervised learn-
ing leaves it on its own to characterize the underlying fea-
ture distribution completely on data itself and is therefore
an appealing way towards more generic model pre-training.
The research in unsupervised point cloud pre-training
has mainly proceeded along two dimensions with respec-
t to the formulation of pretext task: contrastive learning
[23, 65, 74] and reconstruction [34, 52, 60]. Early works of
contrastive learning generally suggest leveraging point or
scene discrimination across different views [65] or modal-
ities [1, 74] for similarity learning. Instead, the direction
of point cloud reconstruction [34, 60] formulates the learn-
ing target as shape completion from the partial points. Un-
like existing discrimination or reconstruction paradigm in a
sample-specific manner, clustering technique estimates the
data distribution holistically for the class level. We rely on
such a recipe and shape a new unsupervised point cloud pre-
training scheme that capitalizes on deep clustering as the
pretext task. Technically, we iteratively optimize feature
clusters and backbone as shown in Figure 1(a), and utilize
transformation invariance as an inductive bias. We look into
the feature invariance learning across data transformations
from two aspects: point level and instance level. The ratio-
nale behind point level feature invariance is that the point
features of an identical object (e.g., points of the chair in
Figure 1(b)) should be invariant across different transfor-
mations since the geometric properties will not change with
transformations. Similar in spirit, the high-level semantics
of instances across 3D scenes (e.g., the instances of chair
in Figure 1(c)) do not vary along with the transformations.
As such, we delve into both point-level and instance-level
transformation invariance to regulate deep clustering.
By materializing the idea of transformation invariance
as regularization for deep clustering, we present a novel
PointClustering approach for unsupervised point cloud pre-
training. Specifically, we first obtain the instance masks of
objects in each 3D scene via Density-Based Spatial Cluster-
ing of Applications with Noise (DBSCAN) [12] algorithm.
Based on the instance masks, the point features of an iden-
tical object under different transformations are clustered to-
gether to characterize geometric properties of points. The
instance-level feature of one object is then computed by
globally pooling all point features of that object. Given al-
l instance features over the entire dataset, PointClustering
further seeks the feature consistency across transformations
at instance level. We employ InfoNCE loss to optimize the
similarity between points or instances and their correspond-
ing clustering centroids (i.e., prototypes).
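The prototype-level InfoNCE term can be written compactly as below (PyTorch); the temperature and the L2-normalization of features are common conventions assumed here rather than details quoted from the paper.

```python
import torch
import torch.nn.functional as F

def prototype_infonce(features, assignments, prototypes, tau=0.07):
    """features: (N, D) point- or instance-level features; assignments: (N,) cluster ids;
    prototypes: (K, D) clustering centroids."""
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = f @ p.t() / tau                 # similarity of each feature to every prototype
    return F.cross_entropy(logits, assignments)
```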
The main contribution of this work is a new paradigm
that leverages feature invariance under different data trans-
formations for unsupervised point cloud pre-training. The
solution also leads to the elegant view of how to explore
self-supervision from the standpoint of transformation in-
variance, and how to indicate geometric properties and se-
mantics of point cloud for unsupervised pre-training. Ex-
tensive experiments on six benchmarks over three down-
stream tasks verify that PointClustering outperforms the
state-of-the-art unsupervised pre-training models.
|
Prabhu_Computationally_Budgeted_Continual_Learning_What_Does_Matter_CVPR_2023 | Abstract
Continual Learning (CL) aims to sequentially train mod-
els on streams of incoming data that vary in distribution
by preserving previous knowledge while adapting to new
data. Current CL literature focuses on restricted access to
previously seen data, while imposing no constraints on the
computational budget for training. This is unreasonable
for applications in-the-wild, where systems are primarily
constrained by computational and time budgets, not stor-
age. We revisit this problem with a large-scale benchmark
and analyze the performance of traditional CL approaches
in a compute-constrained setting, where effective memory
samples used in training can be implicitly restricted as a
consequence of limited computation. We conduct experi-
ments evaluating various CL sampling strategies, distillation
losses, and partial fine-tuning on two large-scale datasets,
namely ImageNet2K and Continual Google Landmarks V2
in data incremental, class incremental, and time incremen-
tal settings. Through extensive experiments amounting to a
total of over 1500 GPU-hours, we find that, under a compute-
constrained setting, traditional CL approaches, with no ex-
ception, fail to outperform a simple minimal baseline that
samples uniformly from memory. Our conclusions are con-
sistent across different numbers of stream time steps, e.g., 20
to 200, and under several computational budgets. This
suggests that most existing CL methods are simply
too computationally expensive for realistic budgeted de-
ployment. Code for this project is available at: https:
//github.com/drimpossible/BudgetCL .
| 1. Introduction
Deep learning has excelled in various computer vision
tasks [8,21,25,43] by performing hundreds of shuffled passes
through well-curated offline static labeled datasets. However,
modern real-world systems, e.g., Instagram, TikTok, and
Flickr, experience high throughput of a constantly changing
stream of data, which poses a challenge for deep learning
to cope with such a setting. Continual learning (CL) aims
to go beyond static datasets and develop learning strategies
that can adapt and learn from streams where data is presented
incrementally over time, often referred to as time steps.
*authors contributed equally; order decided by a coin flip.
Figure 1. Main Findings. Under per time step computationally
budgeted continual learning, classical continual learning methods,
e.g., sampling strategies, distillation losses, and fully connected
(FC) layer correction based methods such as calibration, struggle
to cope with such a setting. Most proposed continual algorithms
are particularly useful only when large computation is available;
otherwise, minimalistic algorithms (ERM) are superior.
However, the current CL literature overlooks a key
necessity for practical real deployment of such algorithms.
In particular, most prior art is focused on offline continual
learning [22, 23, 41] where, despite limited access to previ-
ous stream data, training algorithms do not have restrictions
on the computational training budget per time step.
High-throughput streams, e.g., Instagram, where every
stream sample at every time step needs to be classified for,
say, misinformation or hate speech, are time-sensitive in
which long training times before deployment are simply not
an option. Otherwise, new stream data will accumulate until
training is completed, causing server delays and worsening
user experience.
Moreover, limiting the computational budget is necessary
towards reducing the overall cost. This is because computa-
tional costs are higher compared to any storage associated
costs. For example, on Google Cloud Standard Storage
(2¢ per GB per month), it costs no more than 6¢ to store
the entire CLEAR benchmark [26], a recent large-scale CL
dataset. On the contrary, one run of a CL algorithm on
CLEAR performing ∼300K iterations costs around $100
on an A100 Google instance ($3 per hour for 1 GPU). There-
fore, it is prudent to have computationally budgeted methods
where the memory size, as a consequence, is implicitly re-
stricted. This is because, under a computational budget, it
is no longer possible to revisit all previous data even if they
were all stored in memory (given their low memory costs).
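As a back-of-the-envelope check of the cost comparison above, the short calculation below uses only the figures quoted in the text; the derived dataset size and GPU-hours are implied quantities, not numbers reported by the paper.

```python
# Back-of-the-envelope check of the storage-vs-compute comparison above. The
# derived dataset size and GPU-hours follow from the quoted numbers and are
# not figures reported by the paper itself.
storage_price_per_gb = 0.02   # $ per GB per month (Google Cloud Standard Storage)
storage_cost = 0.06           # $ per month quoted for storing the CLEAR benchmark
gpu_price_per_hour = 3.0      # $ per hour for one A100 Google instance
run_cost = 100.0              # $ quoted for one ~300K-iteration CL run

implied_dataset_gb = storage_cost / storage_price_per_gb   # ~3 GB
implied_gpu_hours = run_cost / gpu_price_per_hour          # ~33 GPU-hours
print(f"CLEAR storage ~ {implied_dataset_gb:.0f} GB, one run ~ {implied_gpu_hours:.0f} GPU-hours")
print(f"one training run costs ~ {run_cost / storage_cost:.0f}x a month of storage")
```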
This raises the question: “ Do existing continual learning
algorithms perform well under per step restricted comp |
Michaeli_Alias-Free_Convnets_Fractional_Shift_Invariance_via_Polynomial_Activations_CVPR_2023 | Abstract
Although CNNs are believed to be invariant to transla-
tions, recent works have shown this is not the case due to
aliasing effects that stem from down-sampling layers. The
existing architectural solutions to prevent the aliasing ef-
fects are partial since they do not solve those effects that
originate in non-linearities. We propose an extended anti-
aliasing method that tackles both down-sampling and non-
linear layers, thus creating truly alias-free, shift-invariant
CNNs1. We show that the presented model is invariant to
integer as well as fractional (i.e., sub-pixel) translations,
thus outperforming other shift-invariant methods in terms
of robustness to adversarial translations.
| 1. Introduction
Convolutional Neural Networks (CNNs) are the most
common model in the image classification field. They were
originally intended to have two properties:
1. Shift-invariant output: when we spatially translate the
input image, their output does not change.
2. Shift-equivariant representation: when we spatially
translate the input image, their internal representation
translates in the same way.
Both these properties are thought to be beneficial for gener-
alization ( i.e., they are useful inductive biases), as we expect
the image class not to change by an image translation, and
its features to shift together with the image. Moreover, with-
out the first property, the CNN might become vulnerable to
adversarial attacks using image translations. Such attacks
are real threats since they are very simple to execute in a
“black-box” setting (where we do not know anything about
the CNN). For example, consider a person trying to fool a
CNN-based face scanner, by simply moving continuously
until a face match is achieved.
It was commonly assumed that these useful properties
were maintained since CNNs use only shift-equivariant op-
erations: the convolution operation and component-wise
1Our code is available at github.com/hmichaeli/alias free convnets/.
non-linearities. However, CNN models typically also in-
clude downsampling operations such as pooling and strided
convolution. Unfortunately, these operations violate equiv-
ariance, and this also leads to CNNs not being shift-
invariant. Specifically, Azulay and Weiss [2] have shown
that shifting an input image by even one pixel can cause the
output probability of a trained classifier to change signifi-
cantly. This vulnerability can be further exploited in adver-
sarial attacks, lowering classifiers’ accuracy by more than
20% [8]. Later, Zhang [33] has shown that this problem-
atic behavior stems from an aliasing effect, taking place in
downsampling operations such as pooling and strided con-
volutions, and non-linear operations on the downsampled
signals.
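The following is a small numerical sketch of the failure mode described above: a convolution with stride 1 commutes with a 1-pixel circular shift, whereas the same convolution with stride 2 does not. The toy tensor sizes and the use of circular padding are choices made for illustration, not the paper's evaluation protocol.

```python
# Illustrative check (not the paper's protocol) that striding breaks
# shift-equivariance: with stride 1 a circularly padded convolution commutes
# with a 1-pixel circular shift, while with stride 2 no 1-pixel output shift
# reproduces a 1-pixel input shift.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 32, 32)
w = torch.randn(1, 1, 3, 3)

def conv(t, stride):
    t = F.pad(t, (1, 1, 1, 1), mode='circular')  # padding consistent with circular shifts
    return F.conv2d(t, w, stride=stride)

def shift(t, d=1):
    return torch.roll(t, shifts=d, dims=-1)       # 1-pixel circular shift along width

for s in (1, 2):
    gap = (conv(shift(x), s) - shift(conv(x, s))).abs().max().item()
    print(f"stride {s}: max equivariance gap = {gap:.2e}")
```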
Previous works have shown an improvement in CNN
invariance to translations using partial solutions that re-
duced aliasing. For example, Zhang [33] has suggested
adding a low-pass filter before the downsampling opera-
tions. This approach has been shown to reduce aliasing
caused by downsampling, thus improving shift-invariance,
as well as accuracy and noise robustness. Karras et al. [17]
have addressed aliasing in the generator within generative
adversarial networks (GANs). They have shown that with-
out proper treatment, aliasing in GANs leads to a decou-
pling of the high-frequency features (texture) from the low-
frequency content (structure) in the generated images, thus
limiting their applicability in smooth video generation. To
alleviate this issue, Karras et al . [17] extended the low-
pass filter approach and suggested a solution for the implicit
aliasing caused by non-linearities. Their method wraps the
component-wise non-linear operat |
Ning_Trap_Attention_Monocular_Depth_Estimation_With_Manual_Traps_CVPR_2023 | Abstract
Predicting a high quality depth map from a single im-
age is a challenging task, because a single 2D image can
correspond to an infinite number of 3D scenes.
Recently, some studies introduced multi-head at-
tention (MHA) modules to perform long-range interaction,
which have shown significant progress in regressing the
depth maps. The main functions of MHA can be loosely
summarized to capture long-distance information and re-
port the attention map by the relationship between pixels.
However, due to the quadratic complexity of MHA, these
methods cannot leverage MHA to compute depth features
in high resolution with an appropriate computational com-
plexity. In this paper, we exploit a depth-wise convolution
to obtain long-range information, and propose a novel trap
attention, which sets some traps on the extended space for
each pixel, and forms the attention mechanism by the fea-
ture retention ratio of the convolution window, so that
the quadratic computational complexity can be converted to
a linear form. Then we build an encoder-decoder trap depth
estimation network, which introduces a vision transformer
as the encoder, and uses the trap attention to estimate the
depth from single image in the decoder. Extensive experi-
mental results demonstrate that our proposed network can
outperform the state-of-the-art methods in monocular depth
estimation on datasets NYU Depth-v2 and KITTI, with sig-
nificantly reduced number of parameters. Code is available
at: https://github.com/ICSResearch/TrapAttention.
| 1. Introduction
Depth estimation is a classical problem in computer vi-
sion (CV) field and is a fundamental component for vari-
ous applications, such as, scene understanding, autonomous
driving, and 3D reconstruction. Estimating the depth map
from a single RGB image is a challenge, since the same 2D
image can correspond to an infinite number of 3D scenes.
*Corresponding author
Figure 1. Illustration of trap attention for monocular depth esti-
mation. Note that trap attention can significantly enhance depth
estimation, as evidenced by the clearer depth differences between
the table/chairs and the background.
Therefore, the traditional depth estimation methods [27, 28, 33]
are often only suitable for predicting low-dimension, sparse
distances [27], or known and fixed targets [28], which obvi-
ously limits their application scenarios.
To overcome these constraints, many studies [1, 8, 9, 20]
have employed the deep neural networks to directly obtain
high-quality depth maps. However, most of these research
focuses on improving the performance of depth estimation
networks by designing more complex or large-scale mod-
els. Unfortunately, such a line of research would render the
depth estimation task a simple model scale problem with-
out the trade-off between performance and computational
budget.
Recently, several practitioners and researchers in monoc-
ular depth estimation [3, 17, 45] introduced the multi-head
attention (MHA) modules to perform the long-range inter-
Figure 2. Overview of our trap depth estimation network, which includes an encoder and a decoder. TB, TI and BS denote the trap block,
trap interpolation and block selection unit, respectively. TB is the basic block of our decoder, which decodes the depth feature from coarse
to fine in five stages. TB consists of a depth-wise (DW) convolution layer, a trap attention (TA) unit and a convolution based MLP. The
size of decoder depends on an arbitrary channel dimension (denoted as C). “⊕” denotes the addition operation.
Figure 3. The curves for the trap functions used in trap attention with-
out the rounding operation; panels (a)-(d) show t1-t4. (a) and (d) are two
similar curves. (b) has a higher frequency. (c) has a different initial phase
from the other curves.
action, which have shown considerable progress in regress-
ing the depth maps. Representative works of such methods
are AdaBin [3] and NeW CRFs [45]. Nevertheless, due to
the quadratic computational complexity of MHA, the computational complexity of high resolution depth map for Ad-
aBin or NeW CRFs is typically expensive, i.e., for an h×w
image, its complexity is O(h^2w^2).
To reduce the computational complexity, in this work,
we firstly exploit a deep-wise convolution layer to compute
the long-distance information and then propose an attention
mechanism, called trap attention, which leverages various
manual traps to remove some features in extended space,
and exploits a 3×3convolution window to compute rela-
tionship and attention map. As a result, the quadratic com-
putational complexity O(h^2w^2) can be converted to linear
form O(hw). As illustrated by the example in Figure 1,
the proposed trap attention is highly effective for depth esti-
mation, which can allocate more computational resource to-
ward the informative features, i.e., edges of the table and chairs,
and output a refined depth map from a coarse depth map.
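To give a sense of scale for the O(h^2w^2)-versus-O(hw) comparison, the snippet below evaluates both costs for an illustrative feature-map resolution; the chosen resolution is an assumption for illustration, not a value taken from the paper.

```python
# Rough scale of the O(h^2 w^2) vs. O(hw) comparison above. The feature-map
# resolution used here is an illustrative assumption, not a value from the paper.
h, w = 120, 160                      # e.g., a quarter-resolution depth feature map
mha_interactions = (h * w) ** 2      # pairwise pixel interactions in full MHA
linear_interactions = h * w          # per-pixel work of a linear-complexity module
print(f"MHA:    {mha_interactions:.2e} interactions")
print(f"linear: {linear_interactions:.2e} interactions "
      f"(~{mha_interactions // linear_interactions:,}x fewer)")
```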
Based on this attention mechanism, we finally build an
encoder-decoder depth estimation network, which intro-
duces a vision transformer as the encoder, and uses the trap
attention to estimate the depth from single image in the de-
coder. We can build our depth estimation network of differ-
ent scales according to the depth estimation scene, which
can obtain a balance between performance and computa-
tional budget. Experimental results show that our depth
estimation network outperforms previous estimation meth-
ods by a remarkable margin on the two most popular indoor and
outdoor datasets, NYU [36] and KITTI [11], respectively.
Specifically, our model can obtain consistent predictions
with sharp details on visual representations, and achieve the
new state-of-the-art performance in monocular depth esti-
mation, with only 35% of the parameters of the prior state-of-the-
art methods.
In summary, our main contributions are as follows:
•We use a depth-wise convolution to capture long-
distance information and introduce an extra attention
mechanism to compute the relationships between fea-
tures, which is an efficient alternative to MHA, so that
the computational complexity is reduced
from O(h^2w^2) to O(hw).
•We propose a novel attention mechanism that can al-
locate more computational resource toward the infor-
mative features, called trap attention, which is highly
effective for depth estimation.
•We build an end-to-end trap network for monocular
depth estimation, which can obtain the state-of-the-art
performance on NYU and KITTI datasets, with signif-
icantly reduced number of parameters.
|
Luan_High_Fidelity_3D_Hand_Shape_Reconstruction_via_Scalable_Graph_Frequency_CVPR_2023 | Abstract
Despite the impressive performance obtained by recent
single-image hand modeling techniques, they lack the capa-
bility to capture sufficient details of the 3D hand mesh. This
deficiency greatly limits their applications when high-fidelity
hand modeling is required, e.g., personalized hand model-
ing. To address this problem, we design a frequency split
network to generate 3D hand mesh using different frequency
bands in a coarse-to-fine manner. To capture high-frequency
personalized details, we transform the 3D mesh into the
frequency domain, and propose a novel frequency decom-
position loss to supervise each frequency component. By
leveraging such a coarse-to-fine scheme, hand details that
correspond to the higher frequency domain can be preserved.
In addition, the proposed network is scalable, and can stop
the inference at any resolution level to accommodate dif-
ferent hardware with varying computational powers. To
quantitatively evaluate the performance of our method in
terms of recovering personalized shape details, we intro-
duce a new evaluation metric named Mean Signal-to-Noise
Ratio (MSNR) to measure the signal-to-noise ratio of each
mesh frequency component. Extensive experiments demon-
strate that our approach generates fine-grained details for
high-fidelity 3D hand reconstruction, and our evaluation
metric is more effective for measuring mesh details com-
pared with traditional metrics. The code is available at
https://github.com/tyluann/FreqHand .
| 1. Introduction
High-fidelity and personalized 3D hand modeling have
seen great demand in 3D games, virtual reality, and the
emerging Metaverse, as it brings better user experiences,
e.g., users can see their own realistic hands in the virtual
space instead of the standard avatar hands.
Figure 1. An exemplar hand mesh of sufficient details and its graph
frequency decomposition. The x-axis shows frequency compo-
nents from low to high. The y-axis shows the amplitude of each
component on a logarithmic scale. In the frequency domain, the signal
amplitude generally decreases as the frequency increases.
Therefore, it is of great importance to reconstruct high-fidelity hand meshes
that can adapt to different users and application scenarios.
Despite previous successes in 3D hand reconstruction and
modeling [3, 6, 7, 16, 22, 40, 44, 46], few existing solutions
focus on enriching the details of the reconstructed shape,
and most current methods fail to generate consumer-friendly
high-fidelity hands. When we treat the hand mesh as graph
signals, like most natural signals, the low-frequency compo-
nents have larger amplitudes than those of the high-frequency
parts, which we can observe in a hand mesh spectrum curve
(Fig. 1). Consequently, if we generate the mesh purely in
the spatial domain, the signals of different frequencies could
be biased, thus the high-frequency information can be eas-
ily overwhelmed by its low-frequency counterpart. More-
over, the wide usage of compact parametric models, such as
MANO [32], has limited the expressiveness of personalized
details. Even though MANO can robustly estimate the hand
pose and coarse shape, it sacrifices hand details for compact-
ness and robustness in the parameterization process, so the
detail expression ability of MANO is suppressed.
To better model detailed 3D shape information, we trans-
form the hand mesh into the graph frequency domain, and
design a frequency-based loss function to generate high-
fidelity hand mesh in a scalable manner. Supervision in
the frequency domain explicitly constrains the signal of a
given frequency band from being influenced by other fre-
quency bands. Therefore, the high-frequency signals of
hand shape will not be suppressed by low-frequency sig-
nals despite the amplitude disadvantage. To improve the
expressiveness of hand models, we design a new hand model
of 12,337 vertices that extends previous parametric models
such as MANO with nonparametric representation for resid-
ual adjustments. While the nonparametric residual expresses
personalized details, the parametric base ensures the over-
all structure of the hand mesh, e.g., reliable estimation of
hand pose and 3D shape. Instead of fixing the hand mesh
resolution, we design our network architecture in a coarse-to-
fine manner with a three-resolution-level U-Net for scalability.
Different levels of image features contribute to different
levels of detail. Specifically, we use low-level features in
high-frequency detail generation and high-level features in
low-frequency detail generation. At each resolution level,
our network outputs a hand mesh with the corresponding
resolution. During inference, the network outputs an increas-
ingly higher resolution mesh with more personalized details
step-by-step, while the inference process can stop at any one
of the three resolution levels.
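A minimal sketch of the graph-frequency view described above is given below: vertex coordinates are projected onto the eigenvectors of a mesh Laplacian and each frequency band is supervised separately. The combinatorial Laplacian, the equal-size band split, and the plain L2 term are assumptions for illustration; the paper's exact frequency decomposition loss is not reproduced here.

```python
# Minimal sketch of decomposing mesh vertex coordinates into graph-frequency
# bands and supervising each band separately. The combinatorial Laplacian,
# equal-size band split, and plain L2 per band are illustrative assumptions,
# not the paper's exact formulation.
import numpy as np

def graph_spectrum(vertices, adjacency):
    """vertices: (V, 3) coordinates; adjacency: (V, V) symmetric 0/1 matrix."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)       # eigenvectors in ascending frequency
    return eigvecs.T @ vertices                         # (V, 3) spectral coefficients

def frequency_band_losses(pred_vertices, gt_vertices, adjacency, num_bands=3):
    pred_spec = graph_spectrum(pred_vertices, adjacency)
    gt_spec = graph_spectrum(gt_vertices, adjacency)
    bands = np.array_split(np.arange(len(pred_spec)), num_bands)
    # One L2 term per band, so low-frequency amplitude cannot dominate the rest.
    return [float(np.mean((pred_spec[b] - gt_spec[b]) ** 2)) for b in bands]
```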
In summary, our contributions include the following.
1.We design a high-fidelity 3D hand model for reconstruct-
ing 3D hand shapes from single images. The hand repre-
sentation provides detailed expression, and our frequency
decomposition loss helps to capture the personalized
shape information.
2.To enable computational efficiency, we propose a fre-
quency split network architecture to generate high-fidelity
hand mesh in a scalable manner with multiple levels of de-
tail. During inference, our scalable framework supports
budget-aware mesh reconstruction when the computa-
tional resources are limited.
3.We propose a new metric to evaluate 3D mesh details. It
better captures the signal-to-noise ratio of all frequency
bands to evaluate high-fidelity hand meshes. The effec-
tiveness of this metric has been validated by extensive
experiments.
We evaluate our method on the InterHand2.6M
dataset [29]. In addition to the proposed evaluation met-
rics, we also evaluate mean per joint position error (MPJPE)
and mesh Chamfer distance (CD). Compared to MANO and
other baselines, our proposed method achieves better results
using all three metrics.
|
Ma_Solving_Oscillation_Problem_in_Post-Training_Quantization_Through_a_Theoretical_Perspective_CVPR_2023 | Abstract
Post-training quantization (PTQ) is widely regarded as
one of the most efficient compression methods practically,
benefitting from its data privacy and low computation costs.
We argue that oscillation is an overlooked problem in existing
PTQ methods. In this paper, we take the initiative to ex-
plore and present a theoretical proof to explain why such a
problem is essential in PTQ. And then, we try to solve this
problem by introducing a principled and generalized frame-
work theoretically. In particular, we first formulate the os-
cillation in PTQ and prove the problem is caused by the dif-
ference in module capacity. To this end, we define the mod-
ule capacity (ModCap) under data-dependent and data-free
scenarios, where the differentials between adjacent modules
are used to measure the degree of oscillation. The prob-
lem is then solved by selecting top-k differentials, in which
the corresponding modules are jointly optimized and quan-
tized. Extensive experiments demonstrate that our method
successfully reduces the performance drop and is general-
ized to different neural networks and PTQ methods. For
example, with 2/4-bit ResNet-50 quantization, our method
surpasses the previous state-of-the-art method by 1.9%. The gain
becomes more significant on small-model quantization, e.g., it
surpasses the BRECQ method by 6.61% on MobileNetV2 ×0.5.
| 1. Introduction
Deep Neural Networks (DNNs) have rapidly become a
research hotspot in recent years, being applied to various
*This work was done when Yuexiao Ma was intern at ByteDance Inc.
Code is available at: https://github.com/bytedance/MRECG
†Corresponding Author: rrji@xmu.edu.cn
Figure 1. Left: Reconstruction loss distribution of BRECQ [17]
on 0.5-scaled MobileNetV2 quantized to 4/4-bit. Loss oscilla-
tion in BRECQ during reconstruction (see red dashed box). Right:
Mixed reconstruction granularity (MRECG) smooths loss oscil-
lation and achieves higher accuracy.
scenarios in practice. However, as DNNs evolve, better
model performance is usually associated with huge resource
consumption from deeper and wider networks [8, 14, 28].
Meanwhile, the research field of neural network compres-
sion and acceleration, which aims to deploy models in
resource-constrained scenarios, is gradually gaining more
and more attention, including but not limited to Neural Ar-
chitecture Search [18, 19, 33, 35–40, 42, 43], network prun-
ing [4, 7, 16, 27, 32, 41], and quantization [3, 5, 6, 15, 17, 21,
22,29,31]. Among these methods, quantization proposed to
transform float network activations and weights to low-bit
fixed points, which is capable of accelerating inference [13]
or training [44] speed with little performance degradation.
In general, network quantization methods are divided
into quantization-aware training (QAT) [3, 5, 6] and post-
training quantization (PTQ) [11, 17, 22, 31]. The former
reduces the quantization error by quantization fine-tuning.
Despite the remarkable results, the massive data require-
ments and high computational costs hinder the pervasive
deployment of DNNs, especially on resource-constrained
devices. Therefore, PTQ is proposed to solve the aforemen-
tioned problem, which requires only minor or zero calibra-
tion data for model reconstruction. Since there is no iter-
ative process of quantization training, PTQ algorithms are
extremely efficient, usually obtaining and deploying quan-
tized models in a few minutes. However, this efficiency of-
ten comes at the partial sacrifice of accuracy. PTQ typically
performs worse than full precision models without quanti-
zation training, especially in low-bit compact model quan-
tization. Some recent algorithms [17, 22, 31] try to address
this problem. For example, Nagel et al. [22] constructs new
optimization functions by second-order Taylor expansions
of the loss functions before and after quantization, which
introduces soft quantization with learnable parameters to
achieve adaptive weight rounding. Li et al. [17] changes
layer-by-layer to block-by-block reconstruction and uses di-
agonal Fisher matrices to approximate the Hessian matrix to
retain more information. Wei et al. [31] discovers that ran-
domly disabling some elements of the activation quantiza-
tion can smooth the loss surface of the quantization weights.
However, we observe that all the above methods show
different degrees of oscillation with the deepening of the
layer or block during the reconstruction process, as illus-
trated in the left sub-figure of Fig. 1. We argue that the
problem is essential and has been overlooked in the previ-
ous PTQ methods. In this paper, through strict mathemati-
cal definitions and proofs, we answer 3 questions about the
oscillation problem, which are listed as follows:
(i).Why the oscillation happens in PTQ? To answer
this question, we first define module topological homogene-
ity, which relaxes the module equivalence restriction to a
certain extent. And then, we give the definition of module
capacity under the condition of module topological homo-
geneity. In this case, we can prove that when the capacity of
the later module is large enough, the reconstruction loss will
break through the effect of quantization error accumulation
and decrease. On the contrary, if the capacity of the later
module is smaller than that of the preceding module, the
reconstruction loss increases sharply due to the amplified
quantization error accumulation effect. Overall, we demon-
strate that the oscillation of the loss during PTQ reconstruc-
tion is caused by the difference in module capacity;
(ii).How the oscillation will influence the final perfor-
mance? We observe that the final reconstruction error is
highly correlated with the largest reconstruction error in all
the previous modules by randomly sampling a large num-
ber of mixed reconstruction granularity schemes. In other
words, when oscillation occurs, the previous modules ob-
viously have larger reconstruction errors, thus leading to
worse accuracy in PTQ;
(iii). How to solve the oscillation problem in PTQ? Since oscillation is caused by the different capacities
of the front and rear modules, we propose the Mixed
REC onstruction Granularity (MRECG) method which
jointly optimizes the modules where oscillation occurs.
Besides, our method is applicable in data-free and data-
dependent scenarios, which is also compatible with differ-
ent PTQ methods. In general, our contributions are listed as
follows:
• We reveal for the first time the oscillation problem in
PTQ, which has been neglected in previous algorithms.
However, we discover that smoothing out this oscilla-
tion is essential in the optimization of PTQ.
• We show theoretically that this oscillation is caused by
the difference in the capability of adjacent modules.
A small module capability exacerbates the cumulative
effect of quantization errors making the loss increase
rapidly, while a large module capability reduces the cu-
mulative quantization errors making the loss decrease.
• To solve the oscillation problem, we propose a
novel Mixed REC onstruction Granularity (MRECG)
method, which employs loss metric and module capac-
ity to optimize mixed reconstruction granularity under
data-dependency and data-free scenarios. The former
finds the global optimum with moderately higher over-
head and thus has the best performance. The latter is
more effective with a minor performance drop.
• We validate the effectiveness of the proposed method
on a wide range of compression tasks in ImageNet.
In particular, we achieve a Top-1 accuracy of 58.49%
in MobileNetV2 with 2/4-bit, which exceeds current
SOTA methods by a large margin. Besides, we also
confirm that our algorithm indeed eliminates the oscil-
lation of reconstruction loss on different models and
makes the reconstruction process more stable.
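The following is a minimal sketch of the module-merging idea behind MRECG: score the capacity differential between adjacent modules and jointly reconstruct the top-k pairs. The `module_capacity` function and the choice of k are placeholders, since the paper's data-dependent and data-free capacity definitions are not reproduced here.

```python
# Minimal sketch of the MRECG module-merging idea: measure the capacity
# differential between adjacent modules and jointly reconstruct the top-k
# pairs. `module_capacity` is a placeholder for the paper's data-dependent or
# data-free capacity measure.
def build_reconstruction_groups(modules, module_capacity, k):
    diffs = [abs(module_capacity(modules[i + 1]) - module_capacity(modules[i]))
             for i in range(len(modules) - 1)]
    # Boundaries with the k largest capacity differentials (likely oscillation points).
    merge_points = set(sorted(range(len(diffs)), key=lambda i: diffs[i], reverse=True)[:k])

    groups, current = [], [modules[0]]
    for i in range(1, len(modules)):
        if (i - 1) in merge_points:
            current.append(modules[i])        # merge into a joint reconstruction group
        else:
            groups.append(current)
            current = [modules[i]]
    groups.append(current)
    return groups                              # each group is reconstructed jointly
```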
|
Ofri-Amar_Neural_Congealing_Aligning_Images_to_a_Joint_Semantic_Atlas_CVPR_2023 | Abstract
We present Neural Congealing – a zero-shot self-
supervised framework for detecting and jointly aligning
semantically-common content across a given set of images.
Our approach harnesses the power of pre-trained DINO-
ViT features to learn: (i) a joint semantic atlas – a 2D
grid that captures the mode of DINO-ViT features in the
input set, and (ii) dense mappings from the unified atlas
to each of the input images. We derive a new robust self-
supervised framework that optimizes the atlas representa-
tion and mappings per image set, requiring only a few real-
world images as input without any additional input infor-
mation (e.g., segmentation masks). Notably, we design our
losses and training paradigm to account only for the shared
content under severe variations in appearance, pose, back-
ground clutter or other distracting objects. We demon-
strate results on a plethora of challenging image sets in-
cluding sets of mixed domains (e.g., aligning images depict-
ing sculpture and artwork of cats), sets depicting related
yet different object categories (e.g., dogs and tigers), or do-
mains for which large-scale training data is scarce (e.g.,
coffee mugs). We thoroughly evaluate our method and show
that our test-time optimization approach performs favor-
ably compared to a state-of-the-art method that requires ex-
tensive training on large-scale datasets. Project webpage:
https://neural-congealing.github.io/
| 1. Introduction
Humans can easily associate and match semantically-
related objects across images, even under severe variations
in appearance, pose and background content. For exam-
ple, by observing the images in Fig. 1, we can immediately
focus and visually compare the different butterflies, while
ignoring the rest of the irrelevant content. While compu-
tational methods for establishing semantic correspondences
have seen a significant progress in recent years, research ef-
forts are largely focused on either estimating sparse match-
ing across multiple images (e.g., keypoint detection), or es-
tablishing dense correspondences between a pair of images .
In this paper, we consider the task of joint dense semantic
alignment of multiple images . Solving this long-standing
task is useful for a variety of applications, ranging from
editing image collections [36,56], browsing images through
canonical primitives, and 3D reconstruction (e.g., [8, 47]).
The task of joint image alignment dates back to the sem-
inal congealing [13, 14, 20, 28], which aligns a set of im-
ages into a common 2D space. Recently, GANgealing [36]
has modernized this approach for congealing an entire do-
main of images. This is achieved by leveraging a pre-trained
GAN to generate images that serve as self-supervisory sig-
nal. Specifically, their method jointly learns both the mode
of the generated images in the latent space of the GAN, and
a network that predicts the mappings of the images into the
joint mode. GANgealing demonstrated impressive results
on in-the-wild image sets. Nevertheless, their method re-
quires a StyleGAN model pre-trained on the domain of the
test images, e.g., aligning cat images requires training Style-
GAN on a large-scale cat dataset. This is a challenging task
by itself, especially for unstructured image domains or un-
curated datasets [34]. Moreover, they require additional ex-
tensive training for learning the mode and their mapping
network (e.g., training on millions of generated images).
In this work, we take a different route and tackle the joint
alignment task in the challenging setting where only a test
image set is available, without any additional training data.
More specifically, given only a few images as input (e.g.,
<25 images), our method estimates the mode of the test set
and their joint dense alignment, in a self-supervised manner.
We assume the input images share a common semantic
content, yet may depict various factors of variations, such as
pose, appearance, background content or other distracting
objects (e.g., Mugs in Fig. 3). We take inspiration from the
tremendous progress in representation learning, and lever-
age a pre-trained DINO-ViT – a Vision Transformer model
trained in a self-supervised manner [4]. DINO-ViT features
have been shown to serve as an effective visual descriptor,
capturing localized and semantic information (e.g., [2,49]).
Here, we propose a new self-supervised framework that
jointly and densely aligns the images in DINO-ViT feature
space. To the best of our knowledge, we are the first to har-
ness the power of DINO-ViT for dense correspondences be-
tween in-the-wild images. More specifically, given an im-
age set, our framework estimates, at test-time: (i) a joint la-
tent 2D atlas that represents the mode of DINO-ViT features
across the images, and (ii) dense mappings from the atlas
to each of the images. Our training objective is driven by
a matching loss encouraging each image's features to match
the canonical learned features in the joint atlas. We further
incorporate additional loss terms that allow our framework
to robustly represent and align only the shared content in the
presence of background clutter or other distracting objects.
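A minimal sketch of the atlas matching idea is shown below: the learned atlas features are sampled at the coordinates predicted for each image and compared with that image's DINO-ViT features. The direction of the mapping, the use of grid_sample, and the cosine distance are simplifying assumptions, not the paper's exact objective.

```python
# Minimal sketch of the atlas matching idea described above. Sampling the
# learned atlas with grid_sample, the direction of the mapping, and the cosine
# distance are illustrative assumptions, not the paper's exact loss.
import torch
import torch.nn.functional as F

def matching_loss(atlas, image_feats, mapping):
    """atlas: (1, C, Ha, Wa) learned joint feature atlas.
    image_feats: (B, C, H, W) frozen DINO-ViT features of the input images.
    mapping: (B, H, W, 2) predicted atlas coordinates in [-1, 1] per pixel
    (the direction of the mapping is simplified here)."""
    sampled = F.grid_sample(atlas.expand(image_feats.size(0), -1, -1, -1),
                            mapping, mode='bilinear', align_corners=False)
    cos = F.cosine_similarity(sampled, image_feats, dim=1)   # (B, H, W)
    return (1.0 - cos).mean()
```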
Since our atlas and mappings are optimized per set, our
method works in a zero-shot manner and can be applied
to a plethora of image sets, including sets of mixed do-
mains (e.g., aligning images depicting sculpture and art-
work of cats), sets depicting related yet different object
categories (e.g., dogs and tigers), or domains for which
a dedicated generator is not available (e.g., coffee mugs).
We thoroughly evaluate our method, and demonstrate that
our test-time optimization framework performs favorably
compared to [36] and on-par with state-of-the-art self-
supervised methods. We further demonstrate how our atlas
and mappings can be used for editing the image set with
minimal effort by automatically propagating edits that are
applied to a single image to the entire image set.
|
Luo_Constrained_Evolutionary_Diffusion_Filter_for_Monocular_Endoscope_Tracking_CVPR_2023 | Abstract
Stochastic filtering is widely used to deal with nonlin-
ear optimization problems such as 3-D and visual track-
ing in various computer vision and augmented reality ap-
plications. Many current methods suffer from an imbal-
ance between exploration and exploitation due to their par-
ticle degeneracy and impoverishment, resulting in local op-
timums. To address this imbalance, this work proposes a
new constrained evolutionary diffusion filter for nonlinear
optimization. Specifically, this filter develops spatial state
constraints and adaptive history-recall differential evolu-
tion embedded evolutionary stochastic diffusion instead of
sequential resampling to resolve the degeneracy and im-
poverishment problem. With application to monocular en-
doscope 3-D tracking, the experimental results show that
the proposed filtering significantly improves the balance be-
tween exploration and exploitation and certainly works bet-
ter than recent 3-D tracking methods. Particularly, the sur-
gical tracking error was reduced from 4.03 mm to 2.59 mm.
| 1. Introduction
Tracking a camera’s 3-D motion is vital in various com-
puter vision applications, e.g., augmented reality, 3-D re-
construction, computer assisted surgery, navigation and
mapping, and robotics. Recent advances in 3-D tracking
are widely discussed in the literature [9, 11, 13, 20, 21, 33].
Different from commonly used cameras in daily life, en-
doscopic cameras are typical hand-held devices (called en-
doscopes) used to inspect interior surfaces or inaccessible
regions of tubular or hollow structures where the human
visual system can hardly observe. While industrial endo-
scopes are powerful for examining unreachable areas of
*The author would like to give his special thanks to Professor Raymond
Honfu Chan who is with Hong Kong Centre for Cerebro-cardiovascular
Health Engineering and City University of Hong Kong. This work was
supported in part by the National Nature Science Foundation of China un-
der Grants 82272133 and 61971367, in part by the Fujian Provincial Tech-
nology Innovation Joint Funds under Grant 2019Y9091, and in part by the
Fujian Provincial Natural Science Foundation under Grant 2020J01004.
buildings or parts of machines, surgical endoscopes are use-
ful to intuitively inspect cavities in the body. Monocular
endoscopic 3-D tracking plays an essential role in precise
industrial inspection, clinical diagnosis and treatment.
Unfortunately, surgical endoscopic cameras only provide
2-D video images without any depth information and cannot
localize themselves and targets of interest like tumors in the
surgical field. To this end, surgical 3-D tracking methods
are widely developed to accurately localize surgical tools
and targets and reduce inadvertent hurts in endoscopic or
robotic surgery [16, 19, 26]. Such 3-D tracking is a nonlin-
ear optimization problem as well as a multisensor or mul-
tisource information fusion procedure, which is commonly
solved by stochastic optimization methods [4].
Stochastic filtering is widely used for 3-D tracking [23],
and usually generates a population of particles (initial solu-
tions) and propagates them to approximate the optimal solu-
tion. However, it is still prone to local optima or premature
convergence due to an imbalance between exploration and
exploitation. Specifically, this imbalance results from the
particle degeneracy and impoverishment after sequential re-
sampling, leading to ineffective filtering. Theoretically, this
work aims to solve the particle degeneracy-impoverishment
problem to balance exploring and exploiting and create a
new effective and powerful filtering strategy with robust op-
timization performance. Technically, this work also strives
for addressing several challenges in current surgical 3-D
tracking methods: (1) endoscopic image uncertainty or arti-
facts in vision-based 3-D tracking, (2) inaccurate and jitter
measurements in sensor-based 3-D tracking, and (3) tissue
deformation and patient movement in surgical procedures.
Technical contributions of this work are clarified as fol-
lows. First of all, two new spatial state constraints are in-
troduced for nonlinear optimization problems, improving
the optimization performance. More interestingly, a new
strategy of evolutionary stochastic diffusion with adaptive
history-recall differential evolution instead of sequential re-
sampling can successfully resolve the particle degeneracy-
impoverishment problem, effectively balancing between ex-
ploration and exploitation. We then propose constrained
evolutionary diffusion filtering (CEDF), which is a meta-
heuristic optimization algorithm and more ambidextrous
than other filters. Additionally, a new hybrid bronchoscope
3-D tracking framework using the proposed filtering is de-
veloped to fuse multisource data including computed to-
mography (CT) or magnetic resonance (MR) images, surgi-
cal videos, and positional sensor measurements. Our frame-
work can tackle these challenges discussed above.
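The sketch below illustrates the general idea of replacing sequential resampling with a differential-evolution style particle update. The mutation factor, crossover rate, and fitness function are placeholders; the paper's adaptive history-recall scheme and spatial state constraints are not reproduced here.

```python
# Minimal sketch of a differential-evolution style particle update used in
# place of sequential resampling. The mutation factor, crossover rate, and
# fitness function are placeholders, not the paper's adaptive history-recall
# scheme or its spatial state constraints.
import numpy as np

def de_particle_update(particles, fitness, mutation=0.5, crossover=0.9, rng=None):
    """particles: (N, D) pose hypotheses; fitness(x) -> higher is better."""
    rng = rng or np.random.default_rng()
    n, d = particles.shape
    new_particles = particles.copy()
    for i in range(n):
        a, b, c = particles[rng.choice(n, size=3, replace=False)]
        mutant = a + mutation * (b - c)                      # differential mutation
        mask = rng.random(d) < crossover                     # binomial crossover
        trial = np.where(mask, mutant, particles[i])
        if fitness(trial) > fitness(particles[i]):           # greedy selection
            new_particles[i] = trial
    return new_particles
```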
|
Li_Source-Free_Video_Domain_Adaptation_With_Spatial-Temporal-Historical_Consistency_Learning_CVPR_2023 | Abstract
Source-free domain adaptation (SFDA) is an emerging
research topic that studies how to adapt a pretrained source
model using unlabeled target data. It is derived from unsu-
pervised domain adaptation but has the advantage of not
requiring labeled source data to learn adaptive models.
This makes it particularly useful in real-world applications
where access to source data is restricted. While there has
been some SFDA work for images, little attention has been
paid to videos. Naively extending image-based methods to
videos without considering the unique properties of videos
often leads to unsatisfactory results. In this paper, we pro-
pose a simple and highly flexible method for Source-Free
Video Domain Adaptation (SFVDA), which extensively ex-
ploits consistency learning for videos from spatial, tempo-
ral, and historical perspectives. Our method is based on
the assumption that videos of the same action category are
drawn from the same low-dimensional space, regardless
of the spatio-temporal variations in the high-dimensional
space that cause domain shifts. To overcome domain shifts,
we simulate spatio-temporal variations by applying spatial
and temporal augmentations on target videos and encour-
age the model to make consistent predictions from a video
and its augmented versions. Due to the simple design, our
method can be applied to various SFVDA settings, and ex-
periments show that our method achieves state-of-the-art
performance for all the settings.
| 1. Introduction
Action recognition is a crucial task in video understand-
ing and has been receiving tremendous attention from the
vision community. In recent years, it has made significant
progress, primarily due to the development of deep learning
techniques [11, 43,45] and the establishment of large-scale
annotated datasets [2, 13,42]. However, it is acknowledged
that an action recognition model trained with annotated data
drawn from one distribution typically experiences a per-formance drop when tested on out-of-distribution data [4].
This is the so-called domain shift problem.
To tackle this problem, Unsupervised Video Domain
Adaptation (UVDA) has been proposed. The goal is to
learn an adaptive model using labeled video data from one
domain (source) and unlabeled video data from another do-
main (target). Typical UVDA methods use videos from both
domains as input and train a model by minimizing the clas-
sification risk on labeled source videos and explicitly align-
ing videos from both domains in a class-agnostic fashion.
Although most image-based domain alignment techniques
can be applied to video domain alignment, such as adver-
sarial learning [22, 37,44], methods that align domains by
considering the richer temporal information in videos have
shown superior performance [6, 33,36].
While UVDA methods help alleviate the domain shift
problem, their assumption that labeled source videos are
available for domain alignment can be problematic in real-
world applications where access to source videos is re-
stricted due to privacy or commercial reasons [24, 50]. This
motivates a new research topic, Source-Free Video Domain
Adaptation (SFVDA) [50], which aims to learn an adap-
tive action recognition model using unlabeled target videos
and a source model pre-trained with labeled source videos.
SFVDA is similar to UVDA in learning an adaptive model
using labeled source and unlabeled target videos but dif-
fers in that labeled source videos are only used for learning
the source model. Adaptation only involves target videos,
which avoids leaking annotated source videos. However,
the absence of labeled source videos makes SFVDA a more
challenging problem than UVDA since there is no reliable
supervision signal, and no data drawn from the distribution
to be aligned, which makes it even more challenging.
Very recently, Xu et al. [50] proposed a pioneering ap-
proach to SFVDA based on temporal consistency. They
adapt the source model by encouraging it to keep the ca-
pability of understanding motion dynamics despite domain
shifts. They train the model to produce features/predictions
for a video clip consistent with those of other clips within
the same video or that of the entire video. Despite im-
Figure 1. Conceptual illustration of applying spatial and temporal
augmentations to simulate domain shifts and encouraging predic-
tion consistency for SFVDA.
proved performance over baseline methods, this method
only considers adapting the source model from a tempo-
ral perspective and ignores spatial factors (the appearance
of frames) that also account for domain shifts. Adapting
the model without encouraging it to surpass the visual ap-
pearance variations could still produce sub-optimal adapta-
tion results. Besides, clips from the same video often share
high similarity, and the model can produce consistent fea-
tures/predictions even though it has not been well adapted.
In this paper, we propose a novel SFVDA method that
overcomes the limitation of the existing methods by ex-
ploiting Spatial-Temporal-Historical Consistency (STHC).
Our underlying assumption is that videos of the same ac-
tion category are drawn from the same low-dimensional
space, regardless of spatio-temporal variations in the high-
dimensional space that cause domain shifts. To achieve
this, we simulate spatio-temporal variations with target
videos and adapt the source model by encouraging it to
surpass the variations and produce consistent predictions.
Specifically, we apply spatial and temporal augmentations
to each unlabeled target video in a stochastic manner to sim-
ulate spatio-temporal variations. By encouraging consistent
classification predictions for the video and its augmented
versions, we ensure that they are drawn from the same low-
dimensional space. After adapting the model in this way, it
is expected to generalize well on test videos that fall into the
same low-dimensional space as the training videos. Figure
1 provides an illustration of this concept.
More concretely, we randomly select a clip from the
video and apply stochastic frame-wise spatial augmenta-
tion, resulting in a perturbed version of the clip. In addi-
tion, we also apply stochastic temporal augmentation by
randomly masking some frames to generate a temporally-
perturbed clip. To ensure prediction consistency, we en-
force the spatial consistency (SC) of the clip with its per-
turbed version and the temporal consistency (TC) of the
clip with its temporally-perturbed version. Besides these
two techniques, we propose a third technique that enforces
consistent predictions for the clip and other clips from the
same video. This technique is similar to that in [50], but we implement this in a nearly no-cost way: We store his-
torical predictions of all the clips (with randomly sampled
frames) from each video in a memory bank and retrieve pre-
dictions from the bank to enforce prediction consistency for
the current clip. This technique reinforces temporal consis-
tency and we call it historical consistency (HC). Notably,
TC and SC produce “hard” versions of a clip and encourage
the model to overcome the hard factors and make consis-
tent predictions. Therefore, the model must have a strong
understanding of the target domain to fulfill these tasks, fa-
cilitating model adaptation.
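A minimal sketch of the three consistency terms is given below, using a KL divergence between prediction distributions. The augmentation functions, the memory-bank update rule, and the equal loss weighting are illustrative placeholders rather than the exact losses used in the paper.

```python
# Minimal sketch of the spatial, temporal, and historical consistency terms
# described above. The KL form, augmentation functions, memory-bank update,
# and equal weighting are illustrative placeholders, not the exact losses.
import torch
import torch.nn.functional as F

def consistency(p_ref, p_aug):
    """KL(p_ref || p_aug) between softmax predictions, reference detached."""
    return F.kl_div(p_aug.log_softmax(dim=-1),
                    p_ref.softmax(dim=-1).detach(), reduction='batchmean')

def sthc_loss(model, clip, spatial_aug, temporal_mask, memory_bank, video_ids):
    logits = model(clip)                                   # (B, num_classes)
    sc = consistency(logits, model(spatial_aug(clip)))     # spatial consistency
    tc = consistency(logits, model(temporal_mask(clip)))   # temporal consistency
    hc = consistency(memory_bank[video_ids], logits)       # historical consistency
    memory_bank[video_ids] = logits.detach()               # store current predictions
    return sc + tc + hc
```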
Thanks to simplicity in design, our STHC method can
be easily extended to other SFVDA settings, including the
open-set setting where the target domain contains classes
that are absent in the source domain, the partial setting
where the source domain contains classes that are absent
in the target domain, and the black-box setting where only
outputs of the source model are available and the model
weights are not accessible. Experiments show that STHC
outperforms existing methods for all the SFVDA settings.
Our contributions can be summarized as follows:
• We comprehensively exploit consistency learning for
videos and propose STHC model for SFVDA. STHC
performs stochastic spatio-temporal augmentations on
each video and enforces prediction consistency from
spatial, temporal, and historical perspectives.
• We extend STHC to address various domain adap-
tation problems under the SFVDA setting. To our
best knowledge, most of these problems have not been
studied before and we establish the evaluation bench-
marks that will help future development.
• STHC achieves state-of-the-art performance for
SFVDA in various problem settings.
|
Mei_Unsupervised_Deep_Probabilistic_Approach_for_Partial_Point_Cloud_Registration_CVPR_2023 | Abstract
Deep point cloud registration methods face challenges
to partial overlaps and rely on labeled data. To address
these issues, we propose UDPReg, an unsupervised deep
probabilistic registration framework for point clouds with
partial overlaps. Specifically, we first adopt a network to
learn posterior probability distributions of Gaussian mix-
ture models (GMMs) from point clouds. To handle partial
point cloud registration, we apply the Sinkhorn algorithm
to predict the distribution-level correspondences under the
constraint of the mixing weights of GMMs. To enable unsu-
pervised learning, we design three distribution consistency-
based losses: self-consistency, cross-consistency, and local
contrastive. The self-consistency loss is formulated by en-
couraging GMMs in Euclidean and feature spaces to share
identical posterior distributions. The cross-consistency loss
derives from the fact that the points of two partially over-
lapping point clouds belonging to the same clusters share
the cluster centroids. The cross-consistency loss allows the
network to flexibly learn a transformation-invariant pos-
terior distribution of two aligned point clouds. The lo-
cal contrastive loss facilitates the network to extract dis-
criminative local features. Our UDPReg achieves competi-
tive performance on the 3DMatch/3DLoMatch and Model-
Net/ModelLoNet benchmarks.
| 1. Introduction
Rigid point cloud registration aims at determining the
optimal transformation to align two partially overlapping
point clouds into one coherent coordinate system [21, 30–
32]. This task dominates the performance of systems in
many areas, such as robotics [57], augmented reality [6],
autonomous driving [35, 42], radiotherapy [27], etc. Re-
cent advances have been monopolized by learning-based
approaches due to the development of 3D point cloud rep-
resentation learning and differentiable optimization [37].
Existing deep learning-based point cloud registration
methods can be broadly categorized as correspondence-free[2, 21, 30, 32, 47] and correspondence-based [4, 9, 19,
50]. The former minimizes the difference between global
features extracted from two input point clouds. These
global features are typically computed based on all the
points of a point cloud, making correspondence-free ap-
proaches inadequate to handle real scenes with partial over-
lap [9, 55]. Correspondence-based methods first extract lo-
cal features used for the establishment of point-level [9, 17,
19,21] or distribution-level [15,29,39,52] correspondences,
and finally, estimate the pose from those correspondences.
However, point-level registration does not work well un-
der conditions involving varying point densities or repeti-
tive patterns [31]. This issue is especially prominent in in-
door environments, where low-texture regions or repetitive
patterns sometimes dominate the field of view. Distribution-
level registration, which compensates for the shortcomings
of point-level methods, aligns two point clouds without es-
tablishing explicit point correspondences. Unfortunately, to
the best of our knowledge, the existing methods are inflex-
ible and cannot handle point clouds with partial overlaps
in real scenes [28, 31]. Moreover, the success of learning-
based methods mainly depends on large amounts of ground
truth transformations or correspondences as the supervision
signal for model training. Needless to say, the required
ground truth is typically difficult or costly to acquire, thus
hampering their application in the real world [38].
We thus propose an unsupervised deep probabilistic reg-
istration framework to alleviate these limitations. Specif-
ically, we extend the distribution-to-distribution (D2D)
method to solve partial point cloud registration by adopt-
ing the Sinkhorn algorithm [11] to predict correspondences
of distribution. In order to make the network learn ge-
ometrically and semantically consistent features, we de-
sign distribution-consistency losses, i.e., self-consistency
and cross-consistency losses, to train the networks without
using any ground-truth pose or correspondences. Besides,
we also introduce a local contrastive loss to learn more dis-
criminative features by pushing features of points belong-
ing to the same clusters together while pulling dissimilar
features of points coming from different clusters apart.
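The following is a minimal sketch of Sinkhorn-based matching between the GMM components of two point clouds, with the mixing weights acting as marginal constraints. The cost definition, the entropic regularization value, and the omission of a dustbin for non-overlapping clusters are simplifying assumptions.

```python
# Minimal sketch of Sinkhorn matching between the GMM components of two point
# clouds, with the mixing weights as marginal constraints. The cost definition,
# regularization value, and omission of a dustbin are illustrative assumptions.
import torch

def sinkhorn_match(cost, weights_src, weights_tgt, epsilon=0.05, iters=50):
    """cost: (K1, K2) distances between cluster descriptors;
    weights_*: GMM mixing weights used as row/column marginals."""
    log_k = -cost / epsilon
    log_u = torch.zeros_like(weights_src)
    log_v = torch.zeros_like(weights_tgt)
    for _ in range(iters):
        log_u = torch.log(weights_src + 1e-9) - torch.logsumexp(log_k + log_v[None, :], dim=1)
        log_v = torch.log(weights_tgt + 1e-9) - torch.logsumexp(log_k + log_u[:, None], dim=0)
    return torch.exp(log_u[:, None] + log_k + log_v[None, :])   # soft correspondence matrix
```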
Our UDPReg is motivated by OGMM [33] and
UGMM [20] but differs from them in several ways. Firstly,
unlike OGMM, which is a supervised method, our approach
is unsupervised. Secondly, while UGMM [20] treats all
clusters equally in the matching process, our method aligns
different clusters with varying levels of importance. This
enables our approach to handle partial point cloud regis-
tration successfully. To enable unsupervised learning, the
designed self-consistency loss encourages the extracted fea-
tures to be geometrically consistent by compelling the fea-
tures and coordinates to share the posterior probability. The
cross-consistency loss prompts the extracted features to be
geometrically consistent by forcing the partially overlap-
ping point clouds to share the same clusters. We evaluate
our UDPReg on 3DMatch [53], 3DLoMatch [19], Model-
Net [45] and ModelLoNet [19], comparing our approach
against traditional and deep learning-based point cloud reg-
istration approaches. UDPReg achieves state-of-the-art re-
sults and significantly outperforms unsupervised methods
on all the benchmarks.
In summary, the main contributions of this work are:
• We propose an unsupervised learning-based probabilistic
framework to register point clouds with partial overlaps.
• We provide a deep probabilistic framework to solve par-
tial point cloud registration by adopting the Sinkhorn al-
gorithm to predict distribution-level correspondences.
• We formulate self-consistency, cross-consistency, and
local-contrastive losses, to make the posterior probabil-
ity in coordinate and feature spaces consistent so that the
feature extractor can be trained in an unsupervised way.
• We achieve state-of-the-art performance on a compre-
hensive set of experiments, including synthetic and real-
world datasets1.
|
Li_MoDAR_Using_Motion_Forecasting_for_3D_Object_Detection_in_Point_CVPR_2023 | Abstract
Occluded and long-range objects are ubiquitous and
challenging for 3D object detection. Point cloud sequence
data provide unique opportunities to improve such cases, as
an occluded or distant object can be observed from differ-
ent viewpoints or gets better visibility over time. However,
the efficiency and effectiveness in encoding long-term se-
quence data can still be improved. In this work, we propose
MoDAR, using motion forecasting outputs as a type of vir-
tual modality, to augment LiDAR point clouds. The MoDAR
modality propagates object information from temporal con-
texts to a target frame, represented as a set of virtual points,
one for each object from a waypoint on a forecasted tra-
jectory. A fused point cloud of both raw sensor points and
the virtual points can then be fed to any off-the-shelf point-
cloud based 3D object detector. Evaluated on the Waymo
Open Dataset, our method significantly improves prior art
detectors by using motion forecasting from extra-long se-
quences (e.g. 18 seconds), achieving new state-of-the-art results,
while not adding much computation overhead.
| 1. Introduction
3D object detection is a fundamental task for many appli-
cations such as autonomous driving. While there has been
tremendous progress in architecture design and LiDAR-
camera sensor fusion, occluded and long-range object de-
tection remains a challenge. Point cloud sequence data pro-
vide unique opportunities to improve such cases. In a dy-
namic scene, as the ego-agent and other objects move, the
sequence data can capture different viewpoints of objects
or improve their visibility over time. The key challenge
though, is how to efficiently and effectively leverage se-
quence data for 3D object detection.
Existing multi-frame 3D object detection methods often
fuse sequence data at two different levels. At scene level,
the most straightforward approach is to transform point
clouds of different frames to a target frame using known
Equal contributions.
Figure 1. 3D detection model performance vs. number of input
frames. Naively adding more frames to existing methods, such as
CenterPoint [59] and SWFormer [40], quickly plateaus the gains
while our method, MoDAR, scales up to many more frames and
gets much larger gains. L2 3D mAPH is computed by averaging
vehicle and pedestrian L2 3D APH.
ego motion poses [3, 40, 55, 59]. Each point can be dec-
orated with an extra time channel to indicate which frame
it is from. However, according to previous studies [7, 33]
and our experiments shown in Fig. 1, it is difficult to fur-
ther improve the detection model by including more input
frames due to its large computation overhead as well as in-
effective temporal data fusion at scene level (especially for
moving objects). On the other side, 3D Auto Labeling [33]
and MPPNet [7] propose to aggregate longer temporal con-
texts at object level, which is more tractable as there are
much less points from objects than those from the entire
scenes. However, they also fail to scale up temporal con-
text aggregation to long sequences due to efficiency issues
or alignment challenges.
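To make the scene-level fusion described above concrete, the sketch below transforms each frame's points into the target frame using known ego poses and appends a relative-time channel; it is a minimal illustration with assumed data layouts, not the pipeline of the cited detectors.

```python
import numpy as np

def fuse_frames(points_per_frame, poses, target_idx):
    """Scene-level fusion sketch: move each frame's points into the
    target frame with known ego poses and append a time channel.

    points_per_frame: list of (N_i, 3) arrays in their own sensor frames
    poses: list of (4, 4) world-from-sensor transforms, one per frame
    target_idx: index of the frame whose coordinate system we fuse into
    """
    target_from_world = np.linalg.inv(poses[target_idx])
    fused = []
    for t, (pts, world_from_sensor) in enumerate(zip(points_per_frame, poses)):
        target_from_sensor = target_from_world @ world_from_sensor
        homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)   # (N, 4)
        in_target = (target_from_sensor @ homo.T).T[:, :3]
        dt = np.full((len(pts), 1), t - target_idx, dtype=np.float32)  # time channel
        fused.append(np.concatenate([in_target, dt], axis=1))
    return np.concatenate(fused, axis=0)  # (sum N_i, 4): x, y, z, dt
```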
In our paper, we propose to use motion forecasting to
propagate object information from the past (and the future)
to a target frame. The output of the forecasting model can be
considered another (virtual) sensor modality to the detector
model. Inspired by the naming of the LiDAR sensor, we
name this new modality MoDAR , Motion forecasting based
Detection And Ranging (see Fig. 2 for an example).
Traditionally 3D object detection is a pre-processing step
for a motion forecasting model, where the detector boxes
are either used as input (for past frames) or learning targets
(for future frames). In contrast, we use motion forecasting
outputs as input to LiDAR-MoDAR multi-modal 3D object
Figure 2. 3D object detection from MoDAR and LiDAR points
(panels: an occluded object, a long-range object). Red and blue
points are MoDAR points from motion forecasting (red from
forward prediction, blue from reverse prediction); gray points are
LiDAR points from the raw sensor. MoDAR points are predicted
object centers with extra features such as sizes, semantic classes
and confidence scores. Compared to LiDAR-only detectors, a
multi-modal detector taking both LiDAR (gray) and MoDAR
points can accurately recognize occluded and long-range objects
that have few observed points.
detectors. There are two major benefits of using a MoDAR
sensor for 3D object detection from sequence data. First,
motion forecasting can easily transform object information
across very distant frames (8 seconds or longer). Such prop-
agation is especially robust to occlusions as the forecast-
ing models do not assume successful tracking for trajectory
forecasting. Second, considering forecasting output as an-
other sensor data source for 3D detection, it is a lightweight
sensor modality, making long-term sequence data process-
ing possible without much computation overhead.
Specifically, in MoDAR, we represent motion forecast-
ing output at the target frame as a set of virtual points
(named as MoDAR points), one for each object from a way-
point on a forecasted trajectory. The predicted object loca-
tion is the 3D coordinate of the virtual point, while addi-
tional information (such as object type, size, predicted head-
ing, and confidence score) is encoded into the virtual point
features. Each virtual point is appended with a time channel
to indicate the context frames it uses for the motion fore-
casting. For a target frame, we can use forecasted outputs
from multiple context frames easily through a union of cor-
responding virtual points. In an offboard/offline detection
setup, we can use both forward prediction and reverse pre-
diction (use future frames as input to the forecasting model)
to combine information from the past and the future. For de-
tection, we fuse the raw sensor points (from LiDARs) and
the virtual points (from forecasting), and feed them to any
off-the-shelf point cloud based 3D detector.
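A minimal sketch of how such virtual points could be packed is given below; the dictionary keys, the exact feature layout, and the padding convention for raw LiDAR points are illustrative assumptions rather than the paper's actual format.

```python
import numpy as np

def make_modar_points(trajectories, target_time):
    """Pack forecasted waypoints at the target frame into MoDAR points.

    trajectories: list of dicts with (assumed) keys
      'center' (3,), 'heading' (float), 'size' (3,), 'class_id' (int),
      'score' (float), 'context_time' (float, when the forecast was made).
    Returns an (N, 10) array: x, y, z, heading, l, w, h, class, score, dt.
    """
    rows = []
    for traj in trajectories:
        dt = target_time - traj['context_time']   # which context produced this point
        rows.append(np.concatenate([
            traj['center'],
            [traj['heading']],
            traj['size'],
            [traj['class_id'], traj['score'], dt],
        ]))
    return np.asarray(rows, dtype=np.float32)

# A LiDAR-MoDAR detector input could then be the union of raw LiDAR points
# (padded with zeros in the virtual-point feature slots) and these MoDAR points.
```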
In experiments, we use a MultiPath++ [42] motion
forecasting model trained on the Waymo Open Motion
Dataset [9] to generate MoDAR points from past 9 seconds
for online detection; and from past and future 18 seconds for
offline detection. With minimum changes, we adapt Cen-
terPoints [59] and SWFormer [40] detectors for LiDAR-
MoDAR 3D object detection.¹ Evaluated on the Waymo
Open Dataset [39], we show that adding MoDAR signifi-
cantly improves detection quality, improving CenterPoints
and SWFormer by 11.1 and 8.5 mAPH respectively; and
it especially helps detection of long-range and occluded
objects.
¹Although we experiment with point-cloud based detectors, MoDAR
Using MoDAR with a 3-frame SWFormer detec-
tor, we have achieved state-of-the-art mAPH on the Waymo
Open Dataset. We further provide extensive ablations and
analysis experiments to validate our designs and show im-
pacts of different MoDAR choices.
|
Lu_Visual_Language_Pretrained_Multiple_Instance_Zero-Shot_Transfer_for_Histopathology_Images_CVPR_2023 | Abstract
Contrastive visual language pretraining has emerged as
a powerful method for either training new language-aware
image encoders or augmenting existing pretrained models
with zero-shot visual recognition capabilities. However,
existing works typically train on large datasets of image-
text pairs and have been designed to perform downstream
tasks involving only small to medium sized-images, neither
of which are applicable to the emerging field of computa-
tional pathology where there are limited publicly available
paired image-text datasets and each image can span up to
100,000 × 100,000 pixels. In this paper we present MI-
Zero, a simple and intuitive framework for unleashing the
zero-shot transfer capabilities of contrastively aligned im-
age and text models on gigapixel histopathology whole slide
images, enabling multiple downstream diagnostic tasks to
be carried out by pretrained encoders without requiring any
additional labels. MI-Zero reformulates zero-shot trans-
fer under the framework of multiple instance learning to
overcome the computational challenge of inference on ex-
tremely large images. We used over 550k pathology re-
ports and other available in-domain text corpora to pre-
train our text encoder. By effectively leveraging strong pre-
trained encoders, our best model pretrained on over 33k
histopathology image-caption pairs achieves an average
median zero-shot accuracy of 70.2% across three different
real-world cancer subtyping tasks. Our code is available
at: https://github.com/mahmoodlab/MI-Zero.
| 1. Introduction
Weakly-supervised deep learning for computational
pathology (CPATH) has rapidly become a standard ap-
proach for modelling whole slide image (WSI) data [9, 30,
47,71,73]. To obtain “clinical grade” machine learning per-
formance on par with human experts for a given clinical
†These authors contributed equally to this work.
task, many approaches adopt the following model develop-
ment life cycle: 1) curate a large patient cohort ( N > 1000
samples) with diagnostic whole-slide images and clinical
labels, 2) unravel and tokenize the WSI into a sequence of
patch features, 3) use labels to train a slide classifier that
learns to aggregate the patch features for making a predic-
tion, and 4) transfer the slide classifier for downstream clin-
ical deployment [9, 43, 91].
Successful examples of task-specific model development
(e.g. training models from scratch for each task) include
prostate cancer grading and lymph node metastasis detec-
tion [5, 7–9, 50, 70]. However, this paradigm is intractable
if one wishes to scale across the hundreds of tumor types
across the dozens of different organ sites in the WHO clas-
sification system1, with most tumor types under-represented
in public datasets or having inadequate samples for model
development [41, 92]. To partially address these limita-
tions, self-supervised learning has been explored for learn-
ing the patch representations within the WSI with the idea
that certain local features, such as tumor cells, lymphocytes,
and stroma, may be conserved and transferred across tis-
sue types [10, 16, 39, 40, 44, 64, 77]. Though morphological
features at the patch-level are captured in a task-agnostic
fashion, developing the slide classifier still requires supervi-
sion, which may not be possible for disease types with small
sample sizes. To scale slide classification across the vast
number of clinical tasks and possible findings in CPATH,
an important shift needs to be made from task-specific to
task-agnostic model development.
Recent works [33,55] have demonstrated that large-scale
pretraining using massive, web-sourced datasets of noisy
image-text pairs can not only learn well-aligned represen-
tation spaces between image and language, but also transfer
the aligned latent space to perform downstream tasks such
as image classification. Specifically for CLIP [55], after
pretraining a vision encoder in a task-agnostic fashion, the
vision encoder can be “prompted” with text from the label
1tumourclassification.iarc.who.int/
space (referred to as “zero-shot transfer”, as no labeled ex-
amples are used in the transfer protocol). Despite the vol-
ume of zero-shot transfer applications developed for natural
images [21, 33, 45, 49, 57, 80, 82] and certain medical imag-
ing modalities ( e.g.radiology [29,60,68,78,90]), zero-shot
transfer for pathology has not yet been studied2. We believe
this is due to 1) the lack of large-scale, publicly available
datasets of paired images and captions in the highly special-
ized field of pathology, and 2) fundamental computational
challenges associated with WSIs, as images can span up to
100,000 × 100,000 pixels and do not routinely come with
textual descriptions, bounding box annotations or even re-
gion of interest labels.
In this work, we overcome the above data and compu-
tational challenges and develop the first zero-shot transfer
framework for the classification of histopathology whole
slide images. On the data end, we curated the largest known
dataset of web-sourced image-caption pairs specifically for
pathology. We propose “MI-Zero”, a simple and intuitive
multiple instance learning-based [3, 30] method for utiliz-
ing the zero-shot transfer capability of pretrained visual-
language encoders for gigapixel-sized WSIs that are rou-
tinely examined during clinical practice. We validate our
approach on 3 different real-world cancer subtyping tasks,
and perform multiple ablation experiments that explore im-
age pretraining, text pretraining, pooling strategies, and
sample size choices for enabling zero-shot transfer in MI-
Zero.
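As a rough illustration of the multiple-instance zero-shot transfer idea, the sketch below scores each tile of a slide against class prompt embeddings and pools the patch-level similarities into a slide-level prediction; the top-k mean pooling and all names are assumptions, not MI-Zero's exact operator.

```python
import torch
import torch.nn.functional as F

def zero_shot_slide_logits(patch_embs, text_embs, k=50):
    """MIL-style zero-shot transfer sketch for one whole slide image.

    patch_embs: (N, D) embeddings of the N tiles of one slide
    text_embs:  (C, D) embeddings of one prompt per class
    Patch-level cosine similarities are pooled with a top-k mean per
    class; the real pooling operator may differ.
    """
    patch_embs = F.normalize(patch_embs, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    sims = patch_embs @ text_embs.t()                 # (N, C) similarities
    k = min(k, sims.shape[0])
    topk = sims.topk(k, dim=0).values                 # (k, C) most confident tiles
    return topk.mean(dim=0)                           # (C,) slide-level scores

# predicted_class = zero_shot_slide_logits(patch_embs, text_embs).argmax()
```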
|
Lou_All-in-Focus_Imaging_From_Event_Focal_Stack_CVPR_2023 | Abstract
Traditional focal stack methods require multiple shots to
capture images focused at different distances of the same
scene, which cannot be applied to dynamic scenes well.
Generating a high-quality all-in-focus image from a single
shot is challenging, due to the highly ill-posed nature of the
single-image defocus and deblurring problem. In this pa-
per, to restore an all-in-focus image, we propose the event
focal stack which is defined as event streams captured dur-
ing a continuous focal sweep. Given an RGB image focused
at an arbitrary distance, we explore the high temporal reso-
lution of event streams, from which we automatically select
refocusing timestamps and reconstruct corresponding refo-
cused images with events to form a focal stack. Guided by
the neighbouring events around the selected timestamps, we
can merge the focal stack with proper weights and restore
a sharp all-in-focus image. Experimental results on both
synthetic and real datasets show superior performance over
state-of-the-art methods.
†Contributed equally to this work as first authors
∗Corresponding author
Project page: https://hylz-2019.github.io/EFS
| 1. Introduction
The lens aperture of a camera controls the amount of
incoming luminous flux. A larger aperture maintains the
signal-to-noise ratio with shorter exposure time, which is
useful for shooting high-speed scenes or capturing images
in low-light conditions with less noise. However, large aper-
ture settings also make the depth of field (DoF) shallow,
which results in defocus blur. This is preferable in certain
scenarios, such as in portrait photography a shallow DoF
can be used to emphasize the subject. Yet, all-in-focus im-
ages preserve information from all distances and are desired
in more situations, e.g., microscopy imaging [25]. Besides,
all-in-focus imaging also benefits various high-level vision
tasks, e.g., object detection [29] and semantic segmenta-
tion [10].
An all-in-focus image could be obtained by deblurring a
defocused image, but the defocus kernel, determined by the
aperture shape and depth of the scene, is usually spatially-
varying and difficult to be estimated accurately [48]. Con-
ventional two-stage methods [9, 13, 36] first estimate the
pixel-wise or patch-wise defocus kernels with image pri-
ors and then apply non-blind image deconvolution to each
pixel or patch. Recently, benefiting from the data-driven
strategy, end-to-end deep learning methods [18, 31, 32, 38]
outperform conventional two-stage restoration methods, by
observing defocused and all-in-focus image pairs during
training. Although they have demonstrated high potential
in removing defocus blur, the deblurred results still cannot
avoid ringing artifacts or remain blurry in high-frequency
regions due to inaccurate defocus kernel estimation espe-
cially for weakly textured and defocused regions (an exam-
ple is shown in Figure 1 right (c)).
To overcome the ill-posedness of estimating the defocus
kernel from a single image, merging a focal stack, i.e., a
sequence of images taken at different focus distances, can
generate an all-in-focus image reliably [11, 40, 47]. How-
ever, capturing a focal stack requires a static scene and mul-
tiple exposures. Moreover, the selection of focus distances
is a key factor in capturing the focal stack, which requires
elaborate design.
Neuromorphic event cameras [5, 35] are novel sen-
sors that can detect brightness changes and trigger an
event whenever its log variation exceeds a preset thresh-
old. Thanks to their high temporal resolution featured
with microsecond-level sensitivity, they can capture ap-
proximately continuous signals for intensity variations of a
scene, and support applications like generating high-speed
videos from event streams [28, 41–43]. These characteris-
tics motivate us to think about: Can we use “focal stacks”
composed of event streams for all-in-focus imaging?
In this paper, we propose event focal stack (EFS) for the
first time. It is composed of event streams obtained from a
continuous focal sweep with an event camera, which can be
used to reconstruct an image focal stack (given an RGB im-
age focused at an arbitrary distance) and predict the merging
weights for all-in-focus image recovery, as shown in Fig-
ure 1 left. EFS encodes scene texture information from con-
tinuous different depths in temporal log-gradient domain, so
we first select a refocusing timestamp for each patch of the
scene, which corresponds to sharper edges and richer tex-
ture information at that time. By fusing a defocused image
and the EFS recorded between the defocused timestamp and
refocusing timestamp, we generate a refocused image for
each refocusing timestamp, forming an image focal stack.
Guided by neighbouring events around refocusing times-
tamps, we can predict the merging weight for each image
needed for composing a focal stack, and finally restore an
all-in-focus image (an example is shown in Figure 1 right
(d)). Contributions of this paper are demonstrated by ex-
ploring the following benefits of the proposed EFS:
• reliable selection of refocusing timestamps by decod-
ing continuous scene gradient changes from events;
• consistent link between defocused (given) and refo-
cused images (estimated) composing an image focal stack; and
• robust guidance for merging weight prediction and all-
in-focus reproduction with event triggered neighbour-
ing the selected timestamps.
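As a rough illustration of the final merging step summarized above, the snippet below combines a reconstructed image focal stack with per-pixel weights (e.g., predicted from the events neighbouring each refocusing timestamp); the normalization scheme is an assumption, not the paper's exact formulation.

```python
import numpy as np

def merge_focal_stack(refocused_images, weights, eps=1e-6):
    """Merge a reconstructed image focal stack into an all-in-focus image.

    refocused_images: (K, H, W, 3) stack of refocused frames
    weights: (K, H, W) per-pixel merging weights for each refocused frame
    """
    w = weights / (weights.sum(axis=0, keepdims=True) + eps)  # normalize over the stack
    return (w[..., None] * refocused_images).sum(axis=0)      # (H, W, 3)
```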
We quantitatively and qualitatively evaluate our method
on both synthetic and real datasets and demonstrate its su-
perior quality in recovering all-in-focus images over state-
of-the-art methods.
|
Nitzan_Domain_Expansion_of_Image_Generators_CVPR_2023 | Abstract
Can one inject new concepts into an already trained gen-
erative model, while respecting its existing structure and
knowledge? We propose a new task – domain expansion
– to address this. Given a pretrained generator and novel
(but related) domains, we expand the generator to jointly
model all domains, old and new, harmoniously. First, we
note the generator contains a meaningful, pretrained latent
space. Is it possible to minimally perturb this hard-earned
representation, while maximally representing the new do-
mains? Interestingly, we find that the latent space offers un-
used, “dormant” directions, which do not affect the output.
This provides an opportunity: By “repurposing” these di-
rections, we can represent new domains without perturbing
the original representation. In fact, we find that pretrained
generators have the capacity to add several – even hundreds
– of new domains! Using our expansion method, one “ex-
panded” model can supersede numerous domain-specific
models, without expanding the model size. Additionally, a
single expanded generator natively supports smooth transi-
tions between domains, as well as composition of domains.
Code and project page available here.
| 1. Introduction
Recent domain adaptation techniques piggyback on the
tremendous success of modern generative image models [3,
12, 32, 40], by adapting a pretrained generator so it can
generate images from a new target domain. Oftentimes,
the target domain is defined with respect to the source do-
main [5,21,22], e.g., changing the “stylization” from a pho-
torealistic image to a sketch. When such a relationship
holds, domain adaptation typically seeks to preserve the fac-
tors of variations learned in the source domain, and transfer
them to the new one (e.g., making the human depicted in
a sketch smile based on the prior from a face generator).
With existing techniques, however, the adapted model loses
the ability to generate images from the original domain.
In this work, we introduce a novel task — domain ex-
pansion . Unlike domain adaptation, we aim to augment the
space of images a single model can generate, without over-
riding its original behavior (see Fig. 1). Rather than view-
ing similar image domains as disjoint data distributions, we
treat them as different modes in a joint distribution. As a
result, the domains share a semantic prior inherited from
the original data domain. For example, the inherent factors
of variation for photorealistic faces, such as pose and face
shape, can equally apply to the domain of “zombies”.
Figure 2. Example of a domain expansion result (source domain:
Dog; expanded domains: Cute, Siberian Husky, Sketch, Boar,
Happy, Pop Art; domain compositions: Siberian Husky + Cute +
Sketch, Boar + Happy + Pop Art). Starting from dogs as the
source domain, we expand a single generator to model new
domains such as facial expressions, breeds of dogs and other
animals, and artistic styles. Finally, as the representations are
disentangled, the expanded generator is able to generalize and
compose the different domains, although they were never seen
jointly in training.
To this end, we carefully structure the model train-
ing process for expansion, respecting the original data do-
main. It is well-known that modern generative models with
low-dimensional latent spaces offer an intriguing, emer-
gent property – through training, the latent spaces represent
the factors of variation, in a linear and interpretable man-
ner [3, 6, 10, 12, 28, 30, 39, 40]. We wish to extend this ad-
vantageous behavior and represent the new domains along
linear and disentangled directions. Interestingly, it was pre-
viously shown that many latent directions have insignificant
perceptible effect on generated images [6]. Taking advan-
tage of this finding, we repurpose such directions to repre-
sent the new domains.
In practice, we start from an orthogonal decomposition
of the latent space [36] and identify a set of low-magnitude
directions that have no perceptible effect on the generated
images, which we call dormant . To add a new domain,
we select a dormant direction to repurpose. Its orthogo-
nal subspace, which we call base subspace , is sufficient
to represent the original domain [6]. We aim to repur-
pose the dormant direction such that traversing it would
now cause a transition between the original and the new
domain. Specifically, the transition should be disentangled
from the original domain’s factors of variation. To this end,
we define a repurposed affine subspace by transporting the
base subspace along the chosen dormant direction, as shown
in Fig. 3. We capture the new domain by applying a domain
adaptation method, transformed to operate only on latent
codes sampled from the repurposed subspace. A regular-
ization loss is applied on the base subspace to ensure that
the original domain is preserved. The original domain’s
factors of variation are implicitly preserved due to the sub-
spaces being parallel and the latent space being disentan-
gled. For multiple new domains, we simply repeat this pro-
cedure across multiple dormant directions.
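The subspace manipulation described above can be illustrated with a short sketch: latent codes are projected onto the base subspace by removing their dormant components, and a new domain is addressed by transporting such a code along one repurposed dormant direction; the orthonormality of the directions and the offset magnitude are assumptions for illustration.

```python
import torch

def project_to_base(w, dormant_dirs):
    """Remove the dormant components so w lies in the base subspace.

    w: (B, D) latent codes; dormant_dirs: (K, D) orthonormal dormant directions.
    """
    coeffs = w @ dormant_dirs.t()              # (B, K) components along dormant dirs
    return w - coeffs @ dormant_dirs           # (B, D) base-subspace codes

def sample_repurposed(w, dormant_dirs, domain_idx, offset):
    """Transport a base-subspace code along one repurposed dormant direction,
    which (after training) switches the generated output to the new domain."""
    base = project_to_base(w, dormant_dirs)
    return base + offset * dormant_dirs[domain_idx]
```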
We apply our method to the StyleGAN [13] architecture, with multiple datasets, and expand the generator with hun-
dreds of new factors of variation. Crucially, we show our
expanded model simultaneously generates high-quality im-
ages from both original and new domains, comparable to
specialized, domain-specific generators. Thus, a single ex-
panded generator supersedes hundreds of adapted genera-
tors, facilitating the deployment of generative models for
real-world applications. We additionally demonstrate that
the new domains are learned as global and disentangled fac-
tors of variation, alongside existing ones. This enables fine-
grained control over the generative process and paves the
way to new applications and capabilities, e.g., compositing
multiple domains (See Fig. 2). Finally, we conduct a de-
tailed analysis of key aspects of our method, such as the ef-
fect of the number of newly introduced domains, thus shed-
ding light on our method and, in the process, on the nature
of the latent space of generative models.
To summarize, our contributions are as follows:
• We introduce a new task – domain expansion of a pre-
trained generative model.
• We propose a novel latent space structure that is amenable
to representing new knowledge in a disentangled manner,
while maintaining existing knowledge intact.
• We present a simple paradigm transforming domain adap-
tation methods into domain expansion methods.
• We demonstrate successful domain expansion to hun-
dreds of new domains and illustrate its advantage over
domain adaptation methods.
|
Park_Mask-Guided_Matting_in_the_Wild_CVPR_2023 | Abstract
Mask-guided matting has shown great practicality com-
pared to traditional trimap-based methods. The mask-
guided approach takes an easily-obtainable coarse mask as
guidance and produces an accurate alpha matte. To ex-
tend the success toward practical usage, we tackle mask-
guided matting in the wild , which covers a wide range of
categories in their complex context robustly. To this end,
we propose a simple yet effective learning framework based
on two core insights: 1) learning a generalized matting
model that can better understand the given mask guidance
and 2) leveraging weak supervision datasets (e.g., instance
segmentation dataset) to alleviate the limited diversity and
scale of existing matting datasets. Extensive experimen-
tal results on multiple benchmarks, consisting of a newly
proposed synthetic benchmark (Composition-Wild) and ex-
isting natural datasets, demonstrate the superiority of the
proposed method. Moreover, we provide appealing results
on new practical applications (e.g., panoptic matting and
mask-guided video matting), showing the great generality
and potential of our model.
| 1. Introduction
Image matting aims to predict the opacity of ob-
jects, which enables precise separation from surround-
ing backgrounds. Due to the ill-posed nature of the
task, many works [7, 13, 21, 27, 30, 48] have improved
matting performance by relying on the manual guidance
of a trimap . However, pixel-level annotation of fore-
ground/background/unknown is extremely burdensome, re-
stricting its usage in many practical applications such as im-
age/video editing and film production. Recently, many ef-
ficient alternatives for user guidance have been proposed,
including trimap-free [15, 32], additional background im-
ages [22, 34], scribble [43], and the user clicks [45].
Among them, the mask-guided approach [50] shows a
great trade-off between performance and intensity of user
interaction. It utilizes a coarse mask as guidance, which is
much easier to obtain either manually or from off-the-shelf
Figure 1. Qualitative comparisons of MGMatting [50] and ours
in the wild (columns: Image and Guidance, Ours, MGMatting).
The mask guidance is overlaid on images with blue color. Best
viewed zoomed in.
segmentation models [2, 10]. With only the coarse spatial
prior, the mask-guided matting model [50] shows compara-
ble or even better performance than the trimap-based com-
petitors [13,17,21,27,48] on synthetic Composition-1k [48]
and a real-world human matting dataset [50]. However, de-
spite the encouraging results, we see the previous state-of-
the-art model [50] struggles to obtain desirable alpha matte
in complex real-world scenes (see Fig. 1).
With this observation, we tackle mask-guided matting
in the wild . Specifically, we formulate unique setups and
emerging challenges of the new task as follows: (1) We aim
to handle objects in their complex context, reflecting the
characteristics of natural images. The previous method [50]
evaluates their model on iconic-object images [1,29] where
only a single object is in the center. As the model can
easily find the target object in such images, the model’s
real instance discrimination ability is, in fact, veiled. On
the contrary, in an ‘in-the-wild’ setting, it is crucial to pre-
cisely localize the target object from the given coarse/noisy
mask guidance ( i.e.mask awareness). (2) Our model tar-
gets to deal with diverse categories of objects in natural
images. Unlike most previous methods that improve gen-
eralization performance at the expense of category-specific
regime ( e.g., limiting to humans [15] or animals [19]), we
aim to understand distinctive matting patterns of vast cat-
egories. (3) Limited data problem makes the new setting
more complicated. Due to the labeling complexity, anno-
tating alpha matte for objects in common scenes, e.g., the
COCO dataset [24], is infeasible. As a sidestep, previ-
ous benchmarks [32, 48] extract the alpha matte and fore-
ground colors from images with simple backgrounds. These
are composited on various backgrounds [6, 24], and result-
ing samples are used to train and evaluate matting models.
However, due to the inevitable composition artifacts, the
models usually show limited generalization performance.
In that sense, how to train and evaluate the in-the-wild mat-
ting model remains an open question.
Toward this goal, we propose a simple yet effective
learning framework for a generalized mask-guided matting
model. First, we investigate fundamental reasons for the
poor generalization of the previous mask-guided matting
model [50] and find that this is mainly from the training
data generation process. Specifically, the previous compo-
sition process includes instance merging data augmentation,
which merges several foreground objects into a single ob-
ject. While this augmentation is effective in the trimap-
based methods [21, 30, 41], it implicitly makes a negative
bias for the mask-guided matting model to ignore the guid-
ance. Thus, the model struggles to localize the target objects
in complex natural scenes. We alleviate the bias by propos-
ing an instance-wise learning objective, where the model is
supervised to segment one of multiple instances according
to the guidance. By doing so, the model learns strong se-
mantic representation regarding complex relations and soft
transitions between objects. Despite the simplicity of the
proposal, this greatly improves performance in the wild.
Second, we explore a practical solution to make the
mask-guided model handle various categories of objects ro-
bustly. Instead of scaling the matting dataset, we leverage
a dataset with weak supervision [14, 46] ( i.e., instance seg-
mentation dataset [24]), as the coarse instance masks are
easier to obtain over the diverse categories of objects. To
effectively hallucinate the fine supervision signal with the
weak localization guidance, we come up with a self-training
framework [36, 37, 47]. Specifically, a pseudo label is gen-
erated based on a weakly-augmented input (both image and
instance mask annotation as guidance), which supervises
the model prediction on a strongly augmented version of
them. During self-training, the model is not only adapting to
the in-the-wild scenario in a self-evolving manner but also
being robust to noise in both image and guidance.
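One self-training iteration of this kind might look like the sketch below, where a pseudo alpha matte predicted from the weakly-augmented input supervises the prediction on a strongly-augmented version; the L1 loss, the augmentation interfaces, and the omission of geometric re-alignment between the two views are simplifying assumptions.

```python
import torch

def self_training_step(model, image, coarse_mask, weak_aug, strong_aug, optimizer):
    """One self-training iteration on a weakly-labeled (segmentation-only) sample.

    weak_aug / strong_aug: callables mapping (image, mask) -> (image, mask).
    For brevity this assumes the strong augmentation is photometric only,
    so the pseudo label needs no geometric re-alignment.
    """
    model.eval()
    with torch.no_grad():
        img_w, mask_w = weak_aug(image, coarse_mask)
        pseudo_alpha = model(img_w, mask_w)        # teacher-style pseudo label

    model.train()
    img_s, mask_s = strong_aug(image, coarse_mask)
    pred_alpha = model(img_s, mask_s)
    loss = torch.nn.functional.l1_loss(pred_alpha, pseudo_alpha)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```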
To verify the in-the-wild performance of mask-guided
matting, we formally define an evaluation protocol involv-
ing multiple sub-benchmarks: Composition-Wild, AIM-500 [20], COCO [24]. We first design an in-the-wild ex-
tension of the popular synthetic Composition-1k bench-
mark [48], namely Composition-Wild. We simulate com-
plex real-world images by compositing multiple foreground
objects. To bring valuable insight on the failure cases of
the model, we design sub-metrics for Composition-Wild.
In addition, we use the AIM-500 dataset to establish quan-
titative results on natural images ( i.e., with no composition
artifacts), although most images are iconic-object images
with simple backgrounds. Finally, we provide qualitative
outputs of our mask-guided matting model on the COCO
dataset [24] which is one of the most representative in-the-
wild datasets.
To summarize, we make the following main contribu-
tions. 1) To our best knowledge, it is the first work to
explore mask-guided matting in the wild. 2) We develop
a simple yet effective learning framework leveraging both
composited and weak-guidance images. 3) We design an
evaluation setup for the new task. 4) We initiate several in-
teresting extensions: video and panoptic matting.
|
Li_Open-Set_Semantic_Segmentation_for_Point_Clouds_via_Adversarial_Prototype_Framework_CVPR_2023 | Abstract
Recently, point cloud semantic segmentation has attracted
much attention in computer vision. Most of the existing
works in literature assume that the training and testing
point clouds have the same object classes, but they are gen-
erally invalid in many real-world scenarios for identifying
the 3D objects whose classes are not seen in the training
set. To address this problem, we propose an Adversarial
Prototype Framework (APF) for handling the open-set
3D semantic segmentation task, which aims to identify 3D
unseen-class points while maintaining the segmentation
performance on seen-class points. The proposed APF
consists of a feature extraction module for extracting point
features, a prototypical constraint module, and a feature
adversarial module. The prototypical constraint module is
designed to learn prototypes for each seen class from point
features. The feature adversarial module utilizes generative
adversarial networks to estimate the distribution of unseen-
class features implicitly, and the synthetic unseen-class
features are utilized to prompt the model to learn more
effective point features and prototypes for discriminating
unseen-class samples from the seen-class ones. Experi-
mental results on two public datasets demonstrate that the
proposed APF outperforms the comparative methods by a
large margin in most cases.
| 1. Introduction
Point cloud semantic segmentation is an important and
challenging topic in computer vision. Most of the existing
works [9–11, 29] in literature are based on the assumption
that both the training and testing point clouds have the same
*Corresponding author
Figure 1. Visualization of the goals of anomaly detection (AD,
panel (a)) and open-set 3D semantic segmentation (O3D, panel
(b)) on SemanticKITTI [2]. AD is to identify the unseen-class
data, while O3D is to simultaneously identify the unseen-class
data and segment seen-class data. The unseen-class points are
colorized in yellow.
object classes, however, this assumption is no more valid in
many real-world scenarios, due to the fact that the classes
of some observed 3D points may not be presented in the
training set. Hence, the following problem on open-set 3D
semantic segmentation is naturally raised: How does a seg-
mentation model simultaneously identify unseen-class 3D
points and maintain the segmentation accuracy of seen-class
3D points in open-set scenarios?
Compared with anomaly detection [3, 23, 26], open-set
3D semantic segmentation (O3D) is more challenging, for
it also needs to assign labels to seen-class data simultane-
ously, as shown in Figure 1. In fact, some existing tech-
niques [6,15,17,18] for open-set 2D semantic segmentation
(O2D) task could be extended to handle the O3D task, how-
ever, their open-set ability is generally limited in 3D scenar-
ios. In addition, to our best knowledge, only one pioneering
work [7] has investigated a special technique for O3D task.
In [7], an O3D method called REAL is proposed to utilize
normal classifiers to segment seen-class points and regard
the randomly resized objects as unseen-class objects which
are detected by the redundancy classifiers. REAL outper-
forms some extended O2D methods in the O3D task, how-
ever, the AUPR (Area Under the Precision-Recall curve)
is lower than 21% on two public datasets as shown in Ta-
ble 1 and Table 2 in Section 4, mainly because the resizing
process in REAL alters the geometric structure of the ini-
tial point clouds to some extent. These results indicate that
there still exists a huge space for improvement on O3D task.
Addressing the above issue, we propose an Adversarial
Prototype Framework (APF) for open-set 3D segmentation,
which segments point clouds from a discriminative perspec-
tive and estimates the distribution of unseen-class features
from a generative perspective. The proposed APF consists
of three modules: a feature extraction module, a prototyp-
ical constraint module, and a feature adversarial module.
The feature extraction module is employed to extract latent
features from the input point clouds, which could be an ar-
bitrary closed-set point cloud segmentation network in prin-
ciple. Given the point features, the prototypical constraint
module is explored from the discriminative perspective to
learn a prototype for each seen class. The feature adver-
sarial module is explored from the generative perspective,
which employs the generative adversarial networks (GAN)
to synthesize point features to estimate the unseen-class
feature distribution, based on the finding stated in [6] that
the unseen-class features usually aggregate in the center of
the feature space. And the synthesized unseen-class fea-
tures in this module could further prompt the model to learn
more discriminative point features and prototypes. After the
whole APF is trained, a point-to-prototype hybrid distance-
based criterion is introduced for open-set 3D segmentation.
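A simplified version of such a point-to-prototype decision rule is sketched below; since the hybrid distance is not spelled out in this excerpt, a plain Euclidean distance with a fixed threshold is used as a stand-in.

```python
import torch

def open_set_predict(point_feats, prototypes, unseen_threshold):
    """Point-to-prototype open-set segmentation sketch.

    point_feats: (N, D) point features; prototypes: (C, D), one per seen class.
    Points whose nearest-prototype distance exceeds the threshold are
    labeled unseen (-1); otherwise the nearest seen class is assigned.
    """
    dists = torch.cdist(point_feats, prototypes)       # (N, C) distances
    min_dist, labels = dists.min(dim=1)
    labels[min_dist > unseen_threshold] = -1            # flag unseen-class points
    return labels
```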
In sum, the contributions of this paper are as follows:
• We propose the adversarial prototype framework
(APF) for handling the open-set 3D semantic segmen-
tation task. Under the proposed APF, various open-set
3D segmentation methods could be straightforwardly
derived by utilizing existing closed-set 3D segmenta-
tion networks as the feature extraction module. The
effectiveness of the proposed APF has been demon-
strated by the experimental results in Section 4.
• Under the proposed framework, we explore the pro-
totypical constraint module, which learns the corre-
sponding prototype for each seen class. The learned
prototypes are not only conducive to segmenting seen-
class points, but also to detecting unseen-class points.
• Under the proposed framework, we explore the fea-
ture adversarial module to synthesize unseen-class fea-
tures. The synthetic features are helpful for improving
the discriminability of both the seen-class features and
prototypes via the adversarial mechanism.
|
Li_Robust_Model-Based_Face_Reconstruction_Through_Weakly-Supervised_Outlier_Segmentation_CVPR_2023 | Abstract
In this work, we aim to enhance model-based face re-
construction by avoiding fitting the model to outliers, i.e.
regions that cannot be well-expressed by the model such as
occluders or make-up. The core challenge for localizing
outliers is that they are highly variable and difficult to anno-
tate. To overcome this challenging problem, we introduce a
joint Face-autoencoder and outlier segmentation approach
(FOCUS). In particular, we exploit the fact that the outliers
cannot be fitted well by the face model and hence can be
localized well given a high-quality model fitting. The main
challenge is that the model fitting and the outlier segmenta-
tion are mutually dependent on each other, and need to be
inferred jointly. We resolve this chicken-and-egg problem
with an EM-type training strategy, where a face autoen-
coder is trained jointly with an outlier segmentation net-
work. This leads to a synergistic effect, in which the seg-
mentation network prevents the face encoder from fitting to
the outliers, enhancing the reconstruction quality. The im-
proved 3D face reconstruction, in turn, enables the segmen-
tation network to better predict the outliers. To resolve the
ambiguity between outliers and regions that are difficult to
fit, such as eyebrows, we build a statistical prior from syn-
thetic data that measures the systematic bias in model fit-
ting. Experiments on the NoW testset demonstrate that FO-
CUS achieves SOTA 3D face reconstruction performance
among all baselines trained without 3D annotation. More-
over, our results on CelebA-HQ and AR database show that
the segmentation network can localize occluders accurately
∗Denotes same contribution.
Codes available at: github.com/unibas-gravis/Occlusion-Robust-MoFA
C.Li is funded by the China Scholarship Council (CSC) from the Min-
istry of Education of P.R. China. B.Egger was supported by a Post-
Doc Mobility Grant, Swiss National Science Foundation P400P2 191110.
A.Kortylewski acknowledges support via his Emmy Noether Research
Group funded by the German Science Foundation (DFG) under Grant No.
468670075. Sincere gratitude to Tatsuro Koizumi and William A. P. Smith
who offered the MoFA re-implementation.
Figure 1. FOCUS conducts face reconstruction and outlier seg-
mentation jointly under weak supervision. Top to bottom: target
images, our reconstruction images, and estimated outlier masks.
despite being trained without any segmentation annotation.
| 1. Introduction
Monocular 3D face reconstruction aims at estimating the
pose, shape, and albedo of a face, as well as the illumination
conditions and camera parameters of the scene. Solving for
all these factors from a single image is an ill-posed problem.
Model-based face autoencoders [31] overcome this problem
through fitting a 3D Morphable Model (3DMM) [1, 9] to a
target image. The 3DMM provides prior knowledge about
the face albedo and geometry such that 3D face reconstruc-
tion from a single image becomes feasible, enabling face
autoencoders to set the current state-of-the-art in 3D face
reconstruction [5]. The network architectures in the face au-
toencoders are devised to enable end-to-end reconstruction
and to enhance the speed compared to optimization-based
alternatives [19, 41], and sophisticated losses are designed
to stabilize the training and to get better performance [5].
A major remaining challenge for face autoencoders is
that their performance in in-the-wild environments is still
limited by nuisance factors such as model outliers, extreme
illumination, and poses. Among those nuisances, model
outliers are ubiquitous and inherently difficult to handle be-
cause of their wide variety in shape, appearance, and loca-
tion. The outliers are a combination of the occlusions that
do not belong to the face and the mismatches which are the
facial parts but cannot be depicted well by the face model,
such as pigmentation and makeup on the texture and wrin-
kles on the shape. Fitting to the outliers often distorts the
prediction (see Fig. 3) and fitting to the mismatches can-
not improve the fitting further due to the limitation of the
model. Therefore we propose to only fit the inliers, i.e. the
target with the outliers excluded.
To prevent distortion caused by model outliers, existing
methods often follow a bottom-up approach. For exam-
ple, a multi-view shape consistency loss is used as prior
to regularize the shape variation of the same face in dif-
ferent images [5, 10, 33], or the face symmetry is used
to detect occluders [34]. Training the face encoder with
dense landmark supervision also imposes strong regulariza-
tion [37, 42], while pairs of realistic images and meshes are
costly to acquire. Most existing methods apply face [27] or
skin [5] segmentation models and subsequently exclude the
non-facial regions during reconstruction. These segmenta-
tion methods operate in a supervised manner, which suffers
from the high cost and efforts for acquiring a great variety
of occlusion annotations from in-the-wild images.
In this work, we introduce an approach to handle outliers
for model-based face reconstruction that is highly robust,
without requiring any annotations for skin, occlusions, or
dense landmarks. In particular, we propose to train a Face-
autOenCoder and oUtlier Segmentation network, abbrevi-
ated as FOCUS, in a cooperative manner. To train the seg-
mentation network in an unsupervised manner, we exploit
the fact that the outliers cannot be well-expressed by the
face model to guide the decision-making process of an out-
lier segmentation network. Specifically, the discrepancy be-
tween the target image and the rendered face image (Fig. 1
1st and 2nd rows) is evaluated by several losses that can
serve as a supervision signal by preserving the similarities
among the target image, the reconstructed image, and the
reconstructed image under the estimated outlier mask.
The training process follows the core idea of the
Expectation-Maximization (EM) algorithm, by alternating
between training the face autoencoder given the currently
estimated segmentation mask, and subsequently training the
segmentation network based on the current face reconstruc-
tion. The EM-like training strategy resolves the chicken-
and-egg problem that the outlier segmentation and model
fitting are dependent on each other. This leads to a syner-
gistic effect, in which the outlier segmentation first guides
the face autoencoder to fit image regions that are easy to
classify as face regions. The improved face fitting, in turn,
enables the segmentation model to refine its prediction.
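The alternating schedule can be summarized with the following sketch; the loss interfaces and update order are placeholders meant only to convey the EM-like structure, not the paper's exact objectives.

```python
def train_focus(face_ae, seg_net, loader, opt_face, opt_seg, epochs,
                face_loss, seg_loss):
    """EM-style alternation sketch: fit the face model on the currently
    estimated inlier region, then refine the outlier mask given the fit.
    face_loss / seg_loss are placeholders for the actual objectives.
    """
    for _ in range(epochs):
        for img in loader:
            # E-like step: current outlier mask from the segmentation network
            mask = seg_net(img).detach()

            # M-like step (1): update the face autoencoder on inlier pixels only
            recon = face_ae(img)
            loss_f = face_loss(recon, img, inlier_mask=1.0 - mask)
            opt_face.zero_grad()
            loss_f.backward()
            opt_face.step()

            # M-like step (2): update the segmentation network given the fit
            recon = face_ae(img).detach()
            loss_s = seg_loss(seg_net(img), recon, img)
            opt_seg.zero_grad()
            loss_s.backward()
            opt_seg.step()
    return face_ae, seg_net
```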
We define in-domain misfits as errors in regions that
a fixed model can express but consistently fails to fit well, which
are observed in the eyebrows and the lip region.
pipeline. Model-based face autoencoders use image-level
losses only, which are highly non-convex and suffer from
local optima. Consequently, it is difficult to converge to the
globally optimal solution. In this work, we propose to mea-
sure and adjust the in-domain misfits with a statistical prior.
Our misfit prior learns from synthetic data at which regions
these systematic errors occur on average. Subsequently, the
learnt prior can be used to counteract these errors for pre-
dictions on real data, especially when our FOCUS structure
excludes the outliers. Building the prior requires only data
generated by a linear face model without any enhancement
and no further improvement in landmark detection.
We demonstrate the effectiveness of our proposed
pipeline by conducting experiments on the NoW testset
[29], where we achieve state-of-the-art performance among
model-based 3D face methods without 3D supervision. Re-
markably, experiments on the CelebA-HQ dataset [20] and
the AR database [22] validate that our method is able to
predict accurate occlusion masks without requiring any su-
pervision during training.
In summary, we make the following contributions:
1. We introduce an approach for model-based 3D face
reconstruction that is highly robust, without requiring
any human skin or occlusion annotation.
2. We propose to compensate for the misfits with an in-
domain statistical misfit prior, which is easy to imple-
ment and benefits the reconstruction.
3. Our model achieves SOTA performance at self-
supervised 3D face reconstruction and provides accu-
rate estimates of the facial occlusion masks on in-the-
wild images.
|
Lv_Unbiased_Multiple_Instance_Learning_for_Weakly_Supervised_Video_Anomaly_Detection_CVPR_2023 | Abstract
Weakly Supervised Video Anomaly Detection (WSVAD)
is challenging because the binary anomaly label is only
given on the video level, but the output requires snippet-
level predictions. So, Multiple Instance Learning (MIL) is
prevailing in WSVAD. However, MIL is notoriously known
to suffer from many false alarms because the snippet-level
detector is easily biased towards the abnormal snippets with
simple context, confused by the normality with the same
bias, and missing the anomaly with a different pattern. To
this end, we propose a new MIL framework: Unbiased MIL
(UMIL), to learn unbiased anomaly features that improve
WSVAD. At each MIL training iteration, we use the cur-
rent detector to divide the samples into two groups with dif-
ferent context biases: the most confident abnormal/normal
snippets and the rest ambiguous ones. Then, by seeking
the invariant features across the two sample groups, we
can remove the variant context biases. Extensive exper-
iments on benchmarks UCF-Crime and TAD demonstrate
the effectiveness of our UMIL. Our code is provided at
https://github.com/ktr-hubrt/UMIL.
| 1. Introduction
Video Anomaly Detection (VAD) aims to detect events
among video sequences that deviate from expectation,
which is widely applied in real-world tasks such as intel-
ligent manufacturing [8], TAD surveillance [9,22] and pub-
lic security [25, 30]. To learn such a detector, conventional
fully-supervised VAD [1] is impractical as the scattered
but diverse anomalies require extremely expensive labeling
cost. On the other hand, unsupervised VAD [3, 11, 13, 35,
42] by only learning on normal videos to detect open-set
anomalies often triggers false alarms, as it is essentially ill-
posed to define what is normal and abnormal by giving only
*Corresponding author
Figure 1. Two anomalies, Explosion (a) and Vandalism (b), are
illustrated; the anomaly-score curves below each video plot Score
(0 to 1) over Time. Among each video sequence, we use red boxes
to highlight the ground-truth anomaly regions as in the first row.
The corresponding anomaly curves of an MIL-based model are
depicted below. False alarms and real anomalies are linked to the
curves with blue arrows and green arrows respectively. Best
viewed in color.
normal videos without any prior knowledge. Hence, we are
interested in a more practical setting: Weakly Supervised
VAD (WSVAD) [12, 43], where only video-level binary la-
bels (i.e., normal vs. abnormal) are available.
In WSVAD, each video sequence is partitioned into
multiple snippets. Hence, all the snippets are normal
in a normal video, and at least one snippet contains the
anomaly in an abnormal one. The goal of WSVAD is to
train a snippet-level anomaly detector using video-level la-
bels. The mainstream method is Multiple Instance Learn-
ing (MIL) [22, 30]—multiple instances refer to the snip-
pets in each video, and learning is conducted by decreas-
ing the predicted anomaly score for each snippet in a nor-
mal video, and increasing that only for the snippet with the
largest anomaly score in an abnormal video. For example,
Figure 1a shows an abnormal video containing an explo-
sion scene, and the detector is trained by MIL to increase
the anomaly score for the most anomalous explosion snip-
pet (green link).
However, MIL is easily biased towards the simplest con-
text shortcut in a video. We observe in Figure 1a that the de-
Figure 2. The feature space is drawn with Context Feature and
Invariance Feature axes; (a)-(c) mark the failure cases. Red:
Confident Set, Blue: Ambiguous Set. •: Normal sample, ▲:
Abnormal sample, Gray instances: Failure cases. The red line
denotes the classifier trained under MIL. The invariant classifier
(black line) can be learned by combining confident snippets
learning in MIL (red line) and the ambiguous snippets clustering
(blue line). Best viewed in color.
tector is biased to smoke, as the pre-explosion snippet with
only smoke is also assigned a large anomaly score (blue
link). This biased detector can trigger false alarms on smoke
snippets without anomaly, e.g., a smoking chimney. More-
over, it could also fail in videos with multiple anomalies of
different contexts. In Figure 1b, the video records two men
vandalizing a car, where only the second one has substantial
motions. We notice that the two snippets of them have large
differences in the anomaly scores, and only the latter is pre-
dicted as an anomaly. This shows that the detector is biased
to the drastic motion context while being less sensitive to
the subtle vandalism behavior, which is the true anomaly.
The root of MIL’s biased predictions lies in its training
scheme with biased sample selection. As shown in Fig-
ure 2, the bottom-left cluster (denoted as the red ellipse)
corresponds to the confident normal snippets, e.g., an empty
crossroad or an old man standing in a room, which are either
from normal videos as the ground truth or from abnormal
videos but visually similar to the ground-truth ones. On the
contrary, the top-right cluster denotes the confident abnor-
mal ones, which not only contain the true anomaly features
(e.g., explosion and vandalism) but also include the context
features commonly appearing with anomaly under a context
bias ( e.g., smoke and motions). In MIL, the trained detector
is dominated by the confident samples, corresponding to the
top-right cluster with the abnormal representation and the
bottom-left cluster with the normal representation. Hence
the learned detector (red line) inevitably captures the con-
text bias in the confident samples. Consequently, the biased
detector generates ambiguous predictions on snippets with
a different context bias (the red line mistakenly crossing the
blue points), e.g., smoke but normal (industrial exhaust inFigure 2a), substantial motion but normal (equipment main-
tenance in Figure 2b), or subtle motion but abnormal (van-
dalizing the rear-view mirror in Figure 2c), leading to the
aforementioned failure cases.
To this end, we aim to build an unbiased MIL detector
by training with both the confident abnormal/normal and
the ambiguous ones. Specifically, at each UMIL training
iteration, we divide the snippets into two sets using the cur-
rent detector: 1) the confident set with abnormal and nor-
mal snippets and 2) the ambiguous set with the rest snip-
pets, e.g., the two sets are enclosed with red circles and
blue circles in Figure 2, respectively. The ambiguous set is
grouped into two unsupervised clusters ( e.g., the two blue
circles separated by the blue line) to discover the intrinsic
difference between normal and abnormal snippets. Then,
we seek an invariant binary classifier between the two sets
that separate the abnormal/normal in the confident set and
the two clusters in the ambiguous one. The rationale of
the proposed invariance pursuit is that the snippets in the
ambiguous set must have a different context bias from the
confident set, otherwise, they will be selected into the same
set. Therefore, given a different context but the same true
anomaly, the invariant pursuit will turn to the true anomaly
(e.g., the black line).
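A minimal sketch of this partition step is shown below: snippets are split by the current detector's confidence into a confident and an ambiguous set, and the ambiguous features are clustered into two groups; the thresholds and the use of k-means are illustrative choices, not necessarily those of UMIL.

```python
import torch
from sklearn.cluster import KMeans

def split_and_cluster(scores, feats, low=0.2, high=0.8):
    """Partition snippets by the current detector's confidence and cluster
    the ambiguous ones into two groups (thresholds/KMeans are illustrative).

    scores: (N,) anomaly scores in [0, 1]; feats: (N, D) snippet features.
    """
    confident = (scores <= low) | (scores >= high)
    ambiguous = ~confident

    conf_labels = (scores[confident] >= high).long()            # 0 normal, 1 abnormal
    amb_feats = feats[ambiguous].detach().cpu().numpy()
    amb_clusters = KMeans(n_clusters=2, n_init=10).fit_predict(amb_feats)
    # An invariant classifier is then sought that separates conf_labels on the
    # confident set and the two clusters on the ambiguous set.
    return confident, conf_labels, ambiguous, amb_clusters
```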
Overall, we term our approach as Unbiased MIL
(UMIL) . Our contributions are summarized below:
• UMIL is a novel WSVAD method that learns an unbiased
anomaly detector by pursuing the invariance across the
confident and ambiguous snippets with different context
biases.
• Thanks to the unbiased objective, UMIL is the first WS-
VAD method that combines feature fine-tuning and de-
tector learning into an end-to-end training scheme. This
leads to a more tailored feature representation for V AD.
• UMIL is equipped with a fine-grained video partitioning
strategy for preserving the subtle anomaly information in
video snippets.
• These contribute to the improved performance over the
current state-of-the-art methods on UCF-Crime [30] (1.4%
AUC) and TAD [22] (3.3% AUC) benchmarks.
Note that UMIL brings more than 2% AUC gain com-
pared with the MIL baseline on both datasets, which jus-
tifies the effectiveness of UMIL.
|
Li_SCConv_Spatial_and_Channel_Reconstruction_Convolution_for_Feature_Redundancy_CVPR_2023 | Abstract
Convolutional Neural Networks (CNNs) have achieved
remarkable performance in various computer vision tasks
but this comes at the cost of tremendous computational
resources, partly due to convolutional layers extract-
ing redundant features. Recent works either compress
well-trained large-scale models or explore well-designed
lightweight models. In this paper, we make an attempt
to exploit spatial and channel redundancy among features
for CNN compression and propose an efficient convolu-
tion module, called SCConv (Spatial and Channel recon-
struction Convolution), to decrease redundant computing
and facilitate representative feature learning. The pro-
posed SCConv consists of two units: spatial reconstruction
unit (SRU) and channel reconstruction unit (CRU). SRU
utilizes a separate-and-reconstruct method to suppress the
spatial redundancy while CRU uses a split-transform-and-
fuse strategy to diminish the channel redundancy. In addi-
tion, SCConv is a plug-and-play architectural unit that can
be used to replace standard convolution in various convolu-
tional neural networks directly. Experimental results show
that SCConv-embedded models are able to achieve better
performance by reducing redundant features with signifi-
cantly lower complexity and computational costs.
| 1. Introduction
In recent years, convolutional neural networks (CNNs)
have obtained widespread applications in computer vision
tasks [24] due to their ability to obtain representative fea-
tures. However, such success relies heavily on intensive
resources of computation and storage, which poses se-
vere challenges to their efficient deployment in resource-
constrained environments. Therefore, to address these chal-
lenges, various types of model compression strategies and
network designs have been explored to improve network efficiency [1, 2, 26]. The former includes network pruning,
*Corresponding author (e-mail: ywen@cs.ecnu.edu.cn)
weight quantization, low-rank factorization, and knowledge
distillation. To be specific, network pruning [17,22,30] is a
straightforward way to prune the uncritical neuron connec-
tions from an existing learned big model to make it thinner.
Weight quantization [9] mainly focuses on converting net-
work weights from floating-point types to integer ones to
save computation resources. Low-rank factorization [5] ap-
plies the matrix decomposition techniques to estimate the
informative parameters. Knowledge distillation [11, 34]
generates small student networks with the guidance of a
well-trained big teacher network. The common part of these
compression techniques is that they have been regarded as
post-processing steps, thus their performance is usually up-
per bounded by the given initial model. Meanwhile, the ac-
curacy of these methods drastically drops while achieving a
high compression rate.
Network design is another alternative way, which aims
at reducing the inherent redundancy in dense model param-
eters and further developing a lightweight network model.
For example, ResNet [10] and DenseNet [14] utilize an effi-
cient shortcut connection to improve the network topology,
which connects all preceding feature maps to diminish the
redundant parameters. ResNeXt [31] replaces traditional
convolutions with sparsely connected group convolutions
to reduce inter-channel connectivity. Networks like Xcep-
tion [4], MobileNet [12] and MobileNeXt [35] disentan-
gle standard convolution into depth-wise convolution and
point-wise convolution to further decrease the connection
density between channels. MicroNet [19] adopts micro-
factorized convolution to handle extremely low FLOPs by
integrating sparse connectivity into convolution. In addi-
tion, EfficientNet [27] learns to automatically search opti-
mal network architectures to lower the redundancy in dense
model parameters.
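The depthwise/pointwise factorization mentioned above can be written in a few lines; the channel sizes in this sketch are arbitrary and serve only to show the parameter reduction, and the snippet is not part of SCConv itself.

```python
# Sketch of the depthwise + pointwise factorization used by Xception/MobileNet-style
# designs; channel sizes are arbitrary and chosen only to show the parameter savings.
import torch.nn as nn

def count_params(m):
    return sum(p.numel() for p in m.parameters())

cin, cout, k = 64, 128, 3
standard = nn.Conv2d(cin, cout, k, padding=1)
separable = nn.Sequential(
    nn.Conv2d(cin, cin, k, padding=1, groups=cin),  # depthwise: one filter per channel
    nn.Conv2d(cin, cout, 1),                        # pointwise: 1x1 channel mixing
)
print(count_params(standard), count_params(separable))  # 73856 vs. 8960
```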
Moreover, in CNN architecture design, bottleneck struc-
ture has been well adopted, in which 3×3 convolutional
layers account for a majority of the model parameters and
FLOPs. Therefore various efficient convolutional opera-
|
Pietrantoni_SegLoc_Learning_Segmentation-Based_Representations_for_Privacy-Preserving_Visual_Localization_CVPR_2023 | Abstract
Inspired by properties of semantic segmentation, in this
paper we investigate how to leverage robust image segmen-
tation in the context of privacy-preserving visual localiza-
tion. We propose a new localization framework, SegLoc,
that leverages image segmentation to create robust, com-
pact, and privacy-preserving scene representations, i.e., 3D
maps. We build upon the correspondence-supervised, fine-
grained segmentation approach from [42], making it more
robust by learning a set of cluster labels with discriminative
clustering and additional consistency regularization terms, and
we jointly learn a global image representation along with
a dense local representation. In our localization pipeline,
the former will be used for retrieving the most similar im-
ages, the latter to refine the retrieved poses by minimizing
the label inconsistency between the 3D points of the map
and their projection onto the query image. In various ex-
periments, we show that our proposed representation al-
lows us to achieve (close-to) state-of-the-art pose estimation
results while only using a compact 3D map that does not
contain enough information about the original images for
an attacker to reconstruct personal information.
| 1. Introduction
Visual localization is the problem of estimating the pre-
cise camera pose – position and orientation – from which
the image was taken in a known scene. It is a core compo-
nent of systems such as self-driving cars [31], autonomous
robots [49], and mixed-reality applications [4, 53].
Traditionally, visual localization algorithms rely on a 3D
scene representation of the target area, which can be a 3D
point cloud map [29, 34, 35, 45, 46, 66, 68, 69, 73, 79], e.g.,
from Structure-from-Motion (SfM), or a learned 3D repre-
sentation [9,10,14,37,38,71,76]. The representation is typ-
ically derived from reference images with known camera
poses. Depending on the application scenario, these maps
Figure 1. The SegLoc localization pipeline: Our model jointly creates a robust global descriptor used to retrieve an initial pose (R0, T0) and dense local representations used to obtain the refined pose (R, T) by maximizing the label consistency between the reprojected 3D points and the query image.
need to be stored in the cloud, which raises important ques-
tions about memory consumption and privacy preserva-
tion. It is possible to reconstruct images from maps that
contain local image features [62], amongst the most widely
used for scene representation.
To tackle the above challenges that feature-based ap-
proaches may face, inspired by semantic-based [48,82] and
segmentation-based [42] approaches, we propose a visual
localization pipeline where robust segmentations are used
as the sole cue for localization, yielding reduced storage
requirements (compared to using local features) while in-
creasing privacy-preservation. Our proposed localization
pipeline, called SegLoc, follows standard structure based-
localization pipelines [34, 66] that represent the scene via
a 3D model: first, image retrieval based on a compact im-
age representation is used to coarsely localize a query im-
age. Given such an initial pose estimate, the camera pose
is refined by aligning the query image to the 3D map. Con-
trary to prior work that is based on extracting features di-
rectly from images, we derive a more abstract representa-
tion in the form of a robust dense segmentation based on a
set of clusters learned in a self-supervised manner. As illus-
trated in Figure 1, we use this segmentation to both extract
a global descriptor for image retrieval and for pose refine-
ment. The pose is refined by maximizing the label consis-
tency between the predictions in the query image and a set
of labeled 3D points in the scene.
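As a rough sketch of the label-consistency idea described above, the snippet below projects the labelled 3D points with one candidate pose and measures how often they land on pixels with the same predicted cluster label. The simple pinhole projection and variable names are assumptions for illustration; SegLoc's actual refinement optimizes this consistency over the pose rather than only scoring a fixed one.

```python
# Rough sketch of a label-consistency score for one candidate pose: project the
# labelled 3D map points and count label agreement with the predicted segmentation.
import numpy as np

def label_consistency(points, labels, R, t, K, label_map):
    """points: (N,3) 3D map points, labels: (N,) their cluster ids,
    R: (3,3), t: (3,), K: (3,3) intrinsics, label_map: (H,W) predicted labels."""
    cam = points @ R.T + t                      # world -> camera coordinates
    valid = cam[:, 2] > 1e-6                    # keep points in front of the camera
    uv = cam[valid] @ K.T
    uv = (uv[:, :2] / uv[:, 2:3]).round().astype(int)
    h, w = label_map.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, lbl = uv[inside], labels[valid][inside]
    if len(lbl) == 0:
        return 0.0
    return float(np.mean(label_map[uv[:, 1], uv[:, 0]] == lbl))
```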
Such an approach has multiple advantages. First, our
model is able to learn representations which are robust to
seasonal or appearance changes . Similar to semantic seg-
mentations, which are invariant to viewing conditions as the
semantic meaning of regions do not change, our represen-
tation is trained such that the same 3D point is mapped to
the same label regardless of viewing conditions. Second, it
results in low storage requirements , as instead of storing
high-dimensional feature descriptors, for each 3D point we
only keep its label. Finally, it allows privacy-preserving vi-
sual localization [15,22,28,78], as it creates a non-injective
mapping from multiple images showing similar objects with
different appearances to similar labels. While ensuring user
privacy comes at the cost of reduced pose accuracy [19,98],
our method comes close to state-of-the-art results with a
better accuracy vs. memory vs. privacy trade-off.
To summarize, our first contribution is a new localiza-
tion framework, called SegLoc , that extends the idea [41,
42] of learning robust fine-grained image segmentations in
a self-supervised manner. To that end, we leverage dis-
criminative clustering while putting more emphasis on rep-
resentation learning. Furthermore, we derive a full local-
ization pipeline, where our model jointly learns global im-
age representation to retrieve images for pose initialization,
and dense local representations for building a compact 3D
map – an order of magnitude smaller compared to feature-
based approaches – and to perform privacy-preserving pose
refinement. As a second contribution , we draw a con-
nection between segmentation-based representations and
privacy-preserving localization, opening up viable alter-
natives to keypoint-based methods within the accuracy-
privacy-memory trade-off. We evaluate our approach in
multiple indoor and outdoor environments while quantita-
tively measuring privacy through detailed experiments.
|
Mou_Large-Capacity_and_Flexible_Video_Steganography_via_Invertible_Neural_Network_CVPR_2023 | Abstract
Video steganography is the art of unobtrusively conceal-
ing secret data in a cover video and then recovering the
secret data through a decoding protocol at the receiver end.
Although several attempts have been made, most of them
are limited to low-capacity and fixed steganography. To
rectify these weaknesses, we propose a Large-capacity and
Flexible Video Steganography Network (LF-VSN) in this
paper. For large-capacity, we present a reversible pipeline
to perform multiple videos hiding and recovering through
a single invertible neural network (INN). Our method can
hide/recover 7 secret videos in/from 1 cover video with
promising performance. For flexibility, we propose a key-
controllable scheme, enabling different receivers to recover
particular secret videos from the same cover video through
specific keys. Moreover, we further improve the flexibility
by proposing a scalable strategy in multiple videos hid-
ing, which can hide variable numbers of secret videos in
a cover video with a single model and a single training
session. Extensive experiments demonstrate that with the
significant improvement of the video steganography perfor-
mance, our proposed LF-VSN has high security, large hid-
ing capacity, and flexibility. The source code is available at
https://github.com/MC-E/LF-VSN .
| 1. Introduction
Steganography [10] is the technology of hiding some se-
cret data into an inconspicuous cover medium to generate
a stego output, which only allows the authorized receiver
to recover the secret information. Unauthorized people can
only access the content of the plain cover medium and can
hardly detect the existence of secret data. In the current digital
world, image and video are commonly used covers, widely
applied in digital communication [27], copyright protection [36],
information certification [31], e-commerce [26],
and many other practical fields [10, 12].
∗Corresponding author. This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding, SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence, and Shenzhen Research Project JCYJ20220531093215035.
Traditional video steganography methods usually hide
messages in the spatial domain or transform domain by
manual design. Video steganography in the spatial domain
means embedding is done directly to the pixel values of
video frames. Least significant bits (LSB) [8,45] is the most
well-known spatial-domain method, replacing the n least
significant bits of the cover image with the most significant
n bits of the secret data. Many researchers have used LSB
replacement [6] and LSB matching [34] for video steganog-
raphy. The transform-domain hiding [5, 17, 39] is done by
modifying certain frequency coefficients of the transformed
frames. For instance, [44] proposed a video steganogra-
phy technique by manipulating the quantized coefficients
of DCT (Discrete Cosine Transformation). [9] proposed to
compare the DWT (Discrete Wavelet Transformation) co-
efficients of the secret image and the cover video for hid-
ing. However, these traditional methods have low hiding
capacity and invisibility, easily being cracked by steganaly-
sis methods [15, 28, 33].
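As a toy illustration of the classical LSB replacement described above (the traditional scheme, not the INN-based method proposed in this paper), the following snippet overwrites the n least significant bits of a cover frame with the n most significant bits of the secret.

```python
# Toy illustration of n-bit LSB replacement for a single frame.
import numpy as np

def lsb_embed(cover, secret, n=2):
    """cover, secret: uint8 arrays of the same shape; returns the stego frame."""
    top_bits = secret >> (8 - n)                 # n most significant secret bits
    return ((cover >> n) << n) | top_bits        # overwrite the n LSBs of the cover

def lsb_extract(stego, n=2):
    return (stego & ((1 << n) - 1)) << (8 - n)   # approximate secret recovery

cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
secret = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
stego = lsb_embed(cover, secret)
assert np.all(np.abs(stego.astype(int) - cover.astype(int)) < (1 << 2))
```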
Recently, some deep-learning methods were proposed to
improve the hiding capacity and performance. Early works
are presented in image steganography. Baluja [3, 4] pro-
posed the first deep-learning method to hide a full-size im-
age into another image. Recently, [21,32] proposed design-
ing the steganography model as an invertible neural network
(INN) [13,14] to perform image hiding and recovering with
a single model. For video steganography, Khare et al. [22]
first utilized back propagation neural networks to improve
the performance of the LSB-based scheme. [43] is the first
deep-learning method to hide a video into another video.
Unfortunately, it simply aims to hide the residual across ad-
jacent frames in a frame-by-frame manner, and it requires
several separate steps to complete the video hiding and re-
Figure 1. Illustration of our large-capacity and flexible video steganography network (LF-VSN). Our LF-VSN reversibly solves multiple
videos hiding and recovering with a single model and the same parameters. It has large-capacity, key-controllable and scalable advantages.
covering. [35] utilizes a 3D-CNN to explore the temporal cor-
relation in video hiding. However, it uses two separate
3D UNets to perform hiding and recovering, and it has high
model complexity (367.2 million parameters). While video
steganography has achieved impressive success in terms of
hiding capacity to hide a full-size video, the more challeng-
ing multiple videos hiding has hardly been studied. Also,
the steganography pipeline is rigid.
In this paper, we study the large-capacity and flexible
video steganography, as shown in Fig. 1. Concretely, we
propose a reversible video steganography pipeline, achiev-
ing large capacity to hide/recover multiple secret videos
in/from a cover video. At the same time, our model
complexity is also attractive by combining several weight-
sharing designs. The flexibility of our method is twofold.
First, we propose a key-controllable scheme, enabling dif-
ferent receivers to recover particular secret videos with spe-
cific keys. Second, we propose a scalable strategy, which
can hide variable numbers of secret videos into a cover
video with a single model and a single training session. To
summarize, this work has the following contributions:
• We propose a large-capacity video steganography
method, which can hide/recover multiple (up to 7) se-
cret videos in/from a cover video. Our hiding and re-
covering are fully reversible via a single INN.
• We propose a key-controllable scheme with which dif-
ferent receivers can recover particular secret videos
from the same cover video via specific keys.
• We propose a scalable embedding module, utilizing a
single model and a single training session to satisfy
different requirements for the number of secret videos
hidden in a cover video.
• Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance with
large hiding capacity and flexibility.
|
Lo_Spatio-Temporal_Pixel-Level_Contrastive_Learning-Based_Source-Free_Domain_Adaptation_for_Video_Semantic_CVPR_2023 | Abstract
Unsupervised Domain Adaptation (UDA) of semantic
segmentation transfers labeled source knowledge to an un-
labeled target domain by relying on accessing both the
source and target data. However, the access to source
data is often restricted or infeasible in real-world scenar-
ios. Under the source data restrictive circumstances, UDA
is less practical. To address this, recent works have ex-
plored solutions under the Source-Free Domain Adaptation
(SFDA) setup, which aims to adapt a source-trained model
to the target domain without accessing source data. Still,
existing SFDA approaches use only image-level informa-
tion for adaptation, making them sub-optimal in video ap-
plications. This paper studies SFDA for Video Semantic
Segmentation (VSS), where temporal information is lever-
aged to address video adaptation. Specifically, we pro-
pose Spatio-Temporal Pixel-Level (STPL) contrastive learn-
ing, a novel method that takes full advantage of spatio-
temporal information to tackle the absence of source data
better. STPL explicitly learns semantic correlations among
pixels in the spatio-temporal space, providing strong self-
supervision for adaptation to the unlabeled target domain.
Extensive experiments show that STPL achieves state-of-
the-art performance on VSS benchmarks compared to cur-
rent UDA and SFDA approaches. Code is available at:
https://github.com/shaoyuanlo/STPL
| 1. Introduction
The availability of large amounts of labeled data has
made it possible for various deep networks to achieve re-
markable performance on Image Semantic Segmentation
(ISS) [2, 4, 30]. However, these deep networks often general-
ize poorly on target data from a new unlabeled domain that is
visually distinct from the source training data. Unsupervised
Domain Adaptation (UDA) attempts to mitigate this domain
shift problem by using both the labeled source data and un-
*This work was mostly done when S.-Y. Lo was an intern at Amazon.
Figure 1. Comparison of VSS accuracy. Video-based UDA meth-
ods [12, 38, 49] outperform image-based UDA methods [33, 51],
showing the importance of video-based strategies for the VSS task.
Image-based SFDA methods [16, 39] perform lower than the UDA
methods, which shows the difficulty of the more restricted SFDA
setting. The proposed STPL, even with SFDA, achieves the best
accuracy and locates at the top-right corner of the chart (i.e., more
restriction, but higher accuracy).
labeled target data to train a model transferring the source
knowledge to the target domain [11, 12, 31, 32, 38, 41]. UDA
is effective but relies on the assumption that both source and
target data are available during adaptation. In real-world
scenarios, the access to source data is often restricted (e.g.,
data privacy, commercial proprietary) or infeasible (e.g.,
data transmission efficiency, portability). Hence, under these
source data restrictive circumstances, UDA approaches are
less practical.
To deal with these issues, the Source-Free Domain Adap-
tation (SFDA) setup, also referred to as Unsupervised Model
Adaptation (UMA), has been recently introduced in the lit-
erature [6, 26, 27, 52]. SFDA aims to use a source-trained
model (i.e., a model trained on labeled source data) and adapt
it to an unlabeled target domain without requiring access to
the source data. More precisely, under the SFDA formula-
tion, given a source-trained model and an unlabeled target
dataset, the goal is to transfer the learned source knowledge
to the target domain. In addition to alleviating data privacy or
proprietary concerns, SFDA makes data transmission much
more efficient. For example, a source-trained model (0.1-1.0 GB)
is usually much smaller than a source dataset (10-100 GB).
If one is adapting a model from a large-scale
cloud center to a new edge device that has data with different
domains, the source-trained model is far more portable and
transmission-efficient than the source dataset.
Under SFDA, label supervision is not available. Most
SFDA studies adopt pseudo-supervision or self-supervision
techniques to adapt the source-trained model to the target
domain [16, 39]. However, they consider only image-level
information for model adaptation. In many real-world seman-
tic segmentation applications (autonomous driving, safety
surveillance, etc.), we have to deal with temporal data such as
streams of images or videos. Supervised approaches that use
temporal information have been successful for Video Seman-
tic Segmentation (VSS), which predicts pixel-level semantics
for each video frame [19, 22, 28, 46]. Recently, video-based
UDA strategies have also been developed and yielded better
performance than image-based UDA on VSS [12, 38, 49].
This motivates us to propose a novel SFDA method for
VSS, leveraging temporal information to tackle the absence
of source data better. In particular, we find that current
image-based SFDA approaches suffer from sub-optimal per-
formance when applied to VSS (see Figure 1). To the best of
our knowledge, this is the first work to explore video-based
SFDA solutions.
In this paper, we propose a novel spatio-temporal SFDA
method namely Spatio-Temporal Pixel-Level (STPL) Con-
trastive Learning (CL), which takes full advantage of both
spatial and temporal information for adapting VSS mod-
els. STPL consists of two main stages. (1) Spatio-temporal
feature extraction: First, given a target video sequence in-
put, STPL fuses the RGB and optical flow modalities to
extract spatio-temporal features from the video. Meanwhile,
it performs cross-frame augmentation via randomized spatial
transformations to generate an augmented video sequence,
then extracts augmented spatio-temporal features. (2) Pixel-
level contrastive learning: Next, STPL optimizes a pixel-
level contrastive loss between the original and augmented
spatio-temporal feature representations. This objective en-
forces representations to be compact for same-class pixels
across both the spatial and temporal dimensions.
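A minimal sketch of a pixel-level InfoNCE objective between an original feature map and its augmented counterpart is given below. Treating only the spatially corresponding pixel as the positive is a simplifying assumption made here for illustration; STPL instead builds same-class positives across the spatio-temporal dimensions.

```python
# Minimal sketch of a pixel-level InfoNCE loss between a feature map and its
# augmented counterpart; positives are same-location pixels (an assumption).
import torch
import torch.nn.functional as F

def pixel_info_nce(feat, feat_aug, tau=0.1):
    """feat, feat_aug: (B, C, H, W) pixel embeddings of a frame and its augmentation."""
    b, c, h, w = feat.shape
    z1 = F.normalize(feat.flatten(2).transpose(1, 2), dim=-1)      # (B, HW, C)
    z2 = F.normalize(feat_aug.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    logits = torch.bmm(z1, z2.transpose(1, 2)) / tau               # (B, HW, HW)
    target = torch.arange(h * w, device=feat.device).expand(b, -1) # positive = same pixel
    return F.cross_entropy(logits.reshape(-1, h * w), target.reshape(-1))

loss = pixel_info_nce(torch.randn(2, 16, 8, 8), torch.randn(2, 16, 8, 8))
```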
With these designs, STPL explicitly learns semantic corre-
lations among pixels in the spatio-temporal space, providing
strong self-supervision for adaptation to an unlabeled tar-
get domain. Furthermore, we demonstrate that STPL is a
non-trivial unified spatio-temporal framework. Specifically,
Spatial-only CL and Temporal-only CL are special cases of
STPL, and STPL is better than a naïve combination of them.
Extensive experiments demonstrate the superiority of STPL
over various baselines, including the image-based SFDA as
well as image- and video-based UDA approaches that rely
on source data (see Figure 1). The key contributions of this
work are summarized as follows:
•We propose a novel SFDA method for VSS. To the best of
our knowledge, this is the first work to explore video-based
SFDA solutions.
•We propose a novel CL method, namely STPL, which
explicitly learns semantic correlations among pixels in the
spatio-temporal space, providing strong self-supervision
for adaptation to an unlabeled target domain.
•We conduct extensive experiments and show that STPL
provides a better solution compared to the existing image-
based SFDA methods as well as image- and video-based
UDA methods for the given problem formulation.
|
Mei_Deep_Polarization_Reconstruction_With_PDAVIS_Events_CVPR_2023 | Abstract
The polarization event camera PDAVIS is a novel bio-
inspired neuromorphic vision sensor that reports both con-
ventional polarization frames and asynchronous, continu-
ously per-pixel polarization brightness changes (polariza-
tion events) with fast temporal resolution andlarge dy-
namic range . A deep neural network method (Polariza-
tion FireNet) was previously developed to reconstruct the
polarization angle and degree from polarization events for
bridging the gap between the polarization event camera
and mainstream computer vision. However, Polarization
FireNet applies a network pre-trained for normal event-
based frame reconstruction independently on each of four
channels of polarization events from four linear polariza-
tion angles, which ignores the correlations between chan-
nels and inevitably introduces content inconsistency be-
tween the four reconstructed frames, resulting in unsatisfac-
tory polarization reconstruction performance. In this work,
we strive to train an effective, yet efficient, DNN model thatdirectly outputs polarization from the input raw polariza-
tion events. To this end, we constructed the first large-
scale event-to-polarization dataset, which we subsequently
employed to train our events-to-polarization network E2P .
E2P extracts rich polarization patterns from input polariza-
tion events and enhances features through cross-modality
context integration. We demonstrate that E2P outperforms
Polarization FireNet by a significant margin with no addi-
tional computing cost. Experimental results also show that
E2P produces more accurate measurement of polarization
than the PDAVIS frames in challenging fast and high dy-
namic range scenes. Code and data are publicly available
at:https://github.com/SensorsINI/e2p .
| 1. Introduction
Visual information is encoded in light by intensity, color,
and polarization [12]. Polarization is a property of trans-
verse light waves that specifies the geometric orientation of
the oscillations (which can be described by the Angle of
Linear Polarization (AoLP) and the Degree of Linear Polar-
ization (DoLP)), providing strong vision cues and enabling
solutions to challenging problems in medical [27], under-
water [34], and remote sensing [53] applications. Existing
polarization digital cameras capture synchronous polariza-
tion frames with a linear photo response [14], while biologi-
cal eyes tend to perceive asynchronous and sparse data with
a compressed non-linear response [12].
Inspired by the mantis shrimp visual system [29], the
novel neuromorphic vision sensor called Polarization Dy-
namic and Active pixel VIsion Sensor (PDAVIS) illustrated
in Figure 1 was developed to concurrently record a high-
frequency stream of asynchronous polarization brightness
change events under four polarization angles (i.e., 0°, 45°,
90°, and 135°) over a wide range of illumination. PDAVIS
also outputs low-frequency synchronous frames like con-
ventional polarization cameras [15].
Even though the stream of polarization events has ad-
vantages of low latency and HDR, it is not friendly to hu-
man observation and traditional computer vision due to the
sparse, irregular, and unstructured properties. To better ex-
ploit the advantages of PDAVIS, an intuitive solution is to
reconstruct polarization from polarization events, which can
bridge off-the-shelf frame-based algorithms and PDAVIS.
Gruev et al. [15] proposed the Polarization FireNet, which
first runs the FireNet [41] pre-trained for normal event-
based intensity frame reconstruction on each of four types
of polarization events under four different polarization an-
gles, and then computes the polarization from four recon-
structed intensity frames via mathematical formulas. Since
this method treats four polarization angle channels indepen-
dently, the correlation between channels is ignored and in-
consistency between the four reconstructed frames hinders
accurate measurement of polarization.
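For reference, the standard Stokes-parameter computation that maps the four linear-polarization intensities to DoLP and AoLP is shown below; this is the textbook formula the frame-based route relies on, not the learned E2P network proposed here.

```python
# Standard Stokes-parameter computation from the four polarization-angle
# intensities (0, 45, 90, 135 degrees) to DoLP and AoLP.
import numpy as np

def polarization_from_angles(i0, i45, i90, i135, eps=1e-8):
    s0 = 0.5 * (i0 + i45 + i90 + i135)       # total intensity
    s1 = i0 - i90                            # horizontal vs. vertical component
    s2 = i45 - i135                          # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)          # in radians
    return dolp, aolp
```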
In this work, we make the first attempt to train an
accurate yet efficient DNN model tailored for event-to-
polarization reconstruction. We approach this twofold.
First, we construct the first large-scale event-to-polarization
synthetic-real mixed dataset, dubbed Events to Polariza-
tion Dataset ( E2PD ), which contains 5 billion polarization
events and corresponding 133 thousand polarization video
frames. The diversity and practicality of E2PD are ensured
by including diverse real-world road scenes under differ-
ent weather conditions (rainy and sunny) in different cities.
Second, we design an E2P network that consists of three
branches to reconstruct intensity, AoLP, and DoLP, respec-
tively, from the raw polarization events directly. E2P is built
on two key modules: (i) a Rich Polarization Pattern Per-
ception ( RPPP ) module that effectively harvests features
from raw polarization events and (ii) a Cross-Modality At-
tention Enhancement ( CMAE ) module that explores cross-
modality contextual cues for feature enhancement.
We perform extensive validation experiments to demon-
strate the efficacy of our method and show that the network
trained on our E2PD is more accurate than all previously
reported PDAVIS methods, and produces more accurate po-
larization compared with polarization computed from the
PDAVIS frames in challenging scenes (e.g., Figure 1). In
summary, our contributions are:
1. the first attempt to solve the event-to-polarization
problem using an end-to-end trained deep neural net-
work with polarization events as input, intensity, AoLP
and DoLP as outputs;
2. a new and unique large-scale event-to-polarization
dataset containing both synthetic and real data; and
3. a novel network that perceives rich polarization pat-
terns from raw polarization events and enhances fea-
tures via a cross-modality attention mechanism.
|
Ma_CAT_LoCalization_and_IdentificAtion_Cascade_Detection_Transformer_for_Open-World_Object_CVPR_2023 | Abstract
Open-world object detection (OWOD), as a more gen-
eral and challenging goal, requires the model trained from
data on known objects to detect both known and unknown
objects and incrementally learn to identify these unknown
objects. The existing works which employ standard de-
tection framework and fixed pseudo-labelling mechanism
(PLM) have the following problems: (i) The inclusion of de-
tecting unknown objects substantially reduces the model's
ability to detect known ones. (ii) The PLM does not ad-
equately utilize the prior knowledge of inputs. (iii) The
fixed selection manner of PLM cannot guarantee that the
model is trained in the right direction. We observe that hu-
mans subconsciously prefer to focus on all foreground ob-
jects and then identify each one in detail, rather than lo-
calize and identify a single object simultaneously, for al-
leviating the confusion. This motivates us to propose a
novel solution called CAT: LoCalization and IdentificAtion
Cascade Detection Transformer, which decouples the detec-
tion process via the shared decoder in the cascade decod-
ing way. Meanwhile, we propose the self-adaptive
pseudo-labelling mechanism which combines the model-
driven with input-driven PLM and self-adaptively generates
robust pseudo-labels for unknown objects, significantly im-
proving the ability of CAT to retrieve unknown objects. Ex-
periments on two benchmarks, i.e., MS-COCO and PAS-
CAL VOC, show that our model outperforms the state-of-
the-art methods. The code is publicly available at https:
//github.com/xiaomabufei/CAT .
| 1. Introduction
Open-world object detection (OWOD) is a more prac-
tical detection problem in computer vision, making artifi-
*Equal contribution.
†Corresponding author.
Figure 1. When faced with new scenes in open world, humans sub-
consciously focus on all foreground objects and then identify them
in detail in order to alleviate the confusion between the known and
unknown objects and get a clear view. Motivated by this, our CAT
utilizes the shared decoder to decouple the localization and iden-
tification process in the cascade decoding way, where the former
decoding process is used for localization and the latter for identi-
fication.
cial intelligence (AI) smarter to face more difficulties in real
scenes. Within the OWOD paradigm, the model’s life-span
is driven by an iterative learning process. At each episode, the
model trained only by known objects needs to detect known
objects while simultaneously localizing unknown objects
and identifying them into the unknown class. Human an-
notators then label a few of these tagged unknown classes
of interest gradually. The model given these newly-added
annotations will continue to incrementally update its knowl-
edge without retraining from scratch.
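The cascade decoding idea sketched in Figure 1 can be outlined schematically as running the same decoder weights twice, a first pass for localization and a second for identification. The module choices, heads, and shapes below are illustrative assumptions for exposition, not the actual CAT architecture.

```python
# Schematic sketch of decoupled cascade decoding with a shared decoder:
# pass 1 localizes boxes, pass 2 identifies them (including "unknown").
import torch
import torch.nn as nn

class CascadeHead(nn.Module):
    def __init__(self, d=256, n_classes=81):
        super().__init__()
        self.shared_decoder = nn.TransformerDecoderLayer(d, nhead=8, batch_first=True)
        self.box_head = nn.Linear(d, 4)          # pass 1: where are the objects?
        self.cls_head = nn.Linear(d, n_classes)  # pass 2: what are they?

    def forward(self, queries, memory):
        loc = self.shared_decoder(queries, memory)   # localization decoding
        boxes = self.box_head(loc).sigmoid()
        ident = self.shared_decoder(loc, memory)     # identification decoding (shared weights)
        return boxes, self.cls_head(ident)

boxes, logits = CascadeHead()(torch.randn(2, 100, 256), torch.randn(2, 300, 256))
```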
Recently, Joseph et al. [21] proposed an open-world ob-
ject detector, ORE, based on the two-stage Faster R-CNN
[38] pipeline. ORE utilized an auto-labelling step to obtain
pseudo-unknowns for training the model to detect unknown ob-
jects and learned an energy-based binary classifier to dis-
tinguish the unknown class from known classes. However,
its success largely relied on a held-out validation set which
was leveraged to estimate the distribution of unknown ob-
jects in the energy-based classifier. Then, several methods
[29, 43–45] attempted to extend ORE and achieved some
success. To alleviate the problems in ORE, Gupta et al. [17]
proposed to use the detection transformer [4,46] for OWOD
in a justifiable way and directly leveraged the framework
of DDETR [46]. In addition, they proposed an attention-
driven PLM which selected pseudo labels for unknown ob-
jects according to the attention scores.
For the existing works, we find the following hindering
problems. (i) Owing to the inclusion of detecting unknown
objects, the model’s ability to detect known objects substan-
tially drops. To alleviate the confusion between known and
unknown objects, humans prefer to dismantle the process of
open-world object detection rather than parallelly localize
and identify open-world objects like most standard detec-
tion models. (ii) To the best of our knowledge, in the exist-
ing OWOD PLM, models leverage the learning process for
known objects to guide the generation of pseudo l |
Lu_Neuron_Structure_Modeling_for_Generalizable_Remote_Physiological_Measurement_CVPR_2023 | Abstract
Remote photoplethysmography (rPPG) technology has
drawn increasing attention in recent years. It can extract
Blood Volume Pulse (BVP) from facial videos, making many
applications like health monitoring and emotional analysis
more accessible. However, as the BVP signal is easily af-
fected by environmental changes, existing methods struggle
to generalize well for unseen domains. In this paper, we sys-
tematically address the domain shift problem in the rPPG
measurement task. We show that most domain generaliza-
tion methods do not work well in this problem, as domain la-
bels are ambiguous in complicated environmental changes.
In light of this, we propose a domain-label-free approach
called NEuron STructure modeling (NEST). NEST improves
the generalization capacity by maximizing the coverage of
feature space during training, which reduces the chance for
under-optimized feature activation during inference. Be-
sides, NEST can also enrich and enhance domain invari-
ant features across multiple domains. We create and bench-
mark a large-scale domain generalization protocol for the
rPPG measurement task. Extensive experiments show that
our approach outperforms the state-of-the-art methods on
both cross-dataset and intra-dataset settings. The codes are
available at https://github.com/LuPaoPao/NEST.
| 1. Introduction
Physiological signals such as heart rate (HR), and heart
rate variability (HRV), respiration frequency (RF) are im-
portant body indicators that serve not only as vital signs but
also track the level of sympathetic activation [17, 33, 54].
Traditional physiological measurements, such as electrocar-
diograms, heart rate bands, and finger clip devices, have
high accuracy. However, they are costly, intrusive, and un-
comfortable to wear for a long time. Remote photoplethys-
mography (rPPG) can extract blood volume pulse (BVP)
*Corresponding author.
Figure 1. (a) Typical samples from different public rPPG datasets: VIPL-HR [36], V4V [47], UBFC-rPPG [1], BUAA [68], PURE [52]. (b) The performance of different methods on the DG protocol (test on the UBFC-rPPG dataset with training on the VIPL, V4V, PURE, and BUAA).
from face video, which analyzes the periodic changes of the
light absorption of the skin caused by heartbeats. Then var-
ious physiological indicators (such as HR and HRV) can be
calculated based on BVP signals [37, 69, 70]. Being non-
intrusive and convenient, rPPG-based physiological
measurement can use only an ordinary camera to
monitor physiological indicators and has gradually become a re-
search hotspot in the computer vision field [7,28,32,36,44,
49].
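As a concrete example of the last step mentioned above, heart rate can be read off a recovered BVP signal as the dominant frequency within a typical heart-rate band. The sampling rate and band limits below are common choices, not values taken from this paper.

```python
# Example: estimating heart rate (bpm) from a recovered BVP signal via the
# dominant frequency in a typical heart-rate band (0.7-3.0 Hz, i.e., 42-180 bpm).
import numpy as np

def estimate_hr(bvp, fs=30.0, band=(0.7, 3.0)):
    """bvp: 1-D BVP signal sampled at fs Hz; returns heart rate in beats per minute."""
    bvp = bvp - np.mean(bvp)
    freqs = np.fft.rfftfreq(len(bvp), d=1.0 / fs)
    power = np.abs(np.fft.rfft(bvp)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(power[mask])]

t = np.arange(0, 10, 1 / 30.0)
print(estimate_hr(np.sin(2 * np.pi * 1.2 * t)))   # ~72 bpm
```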
Traditional rPPG measurement methods include signal
blind decomposition [21,34,42] and color space transforma-
tion [9, 59, 63]. These approaches rely on heartbeat-related
statistical information, only applicable in constrained envi-
ronments. In recent years, deep learning (DL) based ap-
proaches [6, 19, 25, 28, 36, 44, 50, 69, 70] have shown their
great potentials in rPPG measurement. By learning dedi-
cated rPPG feature representation, these methods achieve
promising performance in much more complicated environ-
ments [1, 36, 47, 52, 68].
However, deep learning methods suffer from significant
performance degradation when applied in real-world sce-
narios. This is because most training data are captured in lab
environments with limited environmental variations. With
domain shifts (e.g., different illumination, camera param-
eters, motions, etc.), these models may struggle to gener-
alize for the unseen testing domain. To validate this, we
conduct a cross-dataset evaluation shown in Fig. 1(b). As
shown, all DL-based methods do not work well in this eval-
uation. Furthermore, it is worth noting that DeepPhys [6]
and TS-CAN [25] even perform worse than the traditional ap-
proach POS [63].
To improve the performance on the unseen domains, one
common practice is to incorporate Domain Generalization
(DG) approaches, e.g., encouraging intermediate features
to be domain-invariant [12, 18, 30, 38, 57, 62]. However, as
shown in Fig. 1 (b) (AD [12] and GroupDRO [38]), the
improvements are still quite limited. One reason is that
existing DG methods assume that domain labels can be
clearly defined (e.g., by data source). Unfortunately, differ-
ent background scenes, acquisition devices, or even iden-
tities could cause very different data distributions in rPPG
measurement tasks, and such distribution discrepancy may
exist within or cross datasets. In this case, explicitly defin-
ing domains is very difficult, and simply treating one dataset
as one domain may lead to inferior performance. This is
termed as agnostic domain generalization . Besides this, as
physiological cues are usually much more subtle than the
various noise, the model may overfit in the source domain
with the limited training data.
In this paper, we propose the NEural STructure model-
ing (NEST), a principled solution to the abovementioned
problems. The main idea of NEST is to narrow the under-
optimized and redundant feature space, align domain invari-
ant features, and enrich discriminative features. Our intu-
ition is as follows. Neural structure refers to the channel ac-
tivation degree in each convolution layer, which reveals the
discriminative feature combination for the specific sample.
Because of the limited variation within a restricted domain, there
are some feature spaces in which the model is seldom optimized. Out-
of-distribution (OOD) samples may cause abnormal activa-
tion in these spaces, which may lead to performance de-
generation. Therefore, we regularize the neural structure
to encourage the model to be better conditioned and thus
avoid abnormal activation caused by OOD samples. Specif-
ically, we propose the NEural STructure Coverage Maxi-mization (NEST-CM) that encourages all neural spaces to
be optimized during training, reducing the chance of ab-
normal activation during testing. Secondly, we propose the
NEural STructure Targeted Alignment (NEST-TA), which en-
courages the network to suppress domain-variant features by com-
paring samples with similar physiological informa-
tion. Thirdly, we propose the NEural STructure Diversity
Maximization (NEST-DM) to enrich discriminative features
against unseen noise. It should be noted that our approach
does not rely on domain labels, which is more applicable in
the rPPG measurement task. To summarize, our contribu-
tions are listed as follows:
1. We are the first to study the domain shift problem
in rPPG measurement, which introduces a new challenge,
agnostic domain generalization.
2. We propose the NEural STructure modeling to alle-
viate domain shift, which is performed by narrowing the
under-optimized feature space, and enhancing and enrich-
ing domain invariant features.
3. We establish a large-scale domain generalization
(DG) benchmark for rPPG measurement, which is the first
DG protocol in this task. Extensive experiments in this
dataset show the superiority of our approach.
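As a rough illustration of the "neural structure" notion introduced before the contribution list, the sketch below computes a per-channel activation degree for one convolutional layer; the pooling and normalization choices are assumptions for illustration, not the paper's exact definition.

```python
# Rough sketch of the "neural structure": per-channel activation degree of one layer.
import torch

def neural_structure(feature_map, eps=1e-8):
    """feature_map: (B, C, H, W) activations of one layer.
    Returns a (B, C) vector describing how strongly each channel fires."""
    degree = feature_map.relu().mean(dim=(2, 3))                 # spatial average per channel
    return degree / (degree.sum(dim=1, keepdim=True) + eps)      # normalize to a distribution

# Maximizing the coverage of this distribution over training (NEST-CM) and aligning
# it across samples with similar physiological signals (NEST-TA) are the regularizers
# described above.
structure = neural_structure(torch.randn(4, 64, 32, 32))
```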
|
Mai_DualRel_Semi-Supervised_Mitochondria_Segmentation_From_a_Prototype_Perspective_CVPR_2023 | Abstract
Automatic mitochondria segmentation enjoys great pop-
ularity with the development of deep learning. However,
existing methods rely heavily on the labor-intensive manual
gathering by experienced domain experts. And naively ap-
plying semi-supervised segmentation methods in the natural
image field to mitigate the labeling cost is undesirable. In
this work, we analyze the gap between mitochondrial im-
ages and natural images and rethink how to achieve effec-
tive semi-supervised mitochondria segmentation, from the
perspective of reliable prototype-level supervision. We pro-
pose a novel end-to-end dual-reliable (DualRel) network,
including a reliable pixel aggregation module and a reliable
prototype selection module. The proposed DualRel enjoys
several merits. First, to learn the prototypes well without
any explicit supervision, we carefully design the referential
correlation to rectify the direct pairwise correlation. Sec-
ond, the reliable prototype selection module is responsible
for further evaluating the reliability of prototypes in con-
structing prototype-level consistency regularization. Exten-
sive experimental results on three challenging benchmarks
demonstrate that our method performs favorably against
state-of-the-art semi-supervised segmentation methods. Im-
portantly, with extremely few samples used for training, Du-
alRel is also on par with current state-of-the-art fully super-
vised methods.
| 1. Introduction
Mitochondria, as one of the crucial organelles, are the
primary energy providers for cell activities and are essen-
tial for metabolism. Quantification of mitochondrial mor-
phology can not only promote basic scientific research ( e.g.,
cellular physiology [1, 5]), but also provide new insight for
clinical diagnosis ( e.g., neurodegenerative diseases [20] and
diabetes [24]). Recently, with the development of deep
learning, semantic segmentation [2, 14, 18, 27, 30, 33] en-
ables in-depth exploration of mitochondrial morphology
*Equal contribution
†Corresponding author
Figure 1. Illustration of our motivation. (a) shows the confusion
map and density ( i.e., the expected inverse confidence per pixel)
of mitochondrial and natural images. (b) shows the unreliability
caused by direct pairwise prototype-pixel correlation that is condi-
tioned only on visual similarity. (c) shows how to construct pixel-
reference correlation to rectify the direct pairwise correlation in a
referential correlation manner.
from high-resolution electron microscopy (EM) images and
make conspicuous achievements. However, their flexibil-
ity and scalability are limited in actual deployment be-
cause of the numerous cluttered irrelevant organelles that re-
quire labor-intensive manual discrimination and gathering
by experienced domain experts [10, 21]. Therefore, we be-
gin to turn attention to semi-supervised segmentation with
the assumption that enormous unlabeled data is accessible,
aiming to alleviate the data-hungry issue.
Semi-supervised segmentation enjoys great popularity in
the field of natural images, and representative works such as
CPS [3], which imposes pixel-level consistency regulariza-
tion and establishes state-of-the-art performance. It natu-
rally comes to mind to directly apply a CPS-like method
to semi-supervised mitochondria segmentation. However,
there exists a large gap between mitochondrial and natu-
ral images. As shown in Fig. 1 (a), we observe that the
confusion density ( i.e., the expected inverse confidence per
pixel) in mitochondrial images significantly surpasses its
counterpart in natural images, implying that directly employing
pixel-level consistency regularization as supervision signals
on mitochondrial images will inevitably increase the risk
of unreliability. The most intuitive example is that there
exist considerable boundary regions in mitochondrial im-
ages, and the segmentation network is naturally equivocal
for these regions, as proven in [15]. In this case, some rel-
atively small mitochondria are easily overwhelmed by this
ambiguity, leading to sub-optimal results.
In order to seek more reliable supervision signals to al-
leviate the undependable problem caused by pixel-level su-
pervision, we draw inspiration from the inbuilt resistance
to noisy pixels of prototypes and construct more robust
and reliable prototype-level supervision . To achieve this
goal, two issues need to be considered. (1) Unreliable
pixels. Considering the cluttered background caused by
under/overexposure and out-of-focus problems during EM
imaging, the prototype inevitably absorbs unreliable pixels
(i.e., heterogeneous semantic clues) during the interaction
of corresponding pixels with a suitable pattern. We ar-
gue that directly forcing pairwise prototype-pixel correla-
tion is primarily to blame. As shown in Fig. 1 (b), due to
the foreground-background ambiguity, the foreground pro-
totype f1 is erroneously closer to p2 located in the back-
ground than the counterpart point p1 with a similar pattern situ-
ated in the foreground. Therefore, it is highly desirable to
suppress the unreliable pixels caused by the direct pairwise
prototype-pixel correlation that is only conditioned on vi-
sual similarity during prototype learning process. (2) Unre-
liable prototypes. Intuitively, not all prototypes are equiva-
lent for building prototype-level consistency regularization.
For example, for a prototype that focuses on mitochondrial
boundary patterns, the inherent unreliability of the pixels
belonging to these patterns, as discussed above, will also
taint the purity of this prototype with equivocality. There-
fore, the prototype-level supervision signals should be fur-
ther optimized to guarantee that the true reliable prototypes
enjoy higher weights.
To mitigate the above issues, we rethink how to achieve
effective consistency regularization for semi-supervised mi-
tochondria segmentation, from the perspective of reliable
prototype-level supervision. We propose a Dual-Reliable
(DualRel) network including a reliable pixel aggregation
module and a reliable prototype selection module. In the
reliable pixel aggregation module (RPiA), to learn the
prototypes well without any explicit supervision, we care-
fully design the referential correlation to rectify the direct
pairwise correlation, enabling the prototype to absorb counter-
part reliable pixels with the same semantic pattern during
the interaction with the pixels. The main idea is, for each
pixel/prototype, we can obtain the referential correlation
(i.e., a likelihood vector) by comparing this pixel/prototype
with a set of reliable reference points. In essence, the refer-
ential correlation reflects the consensus among reliable ref-erence points with a broader receptive field and thus it en-
codes the relative semantic comparability of the reference
points that can be relied upon, which is from a different per-
spective than the absolute pairwise prototype-pixel correla-
tion. Intuitively, each pair of true prototype-pixel correla-
tion (e.g., the f1-p1 pair in Fig. 1 (b)) derived from the proto-
types and mitochondria images should be not only visually
similar to each other ( i.e., high direct pairwise prototype-
pixel correlation), but also similar to any other reference
point ( i.e., similar referential correlation pair). Moreover,
we assemble referential correlation into the cross-attention
mechanism with the ability to capture long-range dependen-
cies. In this case, the relatively equivocal pixels ( e.g., the
f1-p2 pair in Fig. 1 (c)) will be suppressed while the reliable
ones are highlighted to reduce the correspondence noise. In
the reliable prototype selection module (RPrS), in order
to further evaluate the reliability of prototypes in construct-
ing prototype-level consistency regularization, we draw in-
spiration from bayesian deep learning [12] and devise a
reliability-aware consistency loss to implicitly learn
the reliability of each prototype in a data-driven way. In
this way, the equivocal prototypes will be suppressed while
the reliable ones are highlighted in the supervision signals.
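As a minimal sketch of the referential correlation idea described above for RPiA, the snippet below compares a prototype and a pixel not only directly but also through their correlation vectors against a shared set of reference points. The cosine similarity and the simple product fusion are assumptions for illustration, not the exact module design.

```python
# Minimal sketch of rectifying direct prototype-pixel correlation with the
# agreement of referential correlations against reliable reference points.
import numpy as np

def cosine(a, b):
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-8)
    return a @ b.T

def rectified_correlation(prototypes, pixels, references):
    """prototypes: (P, D), pixels: (N, D), references: (R, D) reliable reference points."""
    direct = cosine(prototypes, pixels)        # (P, N) pairwise visual similarity
    ref_p = cosine(prototypes, references)     # (P, R) referential correlation vectors
    ref_x = cosine(pixels, references)         # (N, R)
    referential = cosine(ref_p, ref_x)         # (P, N) agreement of the two likelihoods
    return direct * referential                # suppress visually-similar-only pairs
```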
In this work, our contributions can be concluded as fol-
lows: (1) To the best of our knowledge, this is the first work
to rethink how to achieve effective consistency regulariza-
tion for semi-supervised mitochondria segmentation, from
the perspective of reliable prototype-level supervision. We
analyze the gap between mitochondrial images and natu-
ral images, hoping our work will provide some insight for
researchers in this field. (2) We propose a dual-reliable
(DualRel) network in a unified framework. Specifically,
we design the reliable pixel aggregation module to rectify
the direct pairwise correlation, the reliable prototype se-
lection module to further evaluate the reliability of proto-
types in constructing prototype-level consistency regular-
ization. (3) Extensive experimental results on three chal-
lenging benchmarks demonstrate that our method performs
favorably against state-of-the-art semi-supervised segmen-
tation methods. Importantly, with extremely few samples
used for training, DualRel is also on par with current state-
of-the-art fully supervised methods.
|
Ma_Symmetric_Shape-Preserving_Autoencoder_for_Unsupervised_Real_Scene_Point_Cloud_Completion_CVPR_2023 | Abstract
Unsupervised completion of real scene objects is of vi-
tal importance but still remains extremely challenging in
preserving input shapes, predicting accurate results, and
adapting to multi-category data. To solve these prob-
lems, we propose in this paper an Unsupervised Symmetric
Shape-Preserving Autoencoding Network, termed USSPA,
to predict complete point clouds of objects from real scenes.
One of our main observations is that many natural and
man-made objects exhibit significant symmetries. To ac-
commodate this, we devise a symmetry learning module to
learn from those objects and to preserve structural sym-
metries. Starting from an initial coarse predictor, our au-
toencoder refines the complete shape with a carefully de-
signed upsampling refinement module. Besides the discrim-
inative process on the latent space, the discriminators of
our USSPA also take predicted point clouds as direct guid-
ance, enabling more detailed shape prediction. Clearly
different from previous methods which train each category
separately, our USSPA can be adapted to the training of
multi-category data in one pass through a classifier-guided
discriminator, with consistent performance on single cate-
gory. For more accurate evaluation, we contribute to the
community a real scene dataset with paired CAD models
as ground truth. Extensive experiments and comparisons
demonstrate our superiority and generalization and show
that our method achieves state-of-the-art performance on
unsupervised completion of real scene objects.
| 1. Introduction
As the standard outputs of 3D scanners [12, 32], point
clouds are becoming more and more popular [9] which
are also the basic data structure in 3D geometry process-
ing [4, 5,13]. Complete point clouds are hard to obtain
due to the nature of the scanning process and object oc-
clusion [35]. Because incomplete point clouds degrade
downstream applications such as reconstruction [10], re-
cent works [17, 19,22,23,26,30,33,35] pay more attention
*Corresponding author.
Figure 1. Visual comparison of predicted results on real scene
data by our USSPA and other works (top) and our complete result
on a whole real scene (bottom). (a) shows an example of a real
scene partial point cloud of a chair and the complete predictions
by Disp3D [23], ShapeInv [36], Unpaired [32] and our method. As
shown, our prediction result is more accurate and uniform accord-
ing to the input, which contains complete arms and legs. (b) and
(c) show the original point cloud of a real scene and the complete
results of all the objects in this scene.
to point cloud completion which relies on paired artificial
complete point clouds for supervised training to complete
partial point clouds. However, these supervised works are
difficult to apply in practice because of the great gap be-
tween artificial data and real scene data and the inaccessi-
bility to the ground truth of real scene data. Therefore, it is
important to complete partial point clouds from real scene
in an unsupervised way.
Recent unsupervised works [24, 32, 36] only require real
scene partial point clouds and artificial CAD models for un-
paired completion utilizing GANs [8] as their fundamental
frameworks, most of which need pre-training on artificial
data. The main ideas are to transform latent codes from
the space of real scene partial data to the space of artificial
complete data and then employ the decoder trained on arti-
ficial data to predict the complete point cloud. Essentially,
these methods produce predictions whose distributions are consistent with the artificial models. Most of them, however, just extract a global feature from the partial input without fully exploiting its geometry information, leading to predictions that deviate severely from the input. Such in-
formation actually provides vital clues and constraints for
completion. Furthermore, prediction results by these meth-
ods usually lack enough geometric details due to the ab-
sence of an explicit discriminative process on point clouds.
These domain transforming methods are also hard to adapt
to multi-category data or other datasets.
In this paper, we present an unsupervised symmetric
shape-preserving autoencoding network, termed USSPA,
for the completion of real scene objects, as shown in Figure
2, which is a GAN-based end-to-end network without the
requirement of pre-trained weights. Different from previ-
ous domain transforming methods which cannot fully lever-
age existing incomplete models, we argue that the exist-
ing partial scanning, which also provides vital clues and
constraints for the prediction of the missing part, should
be preserved to some extent. To this end, we exploit the
symmetries shown in many natural or man-made objects
and devise a novel symmetry learning module to generate
symmetrical point clouds of existing parts by predicting the
symmetric planes. This enables our network to preserve
the shapes of input symmetrically, intrinsically facilitating
structure completion, as shown in Figure 1. For those parts
that can not be directly inferred from inputs, we employ an
initial coarse module for an initial prediction first. Start-
ing from the initial guess, we specifically design a refine-
ment autoencoder with an upsampling refinement module
for detailed refinement and the local feature grouping for
extracting local information, to learn detailed structures of
artificial data through the autoencoding process. Benefit-
ing from this, our final prediction is accurate, uniform, and
symmetric shape-preserving. Besides the indirect guidance
of the feature discriminator on latent space, our point dis-
criminator takes predicted point clouds as direct guidance
for generating more accurate shapes. Compared with pre-
vious methods which train each category separately, our
method can classify the partial point clouds simultaneously
through a classifier-guided discriminator when adapted to
multi-category data, with consistent performance on the sin-
gle category.
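As a concrete illustration of the symmetry learning idea described above, the sketch below mirrors a partial point cloud across a predicted symmetry plane; the plane parameters would come from USSPA's learned predictor, and the function names here are illustrative rather than the paper's actual interface.

```python
import numpy as np

def reflect_across_plane(points, normal, offset):
    """Mirror a partial point cloud across the plane n.x + d = 0.

    points: (N, 3) observed points; normal: (3,) predicted plane normal;
    offset: predicted scalar d. Returns the mirrored points.
    """
    n = np.asarray(normal, dtype=np.float64)
    scale = np.linalg.norm(n)
    n, d = n / scale, offset / scale            # normalize so the reflection formula is exact
    signed_dist = points @ n + d                # signed distance of each point to the plane
    return points - 2.0 * signed_dist[:, None] * n

# Toy usage: mirror a random partial cloud across the x = 0 plane and
# concatenate it with the input, as a symmetry-preserving augmentation.
partial = np.random.rand(128, 3)
mirrored = reflect_across_plane(partial, normal=[1.0, 0.0, 0.0], offset=0.0)
augmented = np.concatenate([partial, mirrored], axis=0)
```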
To measure the performance of unsupervised comple-
tion quantitatively, we build a dataset from ScanNet [5] and ShapeNet [2] utilizing the annotations of Scan2CAD [1].
Our dataset contains real scene partial point clouds and
paired ground truths that are only used for evaluation in
our experiments. Extensive comparisons against previous
works on this dataset and the public PCN Dataset [35] show
the superiority and generalization of our method which
achieves state-of-the-art performance on unsupervised com-
pletion of real scene objects.
Our main contributions are as follows.
• We propose a novel USSPA for unsupervised real
scene point cloud completion whose prediction is
accurate, uniform and symmetric shape-preserving.
Clearly different from previous works training each
category separately, our USSPA can be adapted to the
training of multi-category data in one pass by classify-
ing the input simultaneously.
• We propose a novel symmetry learning module and a
novel refinement autoencoder. The symmetry learn-
ing module preserves input shapes by generating sym-
metrical point clouds, and the refinement autoencoder
learns the detailed information from artificial data to
refine the initial guess by an autoencoding process.
• We propose a new evaluation method for obtaining
paired ground truths and partial data from artificial and
real scene datasets using alignment information, which
can be used to more accurately evaluate unsupervised
completion of real scene objects.
|
Pan_Deep_Discriminative_Spatial_and_Temporal_Network_for_Efficient_Video_Deblurring_CVPR_2023 | Abstract
How to effectively explore spatial and temporal informa-
tion is important for video deblurring. In contrast to exist-
ing methods that directly align adjacent frames without dis-
crimination, we develop a deep discriminative spatial and
temporal network to facilitate the spatial and temporal fea-
ture exploration for better video deblurring. We first de-
velop a channel-wise gated dynamic network to adaptively
explore the spatial information. As adjacent frames usually
contain different contents, directly stacking features of ad-
jacent frames without discrimination may affect the latent clear frame restoration. Therefore, we develop a simple
yet effective discriminative temporal feature fusion module
to obtain useful temporal features for latent frame restora-
tion. Moreover, to utilize the information from long-range
frames, we develop a wavelet-based feature propagation
method that takes the discriminative temporal feature fu-
sion module as the basic unit to effectively propagate main
structures from long-range frames for better video deblur-
ring. We show that the proposed method does not require
additional alignment methods and performs favorably a-
gainst state-of-the-art ones on benchmark datasets in terms
of accuracy and model complexity.
| 1. Introduction
With the rapid development of hand-held video captur-
ing devices in our daily life, capturing high-quality clear
videos becomes more and more important. However, due to
the moving objects, camera shake, and depth variation dur-
ing the exposure time, the captured videos usually contain
significant blur effects. Thus, there is a great need to restore
clear videos from blurred ones so that they can be pleasantly
viewed on display devices and facilitate the following video
understanding problems.
Different from single image deblurring that explores spa-
Co-first authorship
†Corresponding author
Figure 1. Floating point operations (FLOPs) vs. video deblurring performance on the GoPro dataset [24]. Our model achieves favorable results in terms of accuracy and FLOPs.
tial information for blur removal, video deblurring is more
challenging as it needs to model both spatial and tempo-
ral information. Conventional methods usually use optical
flow [2,9,14,36] to model the blur in videos and then joint-
ly estimate optical flow and latent frames under the con-
straints by some assumed priors. As pointed out by [26],
these methods usually lead to complex optimization prob-
lems that are difficult to solve. In addition, improper priors
will significantly affect the quality of restored videos.
Instead of using assumed priors, many methods develop various deep convolutional neural networks (CNNs) to explore spatial and temporal information for video deblurring.
Several approaches stack adjacent frames as the input of C-
NN models [29] or employ spatial and temporal 3D convo-
lution [40] for latent frame restoration. Gast et al. [10] show
that using proper alignment strategies in deep CNNs would
improve deblurring performance. To this end, several meth-
ods introduce alignment modules in deep neural networks.
The commonly used alignment modules for video deblur-
ring mainly include optical flow [26], deformable convolu-
tion [32], and so on. However, estimating alignment infor-
mation from blurred adjacent frames is not a trivial task due
to the influence of motion blur. In addition, using align-
ment modules usually leads to large deep CNN models that
are difficult to train and computationally expensive. For ex-
ample, the CDVDTSP method [26] with optical flow as the
alignment module has 16.2 million parameters with FLOP-
s of 357.79G while the EDVR method [32] using the de-
formable convolution as the alignment module has 23.6 mil-
lion parameters with FLOPs of 2298.97G. Therefore, it is
of great interest to develop a lightweight deep CNN model
with lower computational costs to overcome the limitation-
s of existing alignment methods in video deblurring while
achieving better performance.
Note that most existing methods restore each clear frame
based on limited local frames, where the temporal infor-
mation from non-local frames is not fully explored. To
overcome this problem, several methods employ recurrent
neural networks to better model temporal information for
video deblurring [15]. However, these methods have limit-
ed capacity to transfer the useful information temporally for
latent frame restoration as demonstrated in [41]. To rem-
edy this limitation, several methods recurrently propagate
information of non-local frames with some proper atten-
tion mechanisms [41]. However, if the features of non-local
frames are not estimated correctly, the errors will accumu-
late in the recurrent propagation process, which thus affects
video deblurring. As the temporal information exploration
is critical for video deblurring, it is a great need to devel-
op an effective propagation method that can discriminative-
ly propagate useful information from non-local frames for
better video restoration.
In this paper, we develop an effective deep discriminative
spatial and temporal network (DSTNet) to distinctively ex-
plore useful spatial and temporal information from videos
for video deblurring. Motivated by the success of multi-
layer perceptron (MLP) models that are able to model glob-
al contexts, we first develop a channel-wise gated dynamic
network to effectively explore spatial information. In addi-
tion, to exploit the temporal information, instead of directly
stacking estimated features from adjacent frames without
discrimination, we develop a simple yet effective discrim-
inative temporal feature fusion module to fuse the features
generated by the channel-wise gated dynamic network so
that more useful temporal features can be adaptively ex-
plored for video deblurring.
However, the proposed discriminative temporal feature
fusion module does not utilize the information from long-
range frames. Directly repeating this strategy in a recur-
rent manner is computationally expensive and may propa-
gate and accumulate the estimation errors of features from
long-range frames, leading to adverse effects on the final
video deblurring. To solve this problem, we develop a
wavelet-based feature propagation method that effective-
ly propagates main structures from long-range frames for
better video deblurring. Furthermore, the deep discrimina-
tive spatial and temporal network does not require additional alignment modules (e.g., optical flow used in [26], de-
formable convolution used in [32]) and is thus efficient yet
effective for video deblurring as shown in Figure 1.
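To illustrate the intuition behind WaveletFP, the sketch below decomposes per-frame features with a single-level Haar transform and carries only the low-frequency subband (a stand-in for the "main structures" mentioned above) from frame to frame. The Haar implementation, the gated fusion, and the re-injection step are all assumptions for illustration, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def haar_dwt(x):
    """Single-level 2D Haar transform of features x: (B, C, H, W) -> low- and high-frequency bands."""
    a, b = x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2]
    c, d = x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2]
    low = (a + b + c + d) / 2
    highs = ((a - b + c - d) / 2, (a + b - c - d) / 2, (a - b - c + d) / 2)
    return low, highs

def propagate_structure(frame_feats, fuse):
    """frame_feats: list of (B, C, H, W) features; fuse: small learned module mapping 2C -> C channels."""
    carried, outputs = None, []
    for f in frame_feats:
        low, _ = haar_dwt(f)
        if carried is not None:
            low = fuse(torch.cat([low, carried], dim=1))       # blend current and propagated structure
        carried = low                                          # only the low-frequency band is carried on
        up = F.interpolate(low, scale_factor=2, mode="bilinear", align_corners=False)
        outputs.append(f + up)                                 # re-inject the propagated structure
    return outputs
```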
The main contributions are summarized as follows:
We propose a channel-wise gated dynamic network
(CWGDN) based on multi-layer perceptron (MLP)
models to explore the spatial information. A detailed
analysis demonstrates that the proposed CWGDN is
more effective for video deblurring.
We develop a simple yet effective discriminative tem-
poral feature fusion (DTFF) module to explore useful
temporal features for clear frame reconstruction.
We develop a wavelet-based feature propagation
(WaveletFP) method to efficiently propagate useful
structures from long-range frames and avoid error ac-
cumulation for better video deblurring.
We formulate the proposed network in an end-to-end
trainable framework and show that it performs favor-
ably against state-of-the-art methods in terms of accu-
racy and model complexity.
|
Qiao_End-to-End_Vectorized_HD-Map_Construction_With_Piecewise_Bezier_Curve_CVPR_2023 | Abstract
Vectorized high-definition map (HD-map) construction,
which focuses on the perception of centimeter-level environ-
mental information, has attracted significant research inter-
est in the autonomous driving community. Most existing ap-
proaches first obtain rasterized map with the segmentation-
based pipeline and then conduct heavy post-processing for
downstream-friendly vectorization. In this paper, by delving
into parameterization-based methods, we pioneer a concise
and elegant scheme that adopts a unified piecewise Bézier curve. In order to vectorize changeful map elements end-to-end, we elaborate a simple yet effective architecture, named Piecewise Bézier HD-map Network (BeMapNet), which is formulated as a direct set prediction paradigm and
postprocessing-free. Concretely, we first introduce a novel
IPM-PE Align module to inject 3D geometry prior into BEV
features through common position encoding in Transformer.
Then a well-designed Piecewise Bézier Head is proposed to output the details of each map element, including the coordinates of control points and the segment number of curves. In addition, based on the progressive restoration of the Bézier curve, we also present an efficient Point-Curve-Region Loss for supervising more robust and precise HD-map modeling. Extensive comparisons show that our method is remarkably superior to other existing SOTAs by at least 18.0 mAP1.
| 1. Introduction
As one of the most fundamental components in the auto-
driving system, high-definition map contains centimeter de-
tails of traffic elements, vectorized topology and navigation
information, which instruct ego-vehicle to accurately locate
itself on the road and understand what is coming up ahead.
At present, conventional SLAM-based solutions [45, 46, 60]
have been widely adopted in practice. Yet, due to dilemmas
of high annotation costs and untimely updates, the offline
approach is gradually being replaced by the learning-based
online HD-map construction with onboard sensors.
*Corresponding author .
1https://github.com/er-muyue/BeMapNet
Figure 1. Illustration of our motivation for piecewise Bézier curves, termed ⟨k, n⟩, where k is the piece number and n is the degree. Fig. (a) is a real HD-map case from NuScenes. Fig. (b) compares vanilla and piecewise Bézier curves on the same map element, where light purple shows the curve restored by the Bézier process; the piecewise form is more efficient, reducing the number of control points by 64% in this case. Fig. (c) illustrates that piecewise Bézier curves can model arbitrarily shaped curves. Note that the blue circles denote actual control points.
The deep-based paradigm of online HD-map building is
gradually developing, but it still faces two main challenges:
1) Modeling instance-level vectorized HD-map end-to-end.
Most existing works construct HD-map by rasterizing BEV
(bird-eye-view) maps into semantic pixels with segmenta-
tion [24,42], which not only lacks the modeling of instance-
level details, but also requires heavy post-processing to ob-
tain vectorized information. As a sub-task, lane detection
makes a relatively better advance for this issue, that is, in ad-
dition to segmentation-based methods [39,41,62], there are
also point-based [25,47] and curve-based [12,28] schemes.
However, compared to the simple lane scenario, HD-map
contains more shape-changeful elements, so such methods
cannot be directly adopted into the HD-map construction.
2) Performing 2D-3D perspective transformation efficiently. Obtaining 3D-BEV perception from multi-view 2D images is an essential step for building the HD-map, which is mainly approached in three ways, i.e., geometric priors [44], learnable parameters [40, 42], and a combination of the two [17, 43]. Note that the assumptions of geometry-based methods often do
not conform to the actual situation, making such schemes less adaptable, while learning-based methods require a large amount of labeled data to generalize across various scenarios. Combining the above two branches not only offers multi-scenario scalability but also reduces the demand for annotated data, and has therefore attracted increasing research interest.
To the best of our knowledge, curve-parameterized construction of the HD-map in the BEV space remains unexplored. Based on the widely used Bézier curve, which is mathematically defined by a set of control points, we pioneer a concise and elegant HD-map scheme that adopts a piecewise Bézier curve: each map curve is divided into k segments, and each segment is represented by a vanilla Bézier curve of degree n, hence the notation ⟨k, n⟩. Although ⟨1, n⟩ is in theory enough to express any map element with a sufficiently large n, more complex curves tend to require higher degrees, meaning that more control points need to be modeled, as shown in Fig. 1. The proposed piecewise strategy allows us to parameterize a curve more compactly with fewer control points and higher capacity, which is extremely scalable and robust in practice.
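To make the ⟨k, n⟩ parameterization concrete, the sketch below evaluates a piecewise Bézier curve from its control points with the Bernstein basis. It only illustrates the curve model itself; how BeMapNet predicts the control points and the piece number is described next, and the C0 endpoint-sharing assumed here is an illustrative choice.

```python
import numpy as np
from math import comb

def bezier_segment(ctrl, t):
    """One degree-n Bezier segment. ctrl: (n+1, 2) control points, t: (T,) samples in [0, 1]."""
    n = ctrl.shape[0] - 1
    basis = np.stack([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)], axis=1)
    return basis @ ctrl                     # (T, 2) curve points

def piecewise_bezier(ctrl_pts, k, n, samples_per_piece=20):
    """ctrl_pts: (k*n + 1, 2) control points, with adjacent segments sharing their joint point."""
    t = np.linspace(0.0, 1.0, samples_per_piece)
    pieces = [bezier_segment(ctrl_pts[i * n : i * n + n + 1], t) for i in range(k)]
    return np.concatenate(pieces, axis=0)

# A <2, 3> map element: two cubic pieces described by 2*3 + 1 = 7 control points.
curve = piecewise_bezier(np.random.rand(7, 2), k=2, n=3)
```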
Inspired by the above motivations, we propose an end-to-end vectorized HD-map construction architecture, named Piecewise Bézier HD-map Network (BeMapNet). The overall framework is illustrated in detail in Fig. 2, which streamlines the architecture into four primary modules with gradually enriched information, i.e., a feature extractor shared among multi-view images, a semantic BEV decoder for 2D-3D perspective elevation, an instance Bézier decoder for curve-level descriptors, and a piecewise Bézier head for point-level parameterization. To be concrete, we first introduce a novel IPM-PE Align module into Transformer-based decoders, which injects IPM (inverse perspective mapping) geometric priors into BEV features via PE (position encoding) and hardly adds any parameters except an FC layer. Secondly, we design a Piecewise Bézier Head for dynamic curve modeling with two branches, classification and regression, where the former classifies the number of pieces to determine the curve length and the latter regresses the coordinates of control points to determine the curve shape. Lastly, we present a Point-Curve-Region Loss for robust curve modeling by supervising restoration information in a progressive manner. Since it is modeled as a sparse set prediction task and optimized with a bipartite matching loss, our method is postprocessing-free and high-performance.
The main contributions of our approach are threefold:
• We pioneer BeMapNet for concise and elegant modeling of the HD-map with a unified piecewise Bézier curve.
• We elaborate the overall end-to-end architecture, innovatively introducing the IPM-PE Align Module, Piecewise Bézier Output Head, and well-designed PCR-Loss.
• BeMapNet is remarkably superior to SOTAs on existing benchmarks, revealing the effectiveness of our approach. |
Peng_Hierarchical_Dense_Correlation_Distillation_for_Few-Shot_Segmentation_CVPR_2023 | Abstract
Few-shot semantic segmentation (FSS) aims to form
class-agnostic models segmenting unseen classes with only
a handful of annotations. Previous methods limited to the
semantic feature and prototype representation suffer from
coarse segmentation granularity and train-set overfitting.
In this work, we design Hierarchically Decoupled Match-
ing Network (HDMNet) mining pixel-level support corre-
lation based on the transformer architecture. The self-
attention modules are used to assist in establishing hierar-
chical dense features, as a means to accomplish the cascade
matching between query and support features. Moreover,
we propose a matching module to reduce train-set over-
fitting and introduce correlation distillation leveraging se-
mantic correspondence from coarse resolution to boost fine-
grained segmentation. Our method performs decently in ex-
periments. We achieve 50.0% mIoU on the COCO-20i dataset in the one-shot setting and 56.0% in the five-shot setting, respectively. The code is available on the project website1.
| 1. Introduction
Semantic segmentation tasks [2, 3, 22, 52] have made
tremendous progress in recent years, benefiting from the
rapid development of deep learning [13,32]. However, most
existing deep networks are not scalable to previously unseen
classes and rely on annotated datasets to achieve satisfy-
ing performance. Data collection and annotation cost much
time and resources, especially for dense prediction tasks.
Few-shot learning [34, 39, 43] has been introduced into
semantic segmentation [5, 38] to build class-agnostic mod-
els quickly adapting to novel classes. Typically, few-
shot segmentation (FSS) divides the input into the query
and support sets [5, 46, 48, 51] following the episode
paradigm [41]. It segments the query targets conditioned on
*Corresponding Author
1https://github.com/Pbihao/HDMNet
Figure 1. Activation maps of the correlation values on both PASCAL-5i [29] and COCO-20i [26]. The baseline is prone to give high activation values to the categories sufficiently witnessed during training, such as the “People” class, even with other support annotations. Then we convert it to the hierarchically decoupled matching structure and adopt correlation map distillation to mine inner-class correlation.
the semantic clues from the support annotations with meta-
learning [34, 39] or feature matching [25, 41, 49].
Previous few-shot learning methods may still suffer
from coarse segmentation granularity and train-set over-
fitting [38] issues. As shown in Fig. 1, “people” is the
base class that has been sufficiently witnessed during train-
ing. But the model is still prone to yield high activa-
tion to “people” instead of more related novel classes with
the support samples, producing inferior results. This is-
sue stems from framework design, as illustrated in Fig. 2.
Concretely, prototype-based [38,42] and adaptive-classifier
methods [1, 23] aim at distinguishing different categories
with global class-wise characteristics. It is challenging to
compute the correspondence of different components be-
tween query and support objects for the dense prediction
tasks. In contrast, matching-based methods [49] mine pixel-
level correlation but may heavily rely on class-specific fea-
Figure 2. Illustration of different few-shot segmentation frame-
works. (a) Prototype-based method. (b) Adaptive-classifier
method. (c) Feature matching with transformer architecture. (d)
Our Hierarchically Decoupled Matching Network (HDMNet) with
correlation map distillation.
tures and cause overfitting and weak generalization.
To address these issues, we propose Hierarchically De-
coupled Matching Network (HDMNet) with correlation
map distillation for better mining pixel-level support corre-
spondences. HDMNet extends transformer architecture [6,
40,44] to construct the feature pyramid and performs dense
matching. Previous transformer-based methods [35, 49]
adopt the self-attention layer to parse features and then
feed query and support features to the cross-attention layer
for pattern matching, as illustrated in Fig. 2(c). This pro-
cess stacks the self- and cross-attention layers multiple
times, mixes separated embedding features, and acciden-
tally causes unnecessary information interference.
In this paper, we decouple the feature parsing and match-
ing process in a hierarchical paradigm and design a new
matching module based on correlation and distillation.
This correlation mechanism calculates pixel-level corre-
spondence without directly relying on the semantic-specific
features, alleviating the train-set overfitting problem. Fur-
ther, we introduce correlation map distillation [14, 50] that
encourages the shallow layers to approximate the semantic
correlation of deeper layers to make the former more aware
of the context for high-quality prediction.
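A minimal sketch of this correlation-distillation idea: build query-support correlation maps at a shallow and a deep level, turn them into distributions, and pull the shallow map toward the detached deep one with a KL term. The normalization axis, temperature, and shapes are illustrative assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def correlation(query, support):
    """query, support: (B, C, N) flattened features -> (B, Nq, Ns) cosine correlation map."""
    q = F.normalize(query, dim=1)
    s = F.normalize(support, dim=1)
    return torch.einsum("bcn,bcm->bnm", q, s)

def correlation_distill_loss(shallow_corr, deep_corr, tau=1.0):
    """Push the shallow correlation toward the deeper one, which acts as a frozen teacher."""
    teacher = F.softmax(deep_corr.detach() / tau, dim=-1)
    student = F.log_softmax(shallow_corr / tau, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean")
```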
Our contribution is the following. 1) We extend the
transformer to hierarchical parsing and feature matching for
few-shot semantic segmentation, with a new matching mod-
ule reducing overfitting. 2) We propose correlation map dis-
tillation leveraging soft correspondence under multi-level
and multi-scale structures. 3) We achieve new state-of-the-art results on the standard COCO-20i and PASCAL-5i benchmarks without compromising efficiency.
|
Ma_CREPE_Can_Vision-Language_Foundation_Models_Reason_Compositionally_CVPR_2023 | Abstract
A fundamental characteristic common to both human vi-
sion and natural language is their compositional nature. Yet,
despite the performance gains contributed by large vision
and language pretraining, we find that—across 7 architec-
tures trained with 4 algorithms on massive datasets—they
struggle at compositionality. To arrive at this conclusion, we
introduce a new compositionality evaluation benchmark,
CREPE, which measures two important aspects of compo-
sitionality identified by cognitive science literature: system-
aticity and productivity. To measure systematicity, CREPE
consists of a test dataset containing over 370K image-text pairs and three different seen-unseen splits. The three splits are designed to test models trained on three popular training datasets: CC-12M, YFCC-15M, and LAION-400M. We also generate 325K, 316K, and 309K hard negative captions for a subset of the pairs. To test productivity, CREPE contains 17K image-text pairs with nine different complexities plus 278K hard negative captions with atomic, swapping
and negation foils. The datasets are generated by repurpos-
ing the Visual Genome scene graphs and region descriptions
and applying handcrafted templates and GPT-3. For sys-
tematicity, we find that model performance decreases con-
sistently when novel compositions dominate the retrieval
set, with Recall@1 dropping by up to 9%. For productivity,
models’ retrieval success decays as complexity increases,
frequently nearing random chance at high complexity. These
results hold regardless of model and training dataset size.
| 1. Introduction
Compositionality, the understanding that “the meaning
of the whole is a function of the meanings of its parts” [11],
is held to be a key characteristic of human intelligence.
In language, the whole is a sentence, made up of words.
In vision, the whole is a scene, made up of parts like
objects, their attributes, and their relationships [31, 35].
*Equal contribution
Figure 1. We introduce
CREPE, a benchmark to evaluate whether
vision-language foundation models demonstrate two fundamental
aspects of compositionality: systematicity and productivity. To eval-
uate systematicity, CREPE utilizes Visual Genome and introduces
three new test datasets for the three popular pretraining datasets:
CC-12M, YFCC-15M, and LAION-400M. These enable evaluating
models’ abilities to systematically generalize their understanding
to seen compounds, unseen compounds, and even unseen atoms.
To evaluate productivity, CREPE introduces examples of nine com-
plexities, with three types of hard negatives for each.
Through compositional reasoning, humans can understand
new scenes and generate complex sentences by combining
known parts [6, 27, 30]. Despite compositionality’s impor-
tance, there are no large-scale benchmarks directly evaluat-
ing whether vision-language models can reason composition-
ally. These models are pretrained using large-scale image-
caption datasets [62, 64, 74], and are already widely applied
for tasks that benefit from compositional reasoning, includ-
ing retrieval, text-to-image generation, and open-vocabulary
classification [10,57,60]. Especially as such models become
ubiquitous “foundations” for other models [5], it is critical
to understand their compositional abilities.
Previous work has evaluated these models using image-
text retrieval [32,56,82]. However, the retrieval datasets used
either do not provide controlled sets of negatives [45, 74]
or study narrow negatives which vary along a single axis
Figure 2. An overview of the systematicity retrieval set generation process. First, a model’s image-caption training set is parsed to identify
what atoms and compounds the model has seen. Then, an evaluation set is divided into three compositional splits according to whether the
model has seen all the compounds (Seen Compounds), only all the atoms of the caption (Unseen Compounds), or neither (Unseen Atoms).
Finally, hard negative captions HN-ATOM and HN-COMP are generated for the hard-negative retrieval set D^HN_test.
(e.g. permuted word orders or single word substitutions as
negative captions) [21, 51, 65, 75]. Further, these analyses
have also not studied how retrieval performance varies when
generalizing to unseen compositional combinations, or to
combinations of increased complexity.
We introduce
CREPE (Compositional REPresentation
Evaluation): a new large-scale benchmark to evaluate two
aspects of compositionality: systematicity andproductivity
(Figure 1). Systematicity measures how well a model is able
to represent seen versus unseen atoms and their composi-
tions. Productivity studies how well a model can compre-
hend an unbounded set of increasingly complex expressions.
CREPE uses Visual Genome’s scene graph representation as
the compositionality language [35] and constructs evaluation
datasets using its annotations. To test systematicity, we parse
the captions in three popular training datasets, CC-12M [8],
YFCC-15M [74], and LAION-400M [62], to identify atoms
(objects, relations, or attributes) and compounds (combina-
tions of atoms) present in each dataset. For each training set,
we curate corresponding test sets containing 385K, 385K, and 373K image-text pairs respectively, with splits checking
generalization to seen compounds, unseen compounds, and
unseen atoms. To test productivity, CREPE contains 17K
image-text pairs split across nine levels of complexity, as
defined by the number of atoms present in the text. Exam-
ples across all datasets are paired with various hard negative
types to ensure the legitimacy of our conclusions.
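The split assignment itself reduces to set-membership checks against the atoms and compounds extracted from a given pretraining corpus; the scene-graph parsing that produces those sets is the involved part and is not shown. The sketch below only illustrates the three-way split logic with assumed variable names.

```python
def assign_split(caption_atoms, caption_compounds, seen_atoms, seen_compounds):
    """caption_atoms / caption_compounds: sets parsed from one test caption;
    seen_atoms / seen_compounds: sets parsed from the model's pretraining captions."""
    if not caption_atoms <= seen_atoms:
        return "unseen_atoms"        # some object, relation, or attribute was never seen
    if not caption_compounds <= seen_compounds:
        return "unseen_compounds"    # all atoms seen, but at least one combination is novel
    return "seen_compounds"          # both the atoms and their combinations were seen
```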
Our experiments—across 7 architectures trained with 4
training algorithms on massive datasets—find that vision-
language models struggle at compositionality, with both
systematicity and productivity. We present six key findings:
first, our systematicity experiments find that models’ perfor-
mance consistently drops between seen and unseen composi-
tions; second, we observe larger drops for models trained on LAION-400M (up to a 9% decrease in Recall@1); third, our
productivity experiments indicate that retrieval performance
degrades with increased caption complexity; fourth, we find
no clear trend relating training dataset size to models’ com-
positional reasoning; fifth, model size also has no impact;
finally, models’ zero-shot ImageNet classification accuracy
correlates only with their absolute retrieval performance on
the systematicity dataset but not systematic generalization to
unseen compounds or to productivity.1
|
Nagata_Tangentially_Elongated_Gaussian_Belief_Propagation_for_Event-Based_Incremental_Optical_Flow_CVPR_2023 | Abstract
Optical flow estimation is a fundamental functionality in
computer vision. An event-based camera, which asyn-
chronously detects sparse intensity changes, is an ideal
device for realizing low-latency estimation of the optical
flow owing to its low-latency sensing mechanism. An
existing method using local plane fitting of events could
utilize the sparsity to realize incremental updates for
low-latency estimation; however, its output is merely a
normal component of the full optical flow. An alterna-
tive approach using a frame-based deep neural network
could estimate the full flow; however, its intensive non-
incremental dense operation prohibits the low-latency
estimation. We propose tangentially elongated Gaussian
(TEG) belief propagation (BP) that realizes incremental
full-flow estimation. We model the probability of full flow
as the joint distribution of TEGs from the normal flow
measurements, such that the marginal of this distribution
with correct prior equals the full flow. We formulate the
marginalization using a message-passing based on the
BP to realize efficient incremental updates using sparse
measurements. In addition to the theoretical justification,
we evaluate the effectiveness of the TEGBP in real-world
datasets; it outperforms SOTA incremental quasi-full flow
method by a large margin. (The code is available at
https://github.com/DensoITLab/tegbp/ ).
| 1. Introduction
Optical flow estimation, which computes the correspon-
dence of pixels in different time measurements, is a fun-
damental building block of computer vision. One needs to
estimate the flow at low latency in many practical appli-
cations, such as autonomous driving cars, unmanned aerial
vehicles, and factory automation robots. Most of the ex-
isting optical flow algorithm utilizes dense video frames; it
computes the flow by searching the similar intensity pat-
tern [ 15,29]. Recently, methods using deep neural net-
work (DNN) [ 29] demonstrate impressive accuracy at the
†These authors contributed equally to this work.
Figure 1. Overview of the proposed TEGBP. We model a belief about the full flow from a normal flow measurement using a TEG (RGB ellipses). The mean of the marginal distribution of each TEG with an appropriate prior equals the full flow (magenta arrow). The marginal (magenta ellipse) is computed incrementally using local message passing (RGB arrows) based on BP.
cost of higher computation. Either model-based or DNN-based, frame-based algorithms need to process every pixel of every frame, even when there are only subtle changes or no changes at all. This dense operation makes it difficult to realize low-latency estimation, especially on resource-constrained edge devices.
The event-based camera is a bio-inspired vision sen-
sor, which asynchronously detects intensity change on each
pixel. Thanks to the novel sensing mechanism, the camera
equips favorable characteristics for optical flow estimation,
such as high dynamic range (HDR), blur-free measurement,
and, most importantly, sparse low-latency data acquisition.
Many researchers have explored the way to utilize sparsity
to realize efficient low-latency estimation; one extends the
well-known Lucas-Kanade algorithm [ 7], and the other ex-
ploits the local planar shape of the spatiotemporal event streams [6]. These methods can utilize the sparsity for efficient incremental processing; however, the optical flow computed in this way (e.g., by plane fitting) is the normal flow, which is the normal component of the full flow and often differs from the full flow1 we want to obtain. The normal flow
is the component of the full flow perpendicular to the edge
(i.e., parallel to the intensity gradient). Some work tried to
recover the full flow from the normal flow [ 2]; however, it
does not precisely equal the full flow (refer to Sec. 2). There
exist methods that could estimate full flow, such as a varia-
1Full flow is usually simply called optical flow or flow; we use full flow when we want to highlight the difference from the normal flow.
tional method [ 5], a multi-scale extension of contrast max-
imization [ 25], or a frame-based DNN [ 14]. However, they
need to apply non-incremental dense operation for all the
pixels of the event frame (dense representation constructed
from sparse events) for every frame, which prohibits the
low-latency estimation on the edge device.
Our research goal is to realize an incremental full flow
algorithm from sparse normal flow measurements. To this
end, we propose Tangentially Elongated Gaussian (TEG)
Belief Propagation (BP) . We compute the full flow using
the normal flow measurements, which can be observed di-
rectly from an event camera or computed cheaply using an
existing algorithm2. Notice that given a single measurement
of normal flow, there are infinite possibilities for the full
flow along the tangential direction of the normal flow. We
model this uncertainty using the TEG, Gaussian distribu-
tion, which has a large variance along the tangential direc-
tion of the normal flow. The probability density of full flow
on each pixel is given as the marginals of the joint distribu-
tion of TEG data factor and some prior factor on a sparse
graph (Fig. 1, Sec. 3.3.2 ). We leverage the sparse graph
to formulate the incremental full flow estimation algorithm
using message-passing based on BP [ 9]. We evaluate the ef-
fectiveness of the TEGBP on real-world data captured from
aerial drones and automobiles. TEGBP outperforms SOTA
incremental method [ 2] by a large margin.
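To make the TEG construction concrete: each normal-flow measurement induces a Gaussian belief over the full flow that is tight along the measured (normal) direction and very loose along the tangential one. The sketch below builds that per-measurement Gaussian; the variance values are arbitrary placeholders, and the incremental BP marginalization over the graph of neighboring pixels is not shown.

```python
import numpy as np

def teg_belief(normal_flow, sigma_n=0.05, sigma_t=10.0):
    """Tangentially elongated Gaussian for one normal-flow measurement.

    normal_flow: (2,) flow component along the intensity gradient.
    Returns a mean (the measurement itself) and a 2x2 covariance that is
    narrow along the normal direction and elongated along the tangent.
    """
    v = np.asarray(normal_flow, dtype=np.float64)
    n = v / (np.linalg.norm(v) + 1e-12)      # unit vector along the normal flow
    t = np.array([-n[1], n[0]])              # unit vector along the tangential direction
    cov = sigma_n**2 * np.outer(n, n) + sigma_t**2 * np.outer(t, t)
    return v, cov

mean, cov = teg_belief([0.8, 0.3])           # belief over the full flow at one pixel
```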
|
Nguyen_TIPI_Test_Time_Adaptation_With_Transformation_Invariance_CVPR_2023 | Abstract
When deploying a machine learning model to a new en-
vironment, we often encounter the distribution shift prob-
lem – meaning the target data distribution is different from
the model’s training distribution. In this paper, we assume
that labels are not provided for this new domain, and that
we do not store the source data (e.g., for privacy reasons).
It has been shown that even small shifts in the data distri-
bution can affect the model’s performance severely. Test
Time Adaptation offers a means to combat this problem,
as it allows the model to adapt during test time to the new
data distribution, using only unlabeled test data batches. To
achieve this, the predominant approach is to optimize a sur-
rogate loss on the test-time unlabeled target data. In par-
ticular, minimizing the prediction’s entropy on target sam-
ples [34] has received much interest as it is task-agnostic
and does not require altering the model’s training phase
(e.g., does not require adding a self-supervised task dur-
ing training on the source domain). However, as the tar-
get data’s batch size is often small in real-world scenarios
(e.g., autonomous driving models process each few frames
in real-time), we argue that this surrogate loss is not op-
timal since it often collapses with small batch sizes. To
tackle this problem, in this paper, we propose to use an in-
variance regularizer as the surrogate loss during test-time
adaptation, motivated by our theoretical results regarding
the model’s performance under input transformations. The
resulting method (TIPI – Test tIme adaPtation with transfor-
mation Invariance) is validated with extensive experiments
in various benchmarks (Cifar10-C, Cifar100-C, ImageNet-
C, DIGITS, and VisDA17). Remarkably, TIPI is robust
against small batch sizes (as small as 2 in our experiments),
and consistently outperforms TENT [34] in all settings. Our
code is released at https://github.com/atuannguyen/TIPI.
*The last two authors contributed equally | 1. Introduction
Distribution shift is a common problem and is often
faced in real-world applications. Specifically, despite tak-
ing various precautions while training a machine learning
model to ensure a better generalization (e.g., collecting
and training on multiple source domains [17, 27], finding
flat minima [5], training with meta-learning objectives [16]
etc.), the model often still struggles when the test data dis-
tribution shifts slightly. Note that it is also common not
to have labels for the new shifted domain during test time.
To tackle this problem, test time adaptation (also known as
online domain adaptation) is a framework that allows the
model to adapt to the target distribution using unlabeled
test data batches. This is necessary since we typically do
not have time to annotate the data during test time and can
only make use of the unlabeled data. In this framework,
the model needs to give predictions for the target data while
simultaneously updating itself to improve its performance
on that particular target distribution. We assume a situation
in which the target data only arrive in small batches, which
makes the adaptation task extremely challenging and ren-
ders traditional domain adaptation techniques such as rep-
resentation alignment (via a distance metric) ineffective.
Within this test time adaptation framework, a common
and effective approach is to optimize a surrogate objective
function in lieu of the true loss function on the target data.
The first group of surrogate objectives is the loss functions
of self-supervised tasks. In particular, one would formu-
late a user-defined task (such as predicting the rotation an-
gle of an image) and train it alongside the main task on
the source domain; and keep training the self-supervised
task on the test-time target data [19, 33]. However, these
are not fully test-time adaptation methods, since they re-
quire altering the training procedure of the source domain.
The second line of surrogate objectives is unsupervised loss
functions. Among this group of unsupervised objectives,
entropy minimization (TENT) [34] is the most successful
method, and has been shown to be consistent across many
benchmarks. Furthermore, different from the former group,
TENT is a fully test-time adaptation method. For these rea-
sons, TENT has received much interest and a lot of follow-
up papers/discussions.
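For reference, the entropy-minimization surrogate at the heart of TENT can be sketched in a few lines: minimize the mean entropy of the softmax predictions on each unlabeled test batch, typically updating only the normalization layers' affine parameters. This is a schematic reading of the method, not its official implementation.

```python
import torch

def entropy_loss(logits):
    """Mean Shannon entropy of a batch of predictions; collapses easily with tiny batches."""
    log_probs = logits.log_softmax(dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

def tent_step(model, x, optimizer):
    """One test-time adaptation step on an unlabeled batch x, returning its predictions."""
    logits = model(x)
    loss = entropy_loss(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()          # optimizer typically covers only normalization-layer affine parameters
    return logits.detach()
```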
However, as pointed out by its authors, TENT is not ro-
bust when using small batch sizes, as it often collapses to a
trivial solution (i.e., it always predicts the same class for all
input). This is detrimental to real-world applications since
test data often arrive in small batches. For example, au-
tonomous driving systems only process a few frames in real
time – they typically do not accumulate the frames (let’s say
within one minute) to form a bigger batch. Note that TENT was previously evaluated mainly with large batch sizes (such as 64, 128, or 200).
In this paper, we aim to tackle the aforementioned prob-
lem. Specifically, we aim to develop an unsupervised sur-
rogate objective function for the test time adaptation prob-
lem such that it is task-agnostic, does not require altering
the training procedure (e.g., does not require incorporating
a self-supervised task into the training process), and is more
resilient against small batch sizes.
We first provide theoretical results regarding a model’s
performance under input transformations. Specifically, we
show that a model’s loss on a data distribution is bounded
by the KL distance on the predictive distribution of the data
before and after the transformations (which we will use as
a regularizer), and its loss on the transformed data distri-
bution. Motivated by this result, we use small shifts in the
input images that can simulate real source-target shifts, and
enforce the network to be invariant under such data transfor-
mations. Our model outperforms TENT (and other relevant
baselines) in all problem settings considered in the paper
and is remarkably robust in the small-batch-size regime.
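A hedged sketch of the resulting surrogate loss: apply a transformation meant to mimic the source-target shift and penalize the divergence between the predictive distributions before and after it, in line with the bound above. Whether the KL is symmetrized and which transformation is used are assumptions here; the paper's exact choices may differ.

```python
import torch
import torch.nn.functional as F

def invariance_loss(model, x, transform):
    """Symmetrized KL between predictions on x and on transform(x), used as the test-time loss."""
    log_p = model(x).log_softmax(dim=1)
    log_q = model(transform(x)).log_softmax(dim=1)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")   # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")   # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)

# Placeholder transformation standing in for a learned shift simulator.
perturb = lambda x: x + 0.05 * torch.randn_like(x)
```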
Our contributions in this paper are threefold:
• We provide theoretical results regarding a model’s
performance under transformations of the input data.
Specifically, a model’s performance on the target do-
main is bounded by an invariance term (maximum KL
divergence of the predictive distributions before and
after the transformations) and its loss on the trans-
formed domain.
• We propose to find input transformations that can sim-
ulate the domain shifts, and enforce the network to
be invariant under such transformations, using a reg-
ularizer based on our derived bound. The resulting
method is TIPI (Test tIme ada Ptation with transfor-
mation Invariance).
• We perform extensive experiments on a wide range
of datasets (Cifar10-C, Cifar100-C, ImageNet-C, DIG-
ITS, and VisDA17) and settings (varying batch sizes)
to validate our method. TIPI shows preferable perfor-
mance compared to relevant baselines. |
Li_Rethinking_Out-of-Distribution_OOD_Detection_Masked_Image_Modeling_Is_All_You_CVPR_2023 | Abstract
The core of out-of-distribution (OOD) detection is to
learn the in-distribution (ID) representation, which is dis-
tinguishable from OOD samples. Previous work applied
recognition-based methods to learn the ID features, which
tend to learn shortcuts instead of comprehensive repre-
sentations. In this work, we find surprisingly that simply
using reconstruction-based methods could boost the per-
formance of OOD detection significantly. We deeply ex-
plore the main contributors of OOD detection and find that
reconstruction-based pretext tasks have the potential to pro-
vide a generally applicable and efficacious prior, which
benefits the model in learning intrinsic data distributions
of the ID dataset. Specifically, we take Masked Image Mod-
eling as a pretext task for our OOD detection framework
(MOOD). Without bells and whistles, MOOD outperforms
previous SOTA of one-class OOD detection by 5.7%, multi-
class OOD detection by 3.0%, and near-distribution OOD
detection by 2.1%. It even defeats the 10-shot-per-class out-
lier exposure OOD detection, although we do not include
any OOD samples for our detection. Codes are available at
https://github.com/lijingyao20010602/MOOD.
| 1. Introduction
A reliable visual recognition system not only pro-
vides correct predictions on known context (also known
as in-distribution data) but also detects unknown out-of-
distribution (OOD) samples and rejects (or transfers) them
to human intervention for safe handling. This motivates ap-
plications of outlier detectors before feeding input to the
downstream networks, which is the main task of OOD de-
tection, also referred to as novelty or anomaly detection.
OOD detection is the task of identifying whether a test sam-
ple is drawn far from the in-distribution (ID) data or not. It
is at the cornerstone of various safety-critical applications,
including medical diagnosis [5], fraud detection [45], au-
tonomous driving [14], etc.
Figure 1. Performance of MOOD compared with current SOTA (indicated by ‘*’) on four OOD detection tasks: (a) one-class OOD detection; (b) multi-class detection; (c) near-distribution detection; and (d) few-shot outlier exposure OOD detection.
Many previous OOD detection approaches depend on
outlier exposure [15, 53] to improve the performance of
OOD detection, which turns OOD detection into a simple
binary classification problem. We claim that the core of
OOD detection is, instead, to learn the effective ID repre-
sentation to discover OOD samples without any known out-
lier exposure.
In this paper, we first present our surprising finding – that
is,simply using reconstruction-based methods can notably
boost the performance on various OOD detection tasks .
Our pioneer work along this line even outperforms previ-
ous few-shot outlier exposure OOD detection, albeit we do
not include any OOD samples.
Existing methods perform contrastive learning [53,58] or
pretrain classification on a large dataset [15] to detect OOD
samples. The former methods classify images according to
the pseudo labels while the latter classifies images based on
ground truth, whose core tasks are both to fulfill the classifi-
cation target. However, research on backdoor attacks [50, 51] shows that when representations are learned by classifying data, networks tend to take shortcuts to classify images.
In a typical backdoor attack scene [51], the attacker adds
secret triggers on original training images with the visibly
correct label. During the course of testing, the victim model
classifies images with secret triggers into the wrong cate-
gory. Research in this area demonstrates that networks only
learn specific distinguishable patterns of different categories
because it is a shortcut to fulfill the classification require-
ment.
Nonetheless, learning these patterns is ineffective for
OOD detection since the network does not understand the
intrinsic data distribution of the ID images. Thus, learning
representations by classifying ID data for OOD detection
may not be satisfying. For example, when the patterns sim-
ilar to some ID categories appear in OOD samples, the net-
work could easily interpret these OOD samples as the ID
data and classify them into the wrong ID categories.
To remedy this issue, we introduce the reconstruction-
based pretext task. Different from contrastive learning in
existing OOD detection approaches [53, 58], our method
forces the network to achieve the training purpose of recon-
structing the image and thus makes it learn pixel-level data
distribution.
Specifically, we adopt the masked image modeling
(MIM) [2, 11, 20] as our self-supervised pretext task, which
has been demonstrated to have great potential in both natu-
ral language processing [11] and computer vision [2,20]. In
the MIM task, we split images into patches and randomly
mask a proportion of image patches before feeding the cor-
rupted input to the vision transformer. Then we use the
tokens from a discrete VAE [47] as labels to supervise the network during training. With this procedure, the network learns from the remaining patches to infer the masked patches and restore the tokens of the original image.
The reconstruction process enables the model to learn from
the prior based on the intrinsic data distribution of images
rather than just learning different patterns among categories
in the classification process.
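A rough sketch of the masking step just described: patchify the image into embeddings, replace a random proportion of them with a learnable mask token, and train the transformer to predict the tokenizer's discrete codes at the masked positions. Only the masking logic is shown; the tokenizer and transformer are placeholders.

```python
import torch

def random_patch_mask(patch_embeddings, mask_token, mask_ratio=0.4):
    """patch_embeddings: (B, N, D) patch features; mask_token: (D,) learnable vector."""
    B, N, D = patch_embeddings.shape
    num_mask = int(N * mask_ratio)
    idx = torch.rand(B, N, device=patch_embeddings.device).argsort(dim=1)[:, :num_mask]
    mask = torch.zeros(B, N, dtype=torch.bool, device=patch_embeddings.device)
    mask.scatter_(1, idx, True)                                   # True where a patch is masked out
    corrupted = torch.where(mask.unsqueeze(-1), mask_token.expand(B, N, D), patch_embeddings)
    return corrupted, mask                                        # loss is applied only on masked positions
```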
In our extensive experiments, it is noteworthy that
masked image modeling for OOD detection (MOOD) out-
performs the current SOTA on all four tasks of one-
class OOD detection, multi-class OOD detection, near-
distribution OOD detection, and even few-shot outlier ex-
posure OOD detection, as shown in Fig. 1. A few statistics
are the following.
1. For one-class OOD detection (Tab. 6), MOOD boosts the AUROC of the current SOTA, i.e., CSI [58], by 5.7% to 94.9%.
2. For multi-class OOD detection (Tab. 7), MOOD outperforms the current SOTA of SSD+ [53] by 3.0% and reaches 97.6%.
3. For near-distribution OOD detection (Tab. 2), the AUROC of MOOD achieves 98.3%, which is 2.1% higher than the current SOTA of R50+ViT [15].
4. For few-shot outlier exposure OOD detection (Tab. 9), MOOD (99.41%) surprisingly defeats the current SOTA of R50+ViT [15] (with 99.29%), which makes use of 10 OOD samples per class. It is notable that we do not even include any OOD samples in MOOD.
|
Qin_Robust_3D_Shape_Classification_via_Non-Local_Graph_Attention_Network_CVPR_2023 | Abstract
We introduce a non-local graph attention network (NL-
GAT), which generates a novel global descriptor through
two sub-networks for robust 3D shape classification. In
the first sub-network, we capture the global relationships
between points ( i.e., point-point features) by designing a
global relationship network (GRN). In the second sub-
network, we enhance the local features with a geometric
shape attention map obtained from a global structure net-
work (GSN). To keep rotation invariant and extract more
information from sparse point clouds, all sub-networks use
the Gram matrices with different dimensions as input for
working with robust classification. Additionally, GRN ef-
fectively preserves the low-frequency features and improves
the classification results. Experimental results on various datasets show that the classification performance of the NLGAT model is better than that of other state-of-the-art models. In particular, for sparse point clouds (64 points) with noise under arbitrary SO(3) rotation, the classification result (85.4%) of NLGAT improves on the best result of other methods by 39.4%.
| 1. Introduction
3D shape classification is one of the most critical tasks
in 3D computer vision and computer graphics [7,10,18,37].
As 3D point cloud models are more accessible due to the
rapid development of 3D scanning technology, their clas-
sifications have attracted considerable attention in the last
two decades [9, 14, 39].
The essential task for shape classification is to find a
global descriptor for the input point cloud. Mainstream neu-
ral networks have achieved excellent performance in point
cloud classification on manually processed and aligned
Corresponding author.
data [15, 20, 25, 27, 35]. However, their performance tends
to drop dramatically for complex real-world point clouds,
which can be rotated (arbitrary orientation), sparse (with
many missing parts), and noisy. Although there are methods that handle one or several of these conditions through hand-crafted features, their global descriptors depend on the designed features [12, 29, 30, 38].
The reasons why current methods do not work well for complex point clouds are twofold. First, these methods tend to obtain the global feature by aggregating local features, stacking hundreds of network layers as is done for images [24]. In practice this is difficult because point cloud networks take raw point coordinates as input, which leads to feature homogenization, especially for complex point clouds. Second, most of the methods are not end-to-end and partially rely on designed hand-crafted features, which can hardly capture the global information of complex point clouds [2, 5, 31, 32, 36, 41].
To this end, we propose an end-to-end deep learning
network model built on complex point clouds, which consists of two global feature learning sub-networks for robust classification. In our model, we construct Gram matrices of different dimensions from the input point coordinates to keep rotation invariance and capture crucial fea-
tures (including local and non-local information with simi-
lar structures) from noisy and sparse point clouds. The first
sub-network based on multi-scale local Gram matrices is to
extract the global relationships of point-point features in a
shallow network layer through the network channel fusion
operation ( i.e., channel attention mechanism). The second
sub-network generates an attention map for enhancing the
global relationships, from the global structure of a Gram
matrix constructed by a whole point cloud. Finally, three
fully connected (FC) layers receive the results learned on
two sub-networks to generate a global descriptor for robust
classification tasks.
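The rotation invariance that motivates the Gram-matrix input can be verified directly: for points P and any rotation R, (PR)(PR)^T = PP^T, so the Gram matrix is unchanged. The k-nearest-neighbor patch below is a simplified stand-in for the multi-scale local Gram matrices used in NLGAT.

```python
import numpy as np

def local_gram(points, center_idx, k):
    """Gram matrix of the k-nearest-neighbor patch around one point. points: (N, 3)."""
    dists = np.linalg.norm(points - points[center_idx], axis=1)
    patch = points[np.argsort(dists)[:k]] - points[center_idx]   # center the patch at the query point
    return patch @ patch.T                                        # (k, k), unchanged by any rotation

# Sanity check: rotating the whole cloud about the origin leaves the local Gram matrix unchanged.
P = np.random.rand(256, 3)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(local_gram(P, 0, 16), local_gram(P @ R.T, 0, 16))
```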
Contributions. Our contributions are summarized as fol-
lows.
• The global descriptor obtained by our method can capture both the global relationship and the global structure well, and it outperforms existing methods in the task of classification for complex point clouds.
• We design an end-to-end deep learning network model, consisting of specific function modules in two global feature learning sub-networks. Our proposed modules, based on multi-scale Gram matrices constructed from the point coordinates, can gather rich information for sparse point clouds, preserve valuable low-frequency features for noisy point clouds, and guarantee invariance to any rotational transformations.
|
Nair_Unite_and_Conquer_Plug__Play_Multi-Modal_Synthesis_Using_Diffusion_CVPR_2023 | Abstract
Generating photos satisfying multiple constraints finds
broad utility in the content creation industry. A key hur-
dle to accomplishing this task is the need for paired data
consisting of all modalities (i.e., constraints) and their cor-
responding output. Moreover, existing methods need re-
training using paired data across all modalities to intro-
duce a new condition. This paper proposes a solution to this problem based on denoising diffusion probabilistic models
(DDPMs). Our motivation for choosing diffusion models
over other generative models comes from the flexible in-
ternal structure of diffusion models. Since each sampling
step in the DDPM follows a Gaussian distribution, we show
that there exists a closed-form solution for generating an
image given various constraints. Our method can unite
multiple diffusion models trained on multiple sub-tasks and
conquer the combined task through our proposed sampling
strategy. We also introduce a novel reliability parame-
ter that allows using different off-the-shelf diffusion mod-
els trained across various datasets during sampling time
alone to guide it to the desired outcome satisfying multi-
ple constraints. We perform experiments on various stan-
dard multimodal tasks to demonstrate the effectiveness of
our approach. More details can be found at: https://nithin-
gk.github.io/projectpages/Multidiff
| 1. Introduction
Today’s entertainment industry is rapidly investing in
content creation tasks [12, 22]. Studios and companies
working on games or animated movies find various applica-
tions of photos/videos satisfying multiple characteristics (or
constraints) simultaneously. However, creating such photos
is time-consuming and requires a lot of manual labor. This
era of content creation has led to some exciting and valuable
works like Stable Diffusion [28], Dall.E-2 [26], Imagen [29]
and multiple other works that can create photorealistic im-
ages using text prompts. All of these methods belong to the
broad field of conditional image generation [25, 33]. This
process is equivalent to sampling a point from the multi-
dimensional space P(z|x) and can be mathematically expressed as:
ẑ ∼ P(z|x),  (1)
where ẑ denotes the image to be generated based on a condition x. The task of image synthesis becomes more restricted as the number of conditions increases, but the result also aligns more closely with the user's expectations. Several previous
works have attempted to solve the conditional generation
problem using generative models, such as VAEs [18, 25] and
Generative Adversarial Networks (GANs) [7, 34]. How-
ever, most of these methods use only one constraint. In
terms of image generation quality, the GAN-based meth-
ods outperform VAE-based counterparts. Furthermore, dif-
ferent strategies for conditioning GANs have been pro-
posed in the literature. Among them, text-conditional GANs [3, 27, 41, 45] embed the conditional feature into the features from the initial layer through an adaptive normalization scheme. For image-level conditions such as sketches or semantic labels, the conditional image is also
the input to the discriminator and is embedded with an adap-
tive normalization scheme [22, 31, 40, 42]. Hence, a GAN-
based method for multimodal generation has multiple archi-
tectural constraints [11].
A major challenge in training generative models for mul-
timodal image synthesis is the need for paired data con-
taining multiple modalities [12, 32, 44]. This is one of the
main reasons why most existing models restrict themselves
to one or two modalities [32, 44]. Few works use more
than two domain variant modalities for multimodal gener-
ation [11, 45]. These methods can perform high-resolution
Figure 2. An illustration of the difference between the existing
multimodal generation approaches [45] and the proposed ap-
proach. Existing multimodal methods require training on paired
data across all modalities. In contrast, we present two ways that
can be used for training: (1) Train with data pairs belonging to dif-
ferent modalities one at a time, and (2) Train only for the additional
modalities using a separate diffusion model in case existing mod-
els are available for the remaining modalities. During sampling,
we forward pass for each conditioning strategy independently and
combine their corresponding outputs, hence preserving the differ-
ent conditions.
image synthesis and require training with paired data across
different domains to achieve good results. But to increase
the number of modalities, the models need to be retrained;
thus they do not scale easily. Recently, Shi et al. [32] pro-
posed a weakly supervised VAE-based multimodal genera-
tion method without paired data from all modalities. The
model performs well when trained with sparse data. How-
ever, if we need to increase the number of modalities, the
model needs to be retrained; therefore, it is not scalable.
Scalable multimodal generation is an area that has not been
properly explored because of the difficulty in obtaining the
large amounts of data needed to train models for the gener-
ative process.
Recently diffusion models have outperformed other gen-
erative models in the task of image generation [5, 9]. This
is due to the ability of diffusion models to perform exact
sampling from very complex distributions [33]. A unique
quality of the diffusion models compared to other genera-
tive processes is that the model performs generation through
a tractable Markovian process, which happens over many
time steps. The output at each timestep is easily accessible.
Therefore, the model is more flexible than other generative
models, and this form of generation allows manipulation of
images by adjusting latents [1, 5, 23]. Various techniques
have used this interesting property of diffusion models for
low-level vision tasks such as image editing [1, 17], im-
age inpainting [20], image super-resolution [4], and image
restoration problems [16].
In this paper, we exploit this flexible property of the de-
noising diffusion probabilistic models and use it to design
a solution to multimodal image generation problems with-
Figure 3. An illustration of our proposed approach. During training, we use diffusion models trained across multiple datasets (we can
either train a single model that supports multiple different conditional strategies one at a time or multiple models). During Inference, we
sample using the proposed approach and condition them using different modalities at the same time.
out explicitly retraining the network with paired data across
all modalities. Figure 2 depicts the comparison between
existing methods and our proposed method. Current ap-
proaches face a major challenge: the inability to combine
models trained across different datasets during inference
time [9, 19]. In contrast, our work allows users flexibility
during training and can also use off-the-shelf models for
multi-conditioning, providing greater flexibility when us-
ing diffusion models for multimodal synthesis task. Figure
1 visualizes some applications of our proposed approach.
As shown in Figure 1-(a), we use two open-source mod-
els [5,21] for generic scene creation. Using these two mod-
els, we can bring new novel categories into an image (e.g.
Otterhound: the rarest breed of dog). We also illustrate the
results showing multimodal face generation, where we use
a model trained to utilize different modalities from differ-
ent datasets. As can be seen in Figure 1-(b) and (c), our work
can leverage models trained across different datasets and
combine them for multi-conditional synthesis during sam-
pling. We evaluate the performance of our method for the
task of multimodal synthesis using the only existing mul-
timodal dataset [45] for face generation where we condi-
tion based on semantic labels and text attributes. We also
evaluate our method based on the quality of generic scene
generation.
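As an illustration of how several conditional denoisers can be combined at sampling time, the sketch below shows a generic classifier-free-guidance-style composition of noise estimates. It is not the paper's exact closed-form rule; the function name, the `weights` argument (standing in for the reliability parameter), and the model call signatures are assumptions for illustration.

```python
import torch

def composed_noise_estimate(x_t, t, uncond_model, cond_models, conditions, weights):
    """
    Combine per-condition noise predictions into a single estimate for one
    reverse-diffusion step, in the spirit of classifier-free guidance.
    Each weight plays the role of a reliability parameter.
    """
    eps_uncond = uncond_model(x_t, t)
    eps = eps_uncond.clone()
    for model, cond, w in zip(cond_models, conditions, weights):
        # Each conditional model may come from a different dataset or sub-task;
        # its deviation from the unconditional estimate carries that condition.
        eps = eps + w * (model(x_t, t, cond) - eps_uncond)
    return eps

# Toy usage with stand-in callables (real DDPMs would replace these):
uncond = lambda x, t: torch.zeros_like(x)
cond_a = lambda x, t, c: torch.ones_like(x)
cond_b = lambda x, t, c: -torch.ones_like(x)
x_t = torch.randn(1, 3, 8, 8)
eps_hat = composed_noise_estimate(x_t, 0, uncond, [cond_a, cond_b], ["text", "mask"], [1.0, 0.5])
```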
The main contributions of this paper are summarized as
follows:
• We propose a diffusion-based solution for image generation in the presence of multimodal priors.
• We tackle the need for paired data for mul-
timodal synthesis by building upon the flexible prop-
erty of diffusion models.
• Unlike existing methods, our method is easily scalable
and can be incorporated with off-the-shelf models to
add additional constraints.
|
Li_Neuralangelo_High-Fidelity_Neural_Surface_Reconstruction_CVPR_2023 | Abstract
Neural surface reconstruction has been shown to be pow-
erful for recovering dense 3D surfaces via image-based neu-
ral rendering. However, current methods struggle to recover
detailed structures of real-world scenes. To address the
issue, we present Neuralangelo , which combines the rep-
resentation power of multi-resolution 3D hash grids with
neural surface rendering. Two key ingredients enable our ap-
proach: (1) numerical gradients for computing higher-order
derivatives as a smoothing operation and (2) coarse-to-fine
optimization on the hash grids controlling different levels of
details. Even without auxiliary inputs such as depth, Neu-
ralangelo can effectively recover dense 3D surface structures
from multi-view images with fidelity significantly surpass-
ing previous methods, enabling detailed large-scale scene
reconstruction from RGB video captures.
| 1. Introduction
3D surface reconstruction aims to recover dense geomet-
ric scene structures from multiple images observed at differ-
ent viewpoints [9]. The recovered surfaces provide structural
information useful for many downstream applications, such
as 3D asset generation for augmented/virtual/mixed real-
ity or environment mapping for autonomous navigation of
robotics. Photogrammetric surface reconstruction using a
monocular RGB camera is of particular interest, as it equips
users with the capability of casually creating digital twins of
the real world using ubiquitous mobile devices.
Classically, multi-view stereo algorithms [6, 16, 29, 34]
had been the method of choice for sparse 3D reconstruc-
tion. An inherent drawback of these algorithms, however, is
their inability to handle ambiguous observations, e.g. regions
with large areas of homogeneous colors, repetitive texture
patterns, or strong color variations. This would result in
inaccurate reconstructions with noisy or missing surfaces.
Recently, neural surface reconstruction methods [36, 41, 42]
have shown great potential in addressing these limitations.
This new class of methods uses coordinate-based multi-layer
perceptrons (MLPs) to represent the scene as an implicit
function, such as occupancy fields [25] or signed distance
functions (SDF) [36, 41, 42]. Leveraging the inherent con-
tinuity of MLPs and neural volume rendering [22], these
techniques allow the optimized surfaces to meaningfully in-
terpolate between spatial locations, resulting in smooth and
complete surface representations.
Despite the superiority of neural surface reconstruction
methods over classical approaches, the recovered fidelity
of current methods does not scale well with the capacity of
MLPs. Recently, Müller et al. [23] proposed a new scalable
representation, referred to as Instant NGP (Neural Graphics
Primitives). Instant NGP introduces a hybrid 3D grid struc-
ture with a multi-resolution hash encoding and a lightweight
MLP that is more expressive with a memory footprint log-
linear to the resolution. The proposed hybrid representation
greatly increases the representation power of neural fields
and has achieved great success at representing very fine-
grained details for a wide variety of tasks, such as object
shape representation and novel view synthesis problems.
In this paper, we propose Neuralangelo for high-fidelity
surface reconstruction (Fig. 1). Neuralangelo adopts In-
stant NGP as a neural SDF representation of the underlying
3D scene, optimized from multi-view image observations
via neural surface rendering [36]. We present two findings
central to fully unlocking the potentials of multi-resolution
hash encodings. First, using numerical gradients to compute
higher-order derivatives, such as surface normals for the
eikonal regularization [8, 12, 20, 42], is critical to stabilizing
the optimization. Second, a progressive optimization sched-
ule plays an important role in recovering the structures at
different levels of details. We combine these two key ingredi-
ents and, via extensive experiments on standard benchmarks
and real-world scenes, demonstrate significant improvements
over image-based neural surface reconstruction methods in
both reconstruction accuracy and view synthesis quality.
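To illustrate the first ingredient, the sketch below shows a central-difference (numerical) gradient of an SDF and the eikonal penalty built on it. It is a minimal illustration rather than Neuralangelo's implementation; in particular, the step size is fixed here, whereas in the paper it would be scheduled together with the coarse-to-fine hash-grid optimization.

```python
import torch

def numerical_gradient(sdf, x, eps=1e-3):
    """
    Central-difference approximation of the SDF gradient.  Because each finite
    difference touches neighbouring grid cells, it acts as a local smoothing
    operation compared with analytic gradients.
    """
    offsets = eps * torch.eye(3, device=x.device)
    parts = [(sdf(x + offsets[i]) - sdf(x - offsets[i])) / (2.0 * eps) for i in range(3)]
    return torch.stack(parts, dim=-1)               # (..., 3) surface-normal directions

def eikonal_loss(sdf, x, eps=1e-3):
    """Encourage |∇f| = 1 so that f behaves like a signed distance function."""
    g = numerical_gradient(sdf, x, eps)
    return ((g.norm(dim=-1) - 1.0) ** 2).mean()

# Stand-in SDF of a unit sphere; its numerical gradient has unit norm.
sphere_sdf = lambda p: p.norm(dim=-1) - 1.0
pts = torch.randn(128, 3)
print(eikonal_loss(sphere_sdf, pts))                # ≈ 0
```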
In summary, we present the following contributions:
•We present the Neuralangelo framework to naturally
incorporate the representation power of multi-resolution
hash encoding [23] into neural SDF representations.
•We present two simple techniques to improve the quality
of hash-encoded surface reconstruction: higher-order
derivatives with numerical gradients and coarse-to-fine
optimization with a progressive level of details.
•We empirically demonstrate the effectiveness of Neu-
ralangelo on various datasets, showing significant im-
provements over previous methods.
|
Luo_Class-Incremental_Exemplar_Compression_for_Class-Incremental_Learning_CVPR_2023 | Abstract
Exemplar-based class-incremental learning (CIL) [36]
finetunes the model with all samples of new classes but few-
shot exemplars of old classes in each incremental phase,
where the “few-shot” abides by the limited memory bud-
get. In this paper, we break this “few-shot” limit based
on a simple yet surprisingly effective idea: compressing
exemplars by downsampling non-discriminative pixels and
saving “many-shot” compressed exemplars in the mem-
ory. Without needing any manual annotation, we achieve
this compression by generating 0-1masks on discrimina-
tive pixels from class activation maps (CAM) [49]. We
propose an adaptive mask generation model called class-
incremental masking (CIM) to explicitly resolve two dif-
ficulties of using CAM: 1) transforming the heatmaps of
CAM to 0-1masks with an arbitrary threshold leads to
a trade-off between the coverage on discriminative pix-
els and the quantity of exemplars, as the total memory is
fixed; and 2) optimal thresholds vary for different object
classes, which is particularly obvious in the dynamic envi-
ronment of CIL. We optimize the CIM model alternatively
with the conventional CIL model through a bilevel opti-
mization problem [40]. We conduct extensive experiments
on high-resolution CIL benchmarks including Food-101,
ImageNet-100, and ImageNet-1000, and show that using
the compressed exemplars by CIM can achieve a new state-
of-the-art CIL accuracy, e.g., 4.8 percentage points higher
than FOSTER [42] on 10-Phase ImageNet-1000. Our code
is available at https://github.com/xfflzl/CIM-CIL.
| 1. Introduction
Dynamic AI systems have a continual learning nature to
learn new class data. They are expected to adapt to new
classes while maintaining the knowledge of old classes, i.e.,
free from forgetting problems [31]. To evaluate this, the
following protocol of class-incremental learning (CIL) was
proposed by Rebuffi et al. [36]. The model training goes
Figure 1. The phase-wise training data in different methods. (a)
iCaRL [36] is the baseline method using full new class data and
few-shot old class exemplars. (b) Mnemonics [27] distills all
training samples into few-shot exemplars without increasing their
quantity. (c) MRDC [43] compresses each exemplar uniformly
into a low-resolution image using JPEG [41]. (d) Our approach
based on the proposed class-incremental masking (CIM) down-
samples only non-discriminative pixels in the image. The legend
shows the symbols of special images generated by the methods.
through a number of phases. Each phase has new class
data added and old class data discarded, and the resultant
model is evaluated on the test data of all seen classes. A
straightforward way to retain old class knowledge is keep-
ing around a few old class exemplars in the memory and
using them to re-train the model in subsequent phases. The
number of exemplars is usually limited, e.g., 5∼20exem-
plars per class [12, 17, 25, 27, 36, 42, 44, 46, 48], as the total
memory in CIL is strictly budgeted, e.g., to 2k exemplars.
This leads to a serious data imbalance between old and
new classes, e.g., 20per old class vs.1.3kper new class (on
ImageNet-1000 [9]), as illustrated in Figure 1a. The train-
ing is thus always dominated by new classes, and forgetting
problems occur for old classes. Liu et al. [27] tried to miti-
gate this problem by parameterizing and distilling the exem-
plars, without increasing the number of them (Figure 1b).
Wang et al. [43] traded off between the quality and quantity
of exemplars by uniformly compressing exemplar images
with JPEG [41] (Figure 1c). As shown in Figure 1d, our
approach is also based on image compression. The idea is
to downsample only non-discriminative pixels (e.g., back-
ground) and keep discriminative pixels (i.e., representative
cues of foreground objects) as the original. In this way, we
do not sacrifice the discriminativeness of exemplars when
increasing their quantity. In particular, we aim for adaptive
compression in dynamic environments of CIL, where the
intuition is that later phases need to be more conservative (i.e.,
less downsampling) as the model needs more visual cues to
classify the increased number of classes.
To achieve selective and adaptive compression, we need
the location labels of discriminative pixels. Without extra
labeling, we automatically generate the labels by utilizing
the model’s own “attention” on discriminative features, i.e.,
class activation maps (CAM) [49]. We take this method as
a feasible baseline, and based on it, we propose an adaptive
version called class-incremental masking (CIM). Specifi-
cally, for each input image (with its class label), we use
its feature maps and classifier weights (corresponding to
its class label) to compute a CAM by channel-wise mul-
tiplication, aggregation, and normalization. Then, we ap-
ply hard thresholding to generate a 0-1 mask.1 We notice
that when generating the masks in the dynamic environ-
ments of CIL, the optimal hyperparameters (such as the
value of hard threshold and the choice of activation func-
tions) vary for different classes as well as in different incre-
mental phases. Our adaptive version CIM tackles this by pa-
rameterizing a mask generation model and optimizing it in
an end-to-end manner across all incremental phases. In each
phase, the learned CIM model adaptively generates class-
and phase-specific masks. We find that the compressed ex-
emplars based on these masks have stronger representative-
ness, compared to using the conventional CAM.
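For concreteness, the following sketch shows the CAM-based baseline described above — channel-wise weighting, aggregation, normalization, and hard thresholding into a 0-1 mask. The fixed `threshold` is exactly the hyperparameter that CIM replaces with a learned, class- and phase-specific choice; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def cam_mask(feature_maps, class_weights, threshold=0.5):
    """
    Build a binary mask of discriminative pixels from a class activation map.
    feature_maps:  (C, H, W) activations from the last convolutional block.
    class_weights: (C,) classifier weights for the image's ground-truth class.
    """
    # Channel-wise multiplication and aggregation over channels.
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    # Normalize to [0, 1].
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Hard thresholding yields the 0-1 mask; pixels outside the mask are the
    # ones that would be downsampled when compressing the exemplar.
    return (cam >= threshold).astype(np.uint8)

# Usage sketch: mask = cam_mask(features, fc_weights[label])
```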
Technically, we have two models to optimize, i.e., the
CIL model and the CIM model.2 These two cannot be
optimized separately as they are dependent on computa-
tion: 1) the CIM model compresses exemplars to input
into the CIL model; 2) the two models share network pa-
rameters. We exploit a global bilevel optimization prob-
lem (BOP) [7, 40] to alternate their training processes at
two levels. This BOP goes through all incremental train-
ing phases. In particular, for each phase, we perform a lo-
cal BOP with two steps to tune the parameters of the CIM
model: 1) a temporary model is trained with the compressed exemplars as input; and 2) a validation loss on the uncompressed new data is computed and the gradients are backpropagated to optimize the parameters of CIM.
1 Note that we do not use mask labels to do image compression because storing them is expensive. Instead, we expand the mask to a bounding box, as elaborated in Section 4.
2 Note that the CIM model is actually a plug-in branch in the CIL model, which is detailed in Section 4.2.
To evalu-
ate CIM, we conduct extensive experiments by plugging
it into recent CIL methods,3 LUCIR [17], DER [46], and
FOSTER [42], on three high-resolution benchmarks, Food-
101 [3], ImageNet-100 [17], and ImageNet-1000 [9]. We
find that using the compressed exemplars by CIM brings
consistent and significant improvements, e.g., 4.2%and
4.8%higher than the SOTA method FOSTER [42], respec-
tively, in the 5-phase and 10-phase settings of ImageNet-
1000, with a total memory budget for 5kexemplars.
|
Oh_BlackVIP_Black-Box_Visual_Prompting_for_Robust_Transfer_Learning_CVPR_2023 | Abstract
With the surge of large-scale pre-trained models (PTMs),
fine-tuning these models to numerous downstream tasks be-
comes a crucial problem. Consequently, parameter effi-
cient transfer learning (PETL) of large models has grasped
huge attention. While recent PETL methods showcase im-
pressive performance, they rely on optimistic assumptions:
1) the entire parameter set of a PTM is available, and 2)
a sufficiently large memory capacity for the fine-tuning is
equipped. However, in most real-world applications, PTMs
are served as a black-box API or proprietary software with-
out explicit parameter accessibility. Besides, it is hard to
meet a large memory requirement for modern PTMs. In
this work, we propose black-box visual prompting (Black-
VIP), which efficiently adapts the PTMs without knowl-
edge about model architectures and parameters. Black-
VIP has two components; 1) Coordinator and 2) simul-
taneous perturbation stochastic approximation with gradi-
ent correction (SPSA-GC). The Coordinator designs input-
dependent image-shaped visual prompts, which improves
few-shot adaptation and robustness on distribution/location
shift. SPSA-GC efficiently estimates the gradient of a tar-
get model to update Coordinator. Extensive experiments
on 16 datasets demonstrate that BlackVIP enables robust
adaptation to diverse domains without accessing PTMs’
parameters, with minimal memory requirements. Code:
https://github.com/changdaeoh/BlackVIP
| 1. Introduction
Based on their excellent transferability, large-scale pre-
trained models (PTMs) [7, 17, 54] have shown remarkable
success on tasks from diverse domains and absorbed in-
creasing attention in machine learning communities. By
witnessing PTMs’ success, Parameter-Efficient Transfer
Learning (PETL) methods that efficiently utilize the PTMs
Figure 1. While FT updates the entire model, VP has a small num-
ber of parameters in the input pixel space. However, VP still re-
quires a large memory capacity to optimize the parameters through
backpropagation. Moreover, FT and VP are only feasible if the
PTM’s parameters are accessible. Meanwhile, BlackVIP does not
assume the parameter-accessibility by adopting a black-box opti-
mization (SPSA-GC) algorithm rather than relying on backpropa-
gation. Besides, BlackVIP reparameterizes the visual prompt with
a neural network and optimizes tiny parameters with SPSA-GC.
Based on the above properties, BlackVIP can be widely adopted
in realistic and resource-limited transfer learning scenarios.
are recently emerging. While the standard fine-tuning (FT)
and its advanced variants [38, 73] update the entire or large
portion of a PTM [16], PETL methods aim to achieve com-
parable performance to FT by optimizing a small number of
learnable parameters.
Among them, prompt-based approaches [3,5,33,40,41]
have been widely investigated from diverse research areas.
For vision PTMs, Visual Prompt Tuning [33] injects a few
additional learnable prompt tokens inside of ViT’s [17] lay-
ers or embedding layer and only optimizes them. Bahng
et al. [3] investigate visual prompting (VP), which adopts
the learnable parameters on input pixel space as a visual
prompt, while no additional modules are inserted into the
pre-trained visual model. Besides, prompt learning meth-
ods for VLM are also actively studied [35, 78, 81, 83].
While existing PETL methods show impressive perfor-
mance with few learnable parameters, they rely on two
optimistic assumptions. First, the previous PETL as-
sumes that the full parameters of the PTM are accessible.
However, many real-world AI applications are served as
API and proprietary software, and they do not reveal the
implementation-level information or full parameters due to
commercial issues, e.g., violating model ownership. As a
result, exploiting high-performing PTMs to specific down-
stream tasks not only in the white-box setting but also
black-box setting (limited accessibility to the model’s de-
tail) is a crucial but unexplored problem. Second, exist-
ing methods require a large memory capacity. While PETL
approaches have few learnable parameters, they require a
large amount of memory for backpropagating the gradient
throughout the large-scale PTM parameters to learnable pa-
rameters. Therefore, users who want to adopt a large-scale
PTM should satisfy large memory requirements despite the
small learnable parameters. Besides, if the users entrust
PTM fine-tuning to the model owner with their specific data,
data-privacy concerns will inevitably arise [74].
To alleviate the above unrealistic assumptions, we
are pioneering black-box visual prompting (BlackVIP)
approach, which enables the parameter-efficient transfer
learning of pre-trained black-box vision models from the
low-resource user perspective (illustrated in Figure 1).
BlackVIP works based on the following two core compo-
nents: 1) pixel space input-dependent visual prompting and
2) a stable zeroth-order optimization algorithm.
Firstly, we augment an input image by attaching an vi-
sual prompt per pixel. It is noted that input space prompt-
ing does not require access to parts of the architec-
ture [37, 78] or the first embedding layer [35, 81, 83] of
PTM. While the previous works only introduce a pixel-level
prompt to a small fraction of the fixed area, such as out-
side of the image [3], BlackVIP designs the prompt with the
same shape as the original given image to cover the entire
image view. Therefore, our prompt has a higher capability
and can flexibly change the semantics of the original image.
In addition, we reparameterize the prompt with a neural net-
work. Specifically, we propose the Coordinator , an asym-
metric autoencoder-style network that receives the original
image and produces a corresponding visual prompt for each
individual image. As a result, Coordinator automatically
designs each prompt conditioned on the input rather than
the shared manual design of a previous work [3]. By opti-
mizing the reparameterized model instead of the prompt it-
self, we greatly reduce the number of parameters (from 69K
of VP [3] to 9K) so that suitable for black-box optimization.
Next, unlike other PETL approaches, BlackVIP adopts a
zeroth-order optimization (ZOO) that estimates the zeroth-
order gradient for the coordinator update to relax the as-
sumption that requires access to the huge PTM parame-
ters to optimize the prompt via backpropagation. There-
fore, BlackVIP significantly reduces the required mem-ory for fine-tuning. Besides, we present a new ZOO al-
gorithm, Simultaneous Perturbation Stochastic Approx-
imation with Gradient Correction (SPSA-GC) based on
SPSA [58]. SPSA-GC first estimates the gradient of the
target black-box model based on the output difference of
perturbed parameters and then corrects the initial estimates
in a momentum-based look-ahead manner. By integrating
the Coordinator and SPSA-GC, BlackVIP achieves signifi-
cant performance improvement over baselines.
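A minimal sketch of a zeroth-order update in this spirit is given below: a two-sided SPSA gradient estimate from black-box loss queries, followed by a momentum-based look-ahead correction. It is an illustration under assumptions (NumPy parameters, a scalar `loss_fn`, a Nesterov-style correction) and not the exact SPSA-GC update rule.

```python
import numpy as np

def spsa_step(loss_fn, theta, momentum, lr=0.01, c=0.01, beta=0.9, rng=None):
    """
    One zeroth-order update of the Coordinator parameters `theta`.
    `loss_fn` only needs black-box query access to the target model.
    """
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)     # Rademacher perturbation
    # Two-sided finite-difference estimate of the gradient (1/delta == delta here).
    g_hat = (loss_fn(theta + c * delta) - loss_fn(theta - c * delta)) / (2 * c) * delta
    # Momentum with a Nesterov-style look-ahead correction of the estimate.
    momentum = beta * momentum + g_hat
    theta = theta - lr * (g_hat + beta * momentum)
    return theta, momentum

# Example black-box objective: squared distance to an unknown optimum.
target = np.full(8, 3.0)
loss = lambda th: float(np.sum((th - target) ** 2))
theta, m = np.zeros(8), np.zeros(8)
for _ in range(500):
    theta, m = spsa_step(loss, theta, m)
```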
Our main contributions are summarized as follows:
• To our best knowledge, this is the first paper that ex-
plores the input-dependent visual prompting on black-
box settings. For this, we devise Coordinator, which
reparameterizes the prompt as an autoencoder to han-
dle the input-dependent prompt with tiny parameters.
• We propose a new ZOO algorithm, SPSA-GC, that
gives look-ahead corrections to the SPSA’s estimated
gradient resulting in boosted performance.
• Based on Coordinator and SPSA-GC, BlackVIP
adapts the PTM to downstream tasks without parame-
ter access and large memory capacity. We extensively
validate BlackVIP on 16 datasets and demonstrate its
effectiveness regarding few-shot adaptability and ro-
bustness on distribution/object-location shift.
|
Noh_Disentangled_Representation_Learning_for_Unsupervised_Neural_Quantization_CVPR_2023 | Abstract
The inverted index is a widely used data structure to
avoid the infeasible exhaustive search. It accelerates retrieval significantly by splitting the database into multiple disjoint sets and restricting distance computation to a small fraction of the database. Moreover, it even improves search quality by allowing quantizers to exploit the compact distribution of the residual vector space. However, we first point out a problem: an existing deep learning-based quantizer hardly benefits from the residual vector space, unlike conventional shallow quantizers. To cope with this problem, we introduce a novel disentangled representation learning for unsupervised neural quantization. Similar to the concept of residual vector space, the proposed method enables a more compact latent space by disentangling the information of the inverted index from the vectors. Experimental results on large-scale datasets confirm that our method outperforms the state-of-the-art retrieval systems by a large margin.
| 1. Introduction
Measuring the distances among feature vectors is a fun-
damental requirement in various fields of computer vision. One of the tasks most relevant to distance measurement is
the nearest neighbor search, which finds the closest data
in the database from a query. The task is especially chal-
lenging in high-dimensional and large-scale databases due to huge computational costs and memory overhead.
By relaxing the complexity, Approximate Nearest
Neighbor (ANN) search is popular in practice. Recent ap-proaches for ANN typically learn the compact representa-
tion by exploiting Multi-Codebook Quantization (MCQ) [2, 9, 16]. Compared to hashing-based approaches [1, 10, 12], the MCQ provides a more informative asymmetric distance estimator where the query side is not compressed. Moreover, all possible distances between the query and codewords can be stored in a lookup table for efficiency.
Although the MCQ accelerates the distance computation
with the lookup table, exhaustive search on the large-scale
dataset is still prohibited. The Inverted File with Asymmetric Distance Computation (IVFADC) [16] is proposed for non-exhaustive ANN search by cooperating with the inverted index [30]. It splits the database into multiple disjoint
sets and restricts distance computations to small portions
close to the query to accelerate the retrieval speed. Moreover, the compactness of the residual vector space between data points and inverted indices substantially enhances the quantization quality.
Thanks to the rapid advances in deep learning, most ar-
eas of computer vision benefit from its great learning capacity compared to shallow methods. However, the state-of-the-art methods of unsupervised quantization have remained shallow for a long time because selecting the maximum value (i.e., argmax), which is an essential operation of quantization, is not differentiable. Inspired by a recent gener-
(i.e. argmax), which is an essential operation of quanti-zation, is not differentiable. Inspired by a recent gener-
ative model with discrete hidden variables [ 35], the Un-
supervised Neural Quantization (UNQ) [ 23] introduced an
encoder-decoder-based architecture for ANN search. Thelarge learning capacity of deep neural architecture signifi-
cantly improves the retrieval quality compared to conven-
tional shallow methods.
Despite the outperforming performance of the UNQ, its
superiority is validated only on the exhaustive search. Toverify its effectiveness on non-exhaustive search, we con-
duct an experiment of non-exhaustive UNQ with an inverted
index. Interestingly, we observe that this deep architecturedoes not benefit from the residual vector space and it evenharms the search quality as reported in Table 1. We hypoth-
esize the reasons for this performance degradation from twoperspectives. First, both the residual vector space and la-tent space of the neural network transform the data into aquantization-friendly distribution, thus deep quantizer hasa scant margin to be improved by the residual space. Sec-ond, residual space sacrifices the distributional characteris-tics of each cluster, since the information of cluster centerin the original space is removed. For conventional shallowquantizers, the drawback of residual space is obscured byits huge advantage of making a compact distribution. How-ever, deep quantizer only takes the disadvantages (informa-tion loss) from residual space without leveraging the effec-
tiveness such as compactness of residuals.
In this paper, we focus on extending the application
of deep architectures to non-exhaustive search. To this end, we learn a disentangled representation to harmonize a deep architecture with the inverted index, inspired by recent representation learning techniques for generative models [8, 34]. In our disentangled representation learning, both the encoder and the decoder receive the information of the cluster center as an additional input. Since the information of the cluster center is redundant to the decoder if the latent feature contains it, the encoder is trained to remove the information of cluster centers from the latent embedding. This disentangled representation learning is similar to the concept of the residual vector space, which provides a more compact distribution by taking out the information of cluster centers. The experimental results verify that learning the disentangled representation enables neural quantization to collaborate with the inverted index and outperforms the state-of-the-art methods.
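A minimal sketch of this conditioning is shown below: both the encoder and the decoder receive the coarse cluster center, so reconstruction pressure alone discourages the latent from re-encoding it. The module name and layer sizes are assumptions, and codebook quantization of the latent is omitted for brevity; this is not the paper's architecture.

```python
import torch
import torch.nn as nn

class DisentangledQuantizerSketch(nn.Module):
    """
    Toy encoder-decoder in which both sides are conditioned on the coarse
    cluster center c(x) from the inverted index, pushing the latent code to
    carry only the information that the center does not already explain.
    """
    def __init__(self, dim=128, latent=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent + dim, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x, center):
        z = self.encoder(torch.cat([x, center], dim=-1))   # center info is redundant here
        x_hat = self.decoder(torch.cat([z, center], dim=-1))
        return x_hat, z

# Usage sketch:
x, c = torch.randn(4, 128), torch.randn(4, 128)
x_hat, z = DisentangledQuantizerSketch()(x, c)
```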
The contributions of our paper include:
• We point out that the residual encoding of the inverted
index is incompatible with the neural multi-codebook quantization method.
• We propose a novel disentangled representation learn-
ing for neural multi-codebook quantization to combine deep quantization and the inverted index.
• The experimental results show that the proposed
method outperforms the state-of-the-art retrieval sys-
tems by a large margin.
|
Olber_Detection_of_Out-of-Distribution_Samples_Using_Binary_Neuron_Activation_Patterns_CVPR_2023 | Abstract
Deep neural networks (DNN) have outstanding perfor-
mance in various applications. Despite numerous efforts of
the research community, out-of-distribution (OOD) samples
remain a significant limitation of DNN classifiers. The abil-
ity to identify previously unseen inputs as novel is crucial
in safety-critical applications such as self-driving cars, un-
manned aerial vehicles, and robots. Existing approaches to
detect OOD samples treat a DNN as a black box and evalu-
ate the confidence score of the output predictions. Unfortu-
nately, this method frequently fails, because DNNs are not
trained to reduce their confidence for OOD inputs. In this
work, we introduce a novel method for OOD detection. Our
method is motivated by theoretical analysis of neuron ac-
tivation patterns (NAP) in ReLU-based architectures. The
proposed method does not introduce a high computational
overhead due to the binary representation of the activation
patterns extracted from convolutional layers. The extensive
empirical evaluation proves its high performance on vari-
ous DNN architectures and seven image datasets.
| 1. Introduction
Even the most efficient deep neural network (DNN) ar-
chitectures, designed for image recognition tasks, cannot
ensure that they will not malfunction during their opera-
tion. Thus, deployment of those safety-critical applications,
such as in self-driving cars, unmanned aerial vehicles, and
robots, is still an unresolved problem [9, 35]. The use of
safety mechanisms, such as runtime monitors, is a viable
strategy to keep the system in a safe state despite of DNN
failure. The design and development of such monitors in the
context of safety-critical applications is a significant chal-
lenge [10,48]. Therefore, it is required to define robust met-
rics that can allow to detect and control of DNN’s failures
at runtime and mitigate potential hazards caused by their
performance limitations.
DNNs are trained over a set of inputs sampled from real-world scenarios. However, due to the large variation of the
input images, the training dataset cannot contain all possi-
ble variants of input samples. Although it is expected that
the trained models can perform well on unknown inputs, es-
pecially those that are similar to the training data, it cannot
be guaranteed that they will perform well for OOD noisy
samples that present the objects not considered before [15].
While DNN training techniques should allow a network to
achieve high generalization capabilities, it is also crucial to
ensure the dependability of safety-critical systems to train
a model so that any outlying input will result in low confi-
dence of the network’s decision.
The fundamental challenge to ensure the safety of DNNs
is to estimate if a given input sample comes from the same
data distribution for which a DNN was trained. This is very
hard to estimate because the network usually extrapolates
its decision while receiving new image samples. Another
cause of the incorrect recognition of outlying samples can
be the distributional shift of input data over time (e.g. the
time-dependent variations of an object’s appearance) [19].
In the vast literature, this problem has been formulated
as a problem of detecting whether input data are from an
in-distribution (ID) or out-of-distribution (OOD). This has
been studied for many years and discussed in the follow-
ing aspects: sample rejection, anomaly detection, open-set
samples recognition, familiar vs unfamiliar samples or un-
certainty estimation [2, 24, 38].
In this work, we present a novel algorithm for the identi-
fication of OOD image samples. In our method, we extract
the binary neuron activation patterns on various hidden lay-
ers of a DNN and compare them with the ones collected
in the training procedure. By measuring the Hamming dis-
tances between extracted binary patterns of any test sample
and the patterns extracted during the training, we can iden-
tify OOD samples.
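The core computation can be sketched as follows, assuming activations have already been extracted as flat vectors. The function names and the thresholding at zero (natural for ReLU) are illustrative assumptions rather than the exact NAP algorithm.

```python
import numpy as np

def binary_pattern(activations, threshold=0.0):
    """Binarize ReLU activations: 1 where a neuron fires, 0 otherwise."""
    return (activations > threshold).astype(np.uint8)

def min_hamming_distance(test_pattern, train_patterns):
    """
    Smallest Hamming distance between a test sample's pattern and all patterns
    collected on the training set; a large value suggests the input activates a
    combination of neurons never seen for in-distribution data.
    """
    return int(np.min(np.count_nonzero(train_patterns != test_pattern, axis=1)))

# Usage sketch: flag a sample as OOD when the distance exceeds a threshold
# calibrated on held-out in-distribution data.
train = np.random.default_rng(0).integers(0, 2, size=(1000, 256)).astype(np.uint8)
test = binary_pattern(np.random.default_rng(1).normal(size=256))
score = min_hamming_distance(test, train)
```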
The main contributions of this paper are the following:
• We introduce NAP - an algorithm that extracts bi-
nary patterns from both fully connected and convolu-
tional layers and estimates a classifier’s predictive un-
certainty based on the patterns. The proposed method
outperforms state-of-the-art OOD detection methods.
Moreover, the algorithm is straightforward, making it
simple to incorporate into existing DNN architectures.
• We provide an extended empirical evaluation compar-
ing the impact of the activation patterns collected from
different layers of DNN which may inspire future re-
search in this area.
• We publish the largest evaluation framework for OOD
detection. This framework contains 17 OOD methods
(including the proposed NAP-based method) that can
be directly tested on two state-of-the-art DNN archi-
tectures and 7 datasets allowing for simple extension
of the framework for new methods, architectures, and
datasets.1
|
Mu_Progressive_Backdoor_Erasing_via_Connecting_Backdoor_and_Adversarial_Attacks_CVPR_2023 | Abstract
Deep neural networks (DNNs) are known to be vulnera-
ble to both backdoor attacks as well as adversarial attacks.
In the literature, these two types of attacks are commonly
treated as distinct problems and solved separately, since
they belong to training-time and inference-time attacks re-
spectively. However, in this paper we find an intriguing con-
nection between them: for a model planted with backdoors,
we observe that its adversarial examples have similar be-
haviors as its triggered images, i.e., both activate the same
subset of DNN neurons. It indicates that planting a back-
door into a model will significantly affect the model’s ad-
versarial examples. Based on these observations, a novel
Progressive Backdoor Erasing (PBE) algorithm is proposed
to progressively purify the infected model by leveraging un-
targeted adversarial attacks. Different from previous back-
door defense methods, one significant advantage of our ap-
proach is that it can erase backdoor even when the clean
extra dataset is unavailable. We empirically show that,
against 5 state-of-the-art backdoor attacks, our PBE can
effectively erase the backdoor without obvious performance
degradation on clean samples and outperforms existing de-
fense methods.
| 1. Introduction
Deep neural networks (DNNs) have been widely adopted
in many safety-critical applications ( e.g., face recognition
and autonomous driving), thus more attention has been paid
to the security of deep learning. It has been demonstrated
that DNNs are prone to potential threats in both their infer-
ence as well as training phases. Inference-time attack (a.k.a.
adversarial attack [5, 25]) aims to fool a trained model into
making incorrect predictions with small adversarial pertur-
bations. In contrast, training-time attack (a.k.a. backdoor
attack [13]) attempts to plant a backdoor into a model in
the training phase, so that the infected model would mis-
classify the testing images as the target-label whenever a
pre-defined trigger (e.g., several pixels) is embedded into
them ( i.e., triggered testing images).
(a) For benign model. (b) For infected model.
Figure 1. Predicted labels vs. target-labels for 10,000 randomly sampled adversarial examples from CIFAR-10, with respect to benign and infected models. (a) For a benign model, the predicted labels obey a uniform distribution; (b) for infected models under the WaNet backdoor attack [20], the adversarial examples are highly likely to be classified as the target-label (the matrix diagonals).
Figure 2. Illustration of our observations. For benign models,
conducting an untargeted adversarial attack will make an image
move close to any class (e.g., Class 0 or Class 2) in feature space. But for infected models, an adversarial attack will make it move close to the target-label class (e.g., Class l).
Due to the obvious differences between backdoor and
adversarial attacks, they are often treated as two different
problems and solved separately in the literature. But in this
paper, we illustrate that there is an underlying connection
between them, i.e., planting a backdoor into one model will
significantly affect the model’s adversarial examples. More-
over, based on such findings we propose a new method to
defend against backdoor attacks by leveraging adversarial
attack techniques ( i.e., generating adversarial examples).
In particular, we observe that: for a model planted with
backdoors, its adversarial examples have similar behaviors
as its triggered images. This is significantly different from a
benign model without backdoors. Specifically, for a benign
model , the predicted class labels of its adversarial examples
obey a uniform distribution, as shown in Fig.1a. However,
for an infected model , we surprisingly observe that its ad-
versarial examples are highly likely to be predicted as the
backdoor target-label , as shown in Fig.1b. As we know,
triggered images will also be predicted as the backdoor
target-label by an infected model. Therefore, it means that
adversarial examples have similar behaviors as its triggered
images for an infected model. Particularly, these phenom-
ena are present regardless of the target-label, the backdoor
attack setting ( i.e., all-to-one or all-to-all settings), and even
for most trigger embedding mechanisms ( e.g., adding [6],
blending [3] or warping [20]).
To find the underlying reason of such phenomena, we
measure the feature similarity of those adversarial images
and triggered images. Briefly, we find that after planting a
backdoor into one model, the features of adversarial images
change significantly. Particularly, the features of adversar-
ial image ex′are surprisingly very similar to that of triggered
image xt, as illustrated in Fig.2 and Fig.3. It indicates that
both the ex′andxthave similar behaviors, i.e.,both ac-
tivate the same subset of DNN neurons . Note that such
connection between adversarial and backdoor attack could
be leveraged to design backdoor defense methods.
Backdoor attacks made great advances in recent years,
evolved from visible trigger [6] to invisible trigger [3, 16,
20], from poisoning label to clean-label attacks [1]. For ex-
ample, WaNet [20] uses affine transformation as trigger em-
bedding mechanism, which could significant improve the
invisibility of trigger. In contrast, the research on backdoor
defenses lag behind a little. Even for the state-of-the-art
backdoor defense methods [12,14,17], most of them can be
evaded by the advanced modern backdoor attacks. More-
over, a clean extra dataset is often required by those defense
methods to erase backdoor from infected models.
In this paper, we propose a new backdoor defense
method based on the discovered connections between ad-
versarial and backdoor attacks, which could not only de-
fend against modern backdoor attacks but also work with-
out a clean extra dataset. Specifically, at the beginning the
training data (containing poisoning images) are randomly
sampled to build an initial extra dataset. Next, we use them
to purify the infected model by leveraging adversarial at-
tack techniques. And then, the purified model is used to
identify clean images from training data, which are used to
update the extra dataset. With an alternating procedure, the
infected model as well as the extra dataset are progressively
purified. So, we call our approach Progressive Backdoor
Erasing (PBE).
Regarding how to purify the infected model, we gener-
ate adversarial examples and use them to fine-tune the in-
fected model. Since adversarial images could come from
arbitrary class, such fine-tuning procedure works like asso-
ciating triggered images to arbitrary class instead of just the
target class, which breaks the foundation of backdoor at-
tacks ( i.e., building a strong correlation between a trigger
pattern and a target-label [12]). That is why our approach
can erase backdoor from infected models.
As for identifying clean images, since clean images have
similar prediction results for both benign and infected mod-
els, we could effectively identify them by using the previ-
ously obtained purified model. Note that if a clean extra
dataset is available, we can skip the step of purifying extra
dataset, and only run the step of purifying model once.
A big advantage of our approach is that it does not need
the clean extra dataset and it can progressively filter poi-
soning training data to obtain clean data. In our approach,
the purified model could help to obtain clean data, in return
the obtained clean data could help to further purify model.
Thus, the alternating iterations could progressively improve
each other. To the best of our knowledge, our approach is the
first work to defend against backdoor attack without a clean
extra dataset.
Our main contributions are summarized as follows:
•We observe an underlying connection between back-
door attacks and adversarial attacks, i.e., for an in-
fected model, its adversarial examples have similar be-
haviors as its triggered samples. And an theoretical
analysis is given to justify our observation.
•According to our observations, we propose a progres-
sive backdoor defense method, which achieves the
state-of-the-art defensive performance, even when a
clean extra dataset is unavailable.
|
Ohkawa_AssemblyHands_Towards_Egocentric_Activity_Understanding_via_3D_Hand_Pose_Estimation_CVPR_2023 | Abstract
We present AssemblyHands , a large-scale benchmark
dataset with accurate 3D hand pose annotations, to facili-
tate the study of egocentric activities with challenging hand-
object interactions. The dataset includes synchronized ego-
centric and exocentric images sampled from the recent As-
sembly101 dataset, in which participants assemble and dis-
assemble take-apart toys. To obtain high-quality 3D hand
pose annotations for the egocentric images, we develop an
efficient pipeline, where we use an initial set of manual an-
notations to train a model to automatically annotate a much
larger dataset. Our annotation model uses multi-view fea-
ture fusion and an iterative refinement scheme, and achieves
an average keypoint error of 4.20 mm, which is 85% lower
than the error of the original annotations in Assembly101.
AssemblyHands provides 3.0M annotated images, includ-
ing 490K egocentric images, making it the largest existing
benchmark dataset for egocentric 3D hand pose estimation.
Using this data, we develop a strong single-view baseline of
3D hand pose estimation from egocentric images. Further-
more, we design a novel action classification task to evalu-
ate predicted 3D hand poses. Our study shows that having
higher-quality hand poses directly improves the ability to
recognize actions.
| 1. Introduction
Recognizing human activities is a decades-old problem
in computer vision [17]. With recent advancements in user-
assistive augmented reality and virtual reality (AR/VR) sys-
tems, there is an increasing demand for recognizing ac-
tions from the egocentric (first-person) viewpoint. Popu-
lar AR/VR headsets such as Microsoft HoloLens, Magic
Leap, and Meta Quest are typically equipped with egocen-
tric cameras to capture a user’s interactions with the real or
virtual world. In these scenarios, the user’s hands manip-
Figure 1. High-quality 3D hand poses as an effective represen-
tation for egocentric activity understanding. AssemblyHands
provides high-quality 3D hand pose annotations computed from
multi-view exocentric images sampled from Assembly101 [28],
which originally comes with inaccurate annotations computed
from egocentric images (see the incorrect left-hand pose predic-
tion). As we experimentally demonstrate on an action classifica-
tion task, models trained on high-quality annotations achieve sig-
nificantly higher accuracy.
ulating objects is a very important modality of interaction.
In particular, hand poses ( e.g., 3D joint locations) play a
central role in understanding and enabling hand-object in-
teraction [3, 18], pose-based action recognition [7, 20, 28],
and interactive interfaces [10, 11].
Recently, several large-scale datasets for understanding
egocentric activities have been proposed, such as EPIC-
KITCHENS [5], Ego4D [8], and Assembly101 [28]. In par-
ticular, Assembly101 highlights the importance of 3D hand
poses in recognizing procedural activities such as assem-
bling toys. 3D hand poses are compact representations, and
are highly indicative of actions and even the objects that are
interacted with– for example, the “screwing” hand motion is
Figure 2. Construction of AssemblyHands dataset and a benchmark task for egocentric 3D hand pose estimation. We first use
manual annotations and an automatic annotation network (MVExoNet) to generate accurate 3D hand poses for multi-view images sampled
from the Assembly101 dataset [28]. These annotations are used to train a single-view 3D hand pose estimation network (SVEgoNet) from
egocentric images. Finally, the predicted hand poses are evaluated by the action classification task.
a strong cue for the presence of a screwdriver. Notably, the
authors of Assembly101 found that, for classifying assem-
bly actions, learning from 3D hand poses is more effective
than solely using video features. However, a drawback of
this study is that the 3D hand pose annotations in Assem-
bly101 are not always accurate, as they are computed from
an off-the-shelf egocentric hand tracker [11]. We observed
that the provided poses are often inaccurate (see Fig. 1), es-
pecially when hands are occluded by objects from the ego-
centric perspective. Thus, the prior work has left us with
an unresolved question: How does the quality of 3D hand
poses affect action recognition performance?
To systematically answer this question, we propose a
new benchmark dataset named AssemblyHands . It in-
cludes a total of 3.0M images sampled from Assembly101,
annotated with high-quality 3D hand poses. We not only
acquire manual annotations, but also use them to train an
accurate automatic annotation model that uses multi-view
feature fusion from exocentric ( i.e., third-person) images;
please see Fig. 2 for an illustration. Our model achieves
4.20 mm average keypoint error compared to manual anno-
tations, which is 85% lower than the original annotations
provided in Assembly101. This automatic pipeline enables
us to efficiently scale annotations to 490K egocentric im-
ages from 34 subjects, making AssemblyHands the largest
egocentric hand pose dataset to date, both in terms of scale
and subject diversity. Compared to recent hand-object inter-
action datasets, such as DexYCB [3] and H2O [18], our As-
semblyHands features significantly more hand-object com-
binations, as each multi-part toy can be disassembled and
assembled at will.
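As an illustration of the multi-view annotation stage mentioned above, a minimal
Python sketch of lifting per-view 2D keypoints to 3D by direct linear transform
(DLT) triangulation follows. The camera matrices and detections are hypothetical
placeholders; this is not the MVExoNet interface, which additionally performs
multi-view feature fusion and iterative refinement.

import numpy as np

def triangulate_joint(proj_mats, points_2d):
    # proj_mats: list of 3x4 projection matrices, one per exocentric camera.
    # points_2d: list of (u, v) detections of the same hand joint in each view.
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])  # each view contributes two linear constraints
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean 3D joint position

Re-projecting the triangulated joint and discarding views with large residuals
before triangulating again is one simple way to realize the iterative refinement
indicated in Fig. 2.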
Given the annotated dataset, we first develop a strong
baseline for egocentric 3D hand pose estimation, using
2.5D heatmap optimization and hand identity classification.
Then, to evaluate the effectiveness of predicted hand poses,
we propose a novel evaluation scheme: action classification
from hand poses. Unlike prior benchmarks on egocentric
hand pose estimation [7, 18, 24], we offer detailed analysis
of the quality of 3D hand pose annotation, its influence on
the performance of an egocentric pose estimator, and the
utility of predicted poses for action classification.
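A common way to realize the 2.5D heatmap formulation mentioned above is a
differentiable soft-argmax over per-joint likelihood and depth maps. The PyTorch
sketch below assumes tensor shapes and a root-relative depth convention that are
illustrative only, not the released SVEgoNet code.

import torch
import torch.nn.functional as F

def soft_argmax_25d(heatmaps, depth_maps):
    # heatmaps:   (B, J, H, W) per-joint 2D likelihood maps.
    # depth_maps: (B, J, H, W) per-pixel root-relative depth predictions.
    B, J, H, W = heatmaps.shape
    prob = F.softmax(heatmaps.view(B, J, -1), dim=-1).view(B, J, H, W)
    xs = torch.arange(W, dtype=prob.dtype, device=prob.device)
    ys = torch.arange(H, dtype=prob.dtype, device=prob.device)
    x = (prob.sum(dim=2) * xs).sum(dim=-1)       # expected u coordinate
    y = (prob.sum(dim=3) * ys).sum(dim=-1)       # expected v coordinate
    z = (prob * depth_maps).sum(dim=(2, 3))      # expected root-relative depth
    return torch.stack([x, y, z], dim=-1)        # (B, J, 3) 2.5D hand joints

The decoded 2.5D joints can then be back-projected with camera intrinsics and
passed to a pose-based action classifier for the evaluation described above.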
Our contributions are summarized as follows:
• We offer a large-scale benchmark dataset, dubbed
AssemblyHands, with 3D hand pose annotations for
3.0M images sampled from the Assembly101 dataset,
including 490K egocentric images.
• We propose an automatic annotation pipeline with
multi-view feature fusion and iterative refinement,
leading to 85% error reduction in the hand pose an-
notations.
• We define a benchmark task for egocentric 3D hand
pose estimation with the evaluation from action classi-
fication. We provide a strong single-view baseline that
optimizes 2.5D keypoint heatmaps and classifies hand
identity. Our results confirm that having high-quality
3D hand poses significantly improves egocentric ac-
tion recognition performance.
|
Mao_Doubly_Right_Object_Recognition_A_Why_Prompt_for_Visual_Rationales_CVPR_2023 | Abstract
Many visual recognition models are evaluated only on
their classification accuracy, a metric for which they obtain
strong performance. In this paper, we investigate whether
computer vision models can also provide correct rationales
for their predictions. We propose a “doubly right” ob-
ject recognition benchmark, where the metric requires the
model to simultaneously produce both the right labels as
well as the right rationales. We find that state-of-the-art
visual models, such as CLIP , often provide incorrect ratio-
nales for their categorical predictions. However, by trans-
ferring the rationales from language models into visual rep-
resentations through a tailored dataset, we show that we
can learn a “why prompt, ” which adapts large visual repre-
sentations to produce correct rationales. Visualizations and
empirical experiments show that our prompts significantly
improve performance on doubly right object recognition, in
addition to zero-shot transfer to unseen tasks and datasets.
| 1. Introduction
Computer vision models today are able to achieve high
accuracy – sometimes super-human – at correctly recogniz-
ing objects in images. However, most models today are not
evaluated on whether they get the prediction right for the
right reasons [14, 19, 48, 53]. Learning models that can ex-
plain their own decision is important for building trustwor-
thy systems, especially in applications that require human-
machine interactions [2, 15, 37, 50]. Rationales that justify
the prediction can largely improve user trust [54], which is
a crucial metric that the visual recognition field should push
forward in the future.
Existing methods in interpretability have investigated
how to understand which features contribute to the mod-
els’ prediction [33,34,44,46,47,52,60]. However, saliency
explanations are often imprecise, require domain expertise
to understand, and also cannot be evaluated. [20, 22] have
instead explored verbal rationales to justify the decision-
making. However, they require manual collections of the
Figure 1. Visual reasoning for doubly right object recognition task.
Motivated by prompting in NLP, we learn a why prompt from mul-
timodal data, which allows us to instruct visual models to predict
both the right category and the correct rationales that justify the
prediction.
plausible rationales in the first place, which subsequently
are limited to small-scale datasets and tasks [24, 56].
Scalable methods for explainability have been developed
in natural language processing (NLP) through prompting .
By adding additional instructions to the input, such as the
sentence “think step-by-step,” language models then output
descriptions of their reasoning through the chain of thought
process [57]. Since the explanations are verbal, they are
easily understandable by people, and since the mechanism
emerges without explicit supervision, it is highly scalable.
In this paper, we investigate whether visual representations
can also explain their reasoning through visual chain-of-
thought prompts.
Our paper first introduces a benchmark for doubly right
object recognition, where computer vision models must pre-
dict both correct categorical labels as well as correct ra-
tionales. Our benchmark is large, and covers many cate-
gories and datasets. We found that the visual representa-
tions do not have double right capability out-of-the-box on
our benchmark. The recent large-scale image-language pre-
trained models [41, 49] can retrieve open-world language
descriptions that are closest to the image embedding in the
feature space, serving as verbal explanations. However, the
models often select the wrong rationales.
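Concretely, this retrieval step amounts to ranking candidate rationale sentences
by their cosine similarity to the image embedding. A minimal sketch with the
open-source CLIP package is given below; the image path and candidate rationales
are illustrative placeholders, not items from the benchmark.

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder image
rationales = [
    "a photo of a greek salad because there are diced tomatoes and onions",
    "a photo of a greek salad because it is served on a wooden table",
]
tokens = clip.tokenize(rationales).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(tokens)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    scores = (img_feat @ txt_feat.t()).squeeze(0)   # cosine similarity per rationale

retrieved = rationales[scores.argmax().item()]      # highest-scoring rationale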
Instead, we propose a framework to explicitly trans-
fer the chain-of-thought reasoning from NLP models into
vision models. We first query the large-scale language
model [9] via the chain-of-thought reasoning for object cat-
egory, where we obtain language rationales that explain dis-
criminative features for an object. We then collect images
containing both the category and the rationale features us-
ing Google image search. We then train visual prompts
to transfer the verbal chain of thought to visual chain of
thought with contrastive learning, where features of im-
ages and their rationales are pulled together. Our “why”
prompts obtain up to 26 points gain at doubly right per-
formance when evaluated on our benchmark. In addition,
visualizations and quantitative results show that our why
prompts zero-shot transfer to unseen tasks and datasets. We
believe this “doubly right” object recognition task is a di-
rection that the visual recognition field should pursue. Our
data and code are available at
https://github.com/cvlab-columbia/DoubleRight.
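A bare-bones sketch of the prompt-training idea follows: a learnable pixel-space
prompt is added to every image, a frozen CLIP-style model (exposing encode_image
and encode_text) embeds the prompted image and its paired rationale, and a
contrastive loss pulls matching pairs together. Shapes, hyper-parameters, and the
frozen-encoder wrapper are assumptions and do not reflect the released training code.

import torch
import torch.nn.functional as F

class WhyPrompt(torch.nn.Module):
    # A learnable perturbation added to every input image (a pixel-space prompt).
    def __init__(self, image_size=224):
        super().__init__()
        self.delta = torch.nn.Parameter(torch.zeros(1, 3, image_size, image_size))

    def forward(self, images):
        return images + self.delta

def contrastive_step(clip_model, prompt, images, rationale_tokens, optimizer, tau=0.07):
    # images: (B, 3, H, W); rationale_tokens: tokenized rationale for each image.
    img = F.normalize(clip_model.encode_image(prompt(images)), dim=-1)
    txt = F.normalize(clip_model.encode_text(rationale_tokens), dim=-1)
    logits = img @ txt.t() / tau
    labels = torch.arange(images.size(0), device=images.device)
    loss = F.cross_entropy(logits, labels)  # pull each image toward its own rationale
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Only the prompt parameters would be optimized, so the optimizer is built over the
WhyPrompt parameters while the image-text encoder stays frozen.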
|
Pang_Standing_Between_Past_and_Future_Spatio-Temporal_Modeling_for_Multi-Camera_3D_CVPR_2023 | Abstract
This work proposes an end-to-end multi-camera 3D
multi-object tracking (MOT) framework. It emphasizes
spatio-temporal continuity and integrates both past and fu-
ture reasoning for tracked objects. Thus, we name it “Past-
and-Future reasoning for Tracking” (PF-Track). Specifi-
cally, our method adopts the “tracking by attention” frame-
work and represents tracked instances coherently over time
with object queries. To explicitly use historical cues, our
“Past Reasoning” module learns to refine the tracks and
enhance the object features by cross-attending to queries
from previous frames and other objects. The “Future Rea-
soning” module digests historical information and predicts
robust future trajectories. In the case of long-term occlu-
sions, our method maintains the object positions and en-
ables re-association by integrating motion predictions. On
the nuScenes dataset, our method improves AMOTA by a
large margin and remarkably reduces ID-Switches by 90%
compared to prior approaches, which is an order of mag-
nitude less. The code and models are made available at
https://github.com/TRI-ML/PF-Track.
| 1. Introduction
Reasoning about object trajectories in 3D is the cor-
nerstone of autonomous navigation. While many LiDAR-
based approaches exist [36, 58, 63], their applicability is
limited by the cost and reliability of the sensor. Detecting,
tracking, and forecasting object trajectories only with cam-
eras is hence a critical problem. Significant progress has
been achieved on these tasks separately, but they have been
historically primarily studied in isolation and combined into
a full-stack pipeline in an ad-hoc fashion.
In particular, 3D detection has attracted a lot of atten-
tion [20,24,25,28,53], but associating these detections over
time has been mostly done independently from localiza-
*Work done while interning at Toyota Research Institute.
†Corresponding to Ziqi Pang at ziqip2@illinois.edu and Yu-
Xiong Wang at yxw@illinois.edu .
Figure 1. We visualize the output of our model by projecting pre-
dicted 3D bounding boxes onto images. In the beginning, image-
based detection can be inaccurate ( t= 0) due to depth ambiguity.
With “Past Reasoning,” the bounding box quality ( t=t1) gradu-
ally improves by leveraging historical information. With “Future
Reasoning,” our PF-Track predicts the long-term motions of ob-
jects and maintains their states even under occlusions ( t=t2) and
camera switches. This enables re-association without explicit re-
identification ( t=T), as the object ID does not switch. Our PF-
Track further combines past and future reasoning in a joint frame-
work to improve spatio-temporal coherence.
tion [19, 31, 43]. Recently, a few approaches to end-to-
end detection and tracking have been proposed, but they
operate on neighboring frames and fail to integrate longer-
term spatio-temporal cues [7, 12, 33, 65]. In the predic-
tion literature, on the other hand, it is common to assume
the availability of ground truth object trajectories and HD-
Maps [4,8,11,59]. A few attempts for a more realistic eval-
uation have been made [16, 21], focusing only on the pre-
diction performance.
In this paper, we argue that multi-object tracking can be
dramatically improved by jointly optimizing the detection-
tracking-prediction pipeline, especially in a camera-based
system. We provide an intuitive example from our real-
world experiment in Fig. 1. At first, the pedestrian is fully
visible, but a model with only single-frame information
makes a prediction with large deviation (frame t= 0 in
Fig. 1). After this, integrating the temporal information
from the past gradually corrects the error over time (frame
t=t1in Fig. 1), by capitalizing on the notion of spatio-
temporal continuity. Moreover, as the pedestrian becomes
fully occluded (frame t=t2in Fig. 1), we can still pre-
dict their location by using the aggregated past informa-
tion to estimate a future trajectory. Finally, we can suc-
cessfully track the pedestrian on re-appearance even on a
different camera via long-term prediction, resulting in cor-
rect re-association (frame t=Tin Fig. 1). The above ro-
bust spatio-temporal reasoning is enabled by seamless, bi-
directional integration of past and future information, which
starkly contrasts with the mainstream pipelines for vision-
based, multi-camera, 3D multi-object tracking (3D MOT).
To this end, we propose an end-to-end framework for
joint 3D object detection, tracking, and trajectory predic-
tion for the task of 3D MOT, as shown in Fig. 2, adopting
the “tracking by attention” [34,64,65] paradigm. Compared
to our closest baseline under the same paradigm [65], we are
different in explicit past and future reasoning: a 3D object
query consistently represents the object over time, propa-
gates the spatio-temporal information of the object across
frames, and generates the corresponding bounding boxes
and future trajectories. To exploit spatio-temporal cues, our
algorithm leverages simple attention operations to capture
object dynamics and interactions, which are then used for
track refinement and robust, long-term trajectory prediction.
Finally, we close the loop by integrating predicted trajecto-
ries back into the tracking module to replace missing detec-
tions ( e.g., due to an occlusion). To highlight the capabil-
ity of joint past and future reasoning, our method is named
“Past-and-Future reasoning for Tracking” (PF-Track).
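The propagate-refine-predict loop described above can be summarized by the
following highly simplified Python sketch. The module handles (cross_attn,
box_head, motion_head), the feature shapes, and the confidence-based occlusion
test are placeholders for illustration only; they do not mirror the released
PF-Track code.

import torch

def track_step(queries, frame_feat, cross_attn, box_head, motion_head, conf_thr=0.3):
    # queries:    (N, C) object queries propagated from the previous frame.
    # frame_feat: (M, C) image features of the current multi-camera frame.
    refined = cross_attn(queries, frame_feat)      # past reasoning: refine tracked objects
    boxes, scores = box_head(refined)              # (N, 7) 3D boxes and (N,) confidences
    future = motion_head(refined)                  # (N, T, 3) predicted future centers
    occluded = scores < conf_thr                   # low confidence ~ missing detection
    boxes = boxes.clone()
    boxes[occluded, :3] = future[occluded, 0]      # coast occluded tracks on the prediction
    return refined, boxes, future                  # queries are carried to the next frame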
We provide a comprehensive evaluation of PF-Track on
nuScenes [4] and demonstrate that joint modeling of past
and future information provides clear benefits for object
tracking. In particular, PF-Track decreases ID-Switches
by over 90% compared to previous multi-camera 3D MOT
methods.
To summarize, our contributions are as follows.
1. We propose an end-to-end vision-only 3D MOT frame-
work that utilizes object-level spatio-temporal reasoning
for both past and future information.
2. Our framework improves the quality of tracks by cross-
attending to features from the “past.”
3. We propose a joint tracking and prediction pipeline,
whose constituent part is “Future Reasoning”, and
demonstrate that tracking can explicitly benefit from
long-term prediction into the “future.”
4. Our method establishes new state-of-the-art on large-
scale nuScenes dataset [4] with significant improvement
for both AMOTA and ID-Switch.
|
Li_StyleGene_Crossover_and_Mutation_of_Region-Level_Facial_Genes_for_Kinship_CVPR_2023 | Abstract
High-fidelity kinship face synthesis has many potential
applications, such as kinship verification, missing child
identification, and social media analysis. However, it
is challenging to synthesize high-quality descendant faces
with genetic relations due to the lack of large-scale, high-
quality annotated kinship data. This paper proposes RFG
(Region-level Facial Gene) extraction framework to address
this issue. We propose to use IGE (Image-based Gene En-
coder), LGE (Latent-based Gene Encoder) and Gene De-
coder to learn the RFGs of a given face image, and the
relationships between RFGs and the latent space of Style-
GAN2. As cycle-like losses are designed to measure the L2
distances between the output of Gene Decoder and image
encoder, and that between the output of LGE and IGE, only
*Corresponding Author
face images are required to train our framework, i.e., no
paired kinship face data is required. Based upon the pro-
posed RFGs, a crossover and mutation module is further
designed to inherit the facial parts of parents. A Gene Pool
has also been used to introduce the variations into the mu-
tation of RFGs. The diversity of the faces of descendants
can thus be significantly increased. Qualitative, quantita-
tive, and subjective experiments on FIW, TSKinFace, and
FF-Databases clearly show that the quality and diversity of
kinship faces generated by our approach are much better
than the existing state-of-the-art methods.
| 1. Introduction
Humans can identify kinship through photographs based
on the resemblance between parents and children. Many
works have investigated this intrinsic relation in the fields
of kinship verification [9, 32, 42] and genetics [4, 5, 8, 19].
With the popularity of face synthesis and editing technol-
ogy in recent years, high-fidelity kinship face synthesis has
also attracted much attention. This task, aiming to synthe-
size the faces of descendants based on the appearance of
the parents, has many potential applications, such as find-
ing long-lost children, crime investigations, kinship verifi-
cation, and multimedia social applications.
In recent years, many efforts have been made to make
use of generative models [7, 12, 15, 16, 27, 29, 38, 43, 45, 48]
for kinship face synthesis. These works can be catego-
rized into two paradigms: one-stage and two-stage. The
one-stage paradigm [12, 29, 38, 45] treats this problem as
an image-to-image translation task and trains a one-to-one
kinship face generator with paired data. However, these ap-
proaches can only produce low-resolution images and the
resultant images can be blurry and lack diversity. Further,
it would be quite difficult to obtain annotated kinship data.
By contrast, the two-stage paradigm [7, 15, 16, 27, 43, 48]
first extracts the genetic representation and assembles them
into children’s representation based on the parents’ faces.
To obtain genetic representation, existing methods try to
learn the inheritance and variation of facial appearances by
training deep neural networks [14, 27, 48] or via a knowl-
edge rule [7]. However, the learned genetic representation
is prone to overfitting due to the lack of high-quality kin-
ship annotated training data, resulting in a lack of diversity
in the generated children. In addition, these methods cannot
provide fine-grained attributes representation, and thus the
generated facial attributes lack interpretability.
In this paper, the facial genetic process is abstracted as
the exchange and mutation of the parents’ facial parts. We
propose an Image-based Gene Encoder (IGE) to construct
an independent representation for each facial part, called
a Region-level Facial Gene (RFG), which is used to con-
trol the synthesis of facial regions. We further simulate the
crossover and mutation process to assemble the RFGs of de-
scendants by using those of the parents, and our proposed
Gene Pool used in the mutation process can significantly
increase the diversity of the generated descendants. We use
the pre-trained StyleGAN2 [24] as the generator to synthe-
size high-fidelity faces. To achieve this, we use a Gene De-
coder to map RFGs to the W+space of StyleGAN2. Since
IGE requires a facial parsing mask to generate the RFG, we
additionally train a Latent-based Gene Encoder (LGE) to
directly map the latent code of StyleGAN2 to RFGs. Thus,
facial parsing mask is not required for the RFG extraction
in the inference stage. The main contributions of this paper
are summarized as follows:
• We propose StyleGene to synthesize high-fidelity kin-
ship faces with controllable facial genetic regions, via
modeling the facial genetic relations based on the pro-
posed region-level facial genes.
• A novel genetic strategy is further introduced by sim-
ulating the crossover and mutation process to generate
the RFGs of descendants. We introduce a Gene Pool
into the mutation process to significantly increase the
diversity of the kinship face.
• We validate the effectiveness of our approach on sev-
eral benchmarks, demonstrating the superiority of our
StyleGene framework over other state-of-the-art meth-
ods, in terms of the quality and diversity of the gener-
ated kinship faces.
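As a concrete illustration of the crossover-and-mutation strategy summarized
above, the following Python sketch operates on region-level gene matrices. The
region count, gene dimensionality, and gene-pool sampling rule are illustrative
assumptions rather than the exact StyleGene formulation.

import torch

def crossover_and_mutate(father_rfg, mother_rfg, gene_pool, mutation_rate=0.1):
    # father_rfg, mother_rfg: (R, D) region-level facial genes, one vector per facial region.
    # gene_pool: (P, R, D) genes collected from many identities, used to inject variation.
    R, _ = father_rfg.shape
    # Crossover: each facial region is inherited from either parent at random.
    from_father = torch.rand(R, 1) < 0.5
    child = torch.where(from_father, father_rfg, mother_rfg)
    # Mutation: a few regions are replaced by genes drawn from the gene pool.
    mutate = torch.rand(R) < mutation_rate
    donors = gene_pool[torch.randint(len(gene_pool), (R,)), torch.arange(R)]
    child[mutate] = donors[mutate]
    return child  # passed to the Gene Decoder to obtain a StyleGAN2 W+ latent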
|
Lu_Decomposed_Soft_Prompt_Guided_Fusion_Enhancing_for_Compositional_Zero-Shot_Learning_CVPR_2023 | Abstract
Compositional Zero-Shot Learning (CZSL) aims to rec-
ognize novel concepts formed by known states and objects
during training. Existing methods either learn the combined
state-object representation, challenging the generalization
of unseen compositions, or design two classifiers to identify
state and object separately from image features, ignoring
the intrinsic relationship between them. To jointly eliminate
the above issues and construct a more robust CZSL system,
we propose a novel framework termed Decomposed Fusion
withSoftPrompt (DFSP)1, by involving vision-language
models (VLMs) for unseen composition recognition. Specif-
ically, DFSP constructs a vector combination of learnable
soft prompts with state and object to establish the joint rep-
resentation of them. In addition, a cross-modal decomposed
fusion module is designed between the language and image
branches, which decomposes state and object among lan-
guage features instead of image features. Notably, being
fused with the decomposed features, the image features can
be more expressive for learning the relationship with states
and objects, respectively, to improve the response of un-
seen compositions in the pair space, hence narrowing the
domain gap between seen and unseen sets. Experimental
results on three challenging benchmarks demonstrate that
our approach significantly outperforms other state-of-the-
art methods by large margins.
| 1. Introduction
Given an unseen concept, such as green tiger , even
though this is a nonexistent stuff humans have never seen,
they may associate the known state green with an image of
tiger immediately. Inspired by this, Compositional Zero-
Shot Learning (CZSL) is proposed with the purpose of
*Song Guo and Jingcai Guo are the corresponding authors
1Code is available at: https://github.com/Forest-art/DFSP.git
Figure 1. The overview of DFSP. Our method aims to narrow
the domain gap between seen and unseen compositions by fus-
ing decomposed features f_o and f_s with image feature f_v, while
learning the joint representation between state and object in the language
branch. Being fused with the state and object features, image fea-
ture can learn their responses separately and improve the
sensitivity to unseen compositions.
equipping models with the ability to recognize novel con-
cepts generated as humans do. Specifically, CZSL learns
on visible primitive composed concepts (state and object)
in the training phase, and recognizes unseen compositions
in the inference phase.
Some prior algorithms [20, 26] design two classifiers
to identify state and object separately, while these mod-
els overlook the intrinsic relation between them. After the
primitive concepts are obtained, the association between
state and object could be established again through graph
neural network (GNN) [24] or external knowledge compo-
sitions [14]. Nevertheless, these are post-processing meth-
ods and these classifiers are separated from image features
with strong correlation, ignoring entanglement. Some other
methods [28, 29] are to directly treat the combination as an
entity, converting CZSL into a general zero-shot recognition
problem. Generally, the visual features are projected into a
shared semantic space and the distance between entities is
optimized, such as Euclidean distance [44]. If too much
attention is paid to the composed concepts in the training
stage, the model can not be generalized well to unseen com-
positions, causing the domain gap between seen and unseen
sets. In summary, these methods are all visual recognition
models, which are limited by the strong entanglement of
states and objects in image features.
In contrast, we focus on designing novel approaches
based on vision-language models (VLMs) to cope with
CZSL challenges. Since state and object are two separate
words in the text, they are less entangled in language fea-
tures than image features and could be decomposed more
easily and precisely. Certainly, state and object are also
intrinsically linked in the text, such as ripe apple instead
of old apple. Constructing the combination in the form of
text can also establish the joint representation of state and
object to pair with images. Meanwhile, the decomposed
state and object features can also be independently associ-
ated with the image feature, easing the excessive bias of the
model towards seen compositions and enhancing the unseen
response (shown in Fig. 1). To improve CZSL with VLMs,
we design Decomposed Fusion with SoftPrompt (DFSP),
an efficient framework aimed to both learn about the joint
representation of primitive concepts and shrink the domain
gap between seen and unseen composition sets, as shown in
Fig. 2. To be specific, DFSP is designed as a fully learnable
soft prompt including prefix, state and object, which con-
structs the joint representation between primitive concepts
and can be fine-tuned well for new supervised tasks. We
then design a decomposed fusion module (DFM) for state
and object, which decomposes features extracted from text
encoder, such as Bert [6], etc. Meanwhile, the decomposed
language features and image features of DFSP interact with
information in a cross-modal fusion module, which is cru-
cial for learning high-quality language-aware visual repre-
sentations. During the phase of fusion, the image can es-
tablish separate relationships with the state and object, and
then is paired with the composed prompt feature in the pair
space, improving its response even for unseen compositions
to shrink the domain gap.
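A compact PyTorch sketch of the decompose-then-fuse idea follows: the decomposed
state and object features act as keys and values for a cross-attention over image
patch tokens, and the fused image feature is finally scored against the composed
prompt features. The dimensions, pooling, and module choices are assumptions, not
the exact DFSP architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedFusion(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, img_tokens, state_feat, obj_feat):
        # img_tokens: (B, L, D) patch features; state_feat, obj_feat: (B, D) decomposed text features.
        primitives = torch.stack([state_feat, obj_feat], dim=1)   # (B, 2, D)
        fused, _ = self.attn(img_tokens, primitives, primitives)  # image attends to state/object
        return img_tokens + fused                                 # residual fusion

def pair_logits(img_feat, pair_feats, temperature=0.07):
    # img_feat: (B, D) pooled fused image feature; pair_feats: (K, D) composed prompt features.
    img_feat = F.normalize(img_feat, dim=-1)
    pair_feats = F.normalize(pair_feats, dim=-1)
    return img_feat @ pair_feats.t() / temperature   # scores over all state-object pairs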
Generally, this paper makes the following contributions:
• A novel framework named Decomposed Fusion with
Soft Prompt (DFSP) is proposed, which is based on
vision-language paradigm aiming to cope with CZSL.
• The Decomposed Fusion Module is designed for
CZSL specifically, which decomposes the concepts of
language features and fuses them with image features
to improve the response of unseen compositions.
• We design a learnable soft prompt to construct the
joint-representation of state and object, which can be
more precisely decomposed than images.
• Extensive experiments demonstrate the effectiveness
of DFSP, which greatly outperforms the state-of-the-
art CZSL approaches on both closed-world and open-
world.
|
Li_SECAD-Net_Self-Supervised_CAD_Reconstruction_by_Learning_Sketch-Extrude_Operations_CVPR_2023 | Abstract
Reverse engineering CAD models from raw geometry
is a classic but strenuous research problem. Previous
learning-based methods rely heavily on labels due to the
supervised design patterns or reconstruct CAD shapes that
are not easily editable. In this work, we introduce SECAD-
Net, an end-to-end neural network aimed at reconstructing
compact and easy-to-edit CAD models in a self-supervised
manner. Drawing inspiration from the modeling language
that is most commonly used in modern CAD software, we
propose to learn 2D sketches and 3D extrusion parame-
ters from raw shapes, from which a set of extrusion cylin-
ders can be generated by extruding each sketch from a 2D
plane into a 3D body. By incorporating the Boolean op-
eration ( i.e., union), these cylinders can be combined to
closely approximate the target geometry. We advocate the
use of implicit fields for sketch representation, which al-
lows for creating CAD variations by interpolating latent
codes in the sketch latent space. Extensive experiments on
both ABC and Fusion 360 datasets demonstrate the effec-
tiveness of our method, and show superiority over state-of-
the-art alternatives including the closely related method for
supervised CAD reconstruction. We further apply our ap-
*Corresponding author: jianwei.guo@nlpr.ia.ac.cn
proach to CAD editing and single-view CAD reconstruc-
tion. Code will be released at
https://github.com/BunnySoCrazy/SECAD-Net.
| 1. Introduction
CAD reconstruction is one of the most sought-after ge-
ometric modeling technologies, which plays a substantial
role in reverse engineering in case of the original design
document is missing or the CAD model of a real object is
not available. It empowers users to reproduce CAD mod-
els from other representations and supports the designer to
create new variations to facilitate various engineering and
manufacturing applications.
The advance in 3D scanning technologies has promoted
the paradigm shift from time-consuming and laborious
manual dimensions to automatic CAD reconstruction. A
typical line of works [3,6,35,47] first reconstructs a polygon
mesh from the scanned point cloud, then followed by mesh
segmentation and primitive extraction to obtain a bound-
ary representation (B-rep). Finally, a CAD shape parser is
applied to convert the B-rep into a sequence of modeling
operations. Recently, inspired by the substantial success
of point set learning [1, 32, 49] and deep 3D representa-
tions [9,28,30], a number of methods have exploited neural
networks to improve the above pipeline, e.g., detecting and
fitting primitives to raw point clouds directly [25,27,40]. A
few works ( e.g., CSG-Net [39], UCSG-Net [19], and CSG-
Stump [33]) further parse point cloud inputs into a construc-
tive solid geometry (CSG) tree by predicting a set of prim-
itives that are then combined with Boolean operations. Al-
though achieving encouraging compact representation, they
only output a set of simple primitives with limited types
(e.g., planes, cylinders, spheres), which restricts their rep-
resentation capability for reconstructing complex and more
general 3D shapes. CAPRI-Net [59] introduces quadric sur-
face primitives and the difference operation based on BSP-
Net [8] to produce complicated convex and concave shapes
via a CSG tree. However, controlling the implicit equation
and parameters of quadric primitives is difficult for design-
ers to edit the reconstructed models. Thus, the editability of
those methods is quite limited.
In this paper, we develop a novel and versatile deep neu-
ral framework, named SECAD-Net, to reconstruct high-
quality and editable CAD models. Our approach is inspired
by the observation that a CAD model is usually designed as
a command sequence of operations [7, 38, 50, 51, 57], i.e.,
a set of planar 2D sketches are first drawn then extruded
into 3D solid shapes for Boolean operations to create the
final model. At the heart of our approach is to learn the
sketch and extrude modeling operations, rather than CSG
with parametric primitives. To determine the position and
axis of each sketch plane, SECAD-Net first learns multiple
extrusion boxes to decompose the entire shape into multiple
local regions. Afterward, for the local profile in each box,
we utilize a fully connected network to learn the implicit
representation of the sketch. An extrusion operator is then
designed to calculate the implicit expression of the cylinders
according to the predicted sketch and extrusion parameters.
We finally apply a union operation to assemble all extrusion
cylinders into the final CAD model.
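The extrusion operator admits a simple closed form: the signed distance of an
extruded cylinder combines the 2D sketch distance with the distance to the two
capping planes, and the union of cylinders is a point-wise minimum. A minimal
sketch under these standard SDF identities is shown below; in SECAD-Net the 2D
sketch distance itself would come from a learned implicit function.

import torch

def extrude_sdf(points, sketch_sdf, half_height):
    # points: (N, 3) query points expressed in the local frame of one extrusion box.
    # sketch_sdf: callable mapping (N, 2) in-plane coordinates to a 2D signed distance.
    d = sketch_sdf(points[:, :2])             # distance to the sketch profile
    dz = points[:, 2].abs() - half_height     # distance to the two capping planes
    w = torch.stack([d, dz], dim=-1)
    inside = torch.minimum(w.max(dim=-1).values, torch.zeros_like(d))
    outside = torch.clamp(w, min=0.0).norm(dim=-1)
    return inside + outside                    # SDF of the extruded cylinder

def union_sdf(cylinder_sdfs):
    # cylinder_sdfs: (K, N) signed distances of K cylinders; their union is the minimum.
    return cylinder_sdfs.min(dim=0).values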
Benefiting from our representation, our approach is flex-
ible and efficient to construct a wide range of 3D shapes.
As the predictions of our method are fully interpretable, it
allows users to express their ideas to create variations or im-
prove the design by operating on 2D sketches or 3D cylin-
ders intuitively. To summarize, our work makes the follow-
ing contributions:
• We present a novel deep neural network for reverse en-
gineering CAD models with self-supervision, leading
to faithful reconstructions that closely approximate the
target geometry.
• SECAD-Net is capable of learning implicit sketches
and differentiable extrusions from raw 3D shapes with-
out the guidance of ground truth sketch labels.
• Extensive experiments demonstrate the superiority of
SECAD-Net through comprehensive comparisons. We also
showcase its immediate applications to CAD in-
terpolation, editing, and single-view reconstruction.
|
Luo_Zero-Shot_Model_Diagnosis_CVPR_2023 | Abstract
When it comes to deploying deep vision models, the be-
havior of these systems must be explicable to ensure confi-
dence in their reliability and fairness. A common approach
to evaluate deep learning models is to build a labeled test
set with attributes of interest and assess how well it per-
forms. However, creating a balanced test set (i.e., one that
is uniformly sampled over all the important traits) is of-
ten time-consuming, expensive, and prone to mistakes. The
question we try to address is: can we evaluate the sensi-
tivity of deep learning models to arbitrary visual attributes
without an annotated test set?
This paper argues the case that Zero-shot Model Diag-
nosis (ZOOM) is possible without the need for a test set or
labeling. To avoid the need for test sets, our system relies
on a generative model and CLIP . The key idea is enabling
the user to select a set of prompts (relevant to the prob-
lem) and our system will automatically search for seman-
tic counterfactual images (i.e., synthesized images that flip
the prediction in the case of a binary classifier) using the
generative model. We evaluate several visual tasks (classi-
fication, key-point detection, and segmentation) in multiple
visual domains to demonstrate the viability of our method-
ology. Extensive experiments demonstrate that our method
is capable of producing counterfactual images and offering
sensitivity analysis for model diagnosis without the need for
a test set.
| 1. Introduction
Deep learning models inherit data biases, which can be
accentuated or downplayed depending on the model’s ar-
chitecture and optimization strategy. Deploying a computer
vision deep learning model requires extensive testing and
evaluation, with a particular focus on features with poten-
tially dire social consequences (e.g., non-uniform behav-
ior across gender or ethnicity). Given the importance of
the problem, it is common to collect and label large-scale
datasets to evaluate the behavior of these models across
attributes of interest. Unfortunately, collecting these test
*Equal contribution.
Figure 1. Given a differentiable deep learning model (e.g., a
cat/dog classifier) and user-defined text attributes, how can we de-
termine the model’s sensitivity to specific attributes without us-
ing labeled test data? Our system generates counterfactual images
(bottom right) based on the textual directions provided by the user,
while also computing the sensitivity histogram (top right).
datasets is extremely time-consuming, error-prone, and ex-
pensive. Moreover, a balanced dataset, that is uniformly
distributed across all attributes of interest, is also typically
impractical to acquire due to its combinatorial nature. Even
with careful metric analysis in this test set, no robustness
nor fairness can be guaranteed since there can be a mis-
match between the real and test distributions [25]. This
research will explore model diagnosis without relying on
a test set in an effort to democratize model diagnosis and
lower the associated cost.
Counterfactual explainability as a means of model diag-
nosis is drawing the community’s attention [5,20]. Counter-
factual images visualize the sensitive factors of an input im-
age that can influence the model’s outputs. In other words,
counterfactuals answer the question: “How can we modify
the input image x (while fixing the ground truth) so that the
model prediction would diverge from y to ŷ?”. The param-
eterization of such counterfactuals will provide insights into
identifying key factors of where the model fails. Unlike ex-
isting image-space adversary techniques [4,18], counterfac-
tuals provide semantic perturbations that are interpretable
by humans. However, existing counterfactual studies re-
quire the user to either collect uniform test sets [10], anno-
tate discovered bias [15], or train a model-specific explana-
tion every time the user wants to diagnose a new model [13].
On the other hand, recent advances in Contrastive
Language-Image Pretraining (CLIP) [24] can help to over-
come the above challenges. CLIP enables text-driven ap-
plications that map user text representations to visual man-
ifolds for downstream tasks such as avatar generation [7],
motion generation [37] or neural rendering [22, 30]. In
the domain of image synthesis, StyleCLIP [21] reveals that
text-conditioned optimization in the StyleGAN [12] latent
space can decompose latent directions for image editing,
allowing for the mutation of a specific attribute without dis-
turbing others. With such capability, users can freely edit
semantic attributes conditioned on text inputs. This paper
further explores its use in the scope of model diagnosis.
The central concept of the paper is depicted in Fig. 1.
Consider a user interested in evaluating which factors con-
tribute to the lack of robustness in a cat/dog classifier (target
model). By selecting a list of keyword attributes, the user is
able to (1) see counterfactual images where semantic vari-
ations flip the target model predictions (see the classifier
score in the top-right corner of the counterfactual images)
and (2) quantify the sensitivity of each attribute for the tar-
get model (see sensitivity histogram on the top). Instead of
using a test set, we propose using a StyleGAN generator
as the picture engine for sampling counterfactual images.
CLIP transforms user’s text input, and enables model diag-
nosis in an open-vocabulary setting. This is a major advan-
tage since there is no need for collecting and annotating im-
ages and minimal user expert knowledge. In addition, we
are not tied to a particular annotation from datasets (e.g.,
specific attributes in CelebA [16]).
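A bare-bones sketch of searching for a counterfactual in the generator's latent
space is given below: a latent offset is optimized so that the target model's
prediction flips while the edit stays aligned with a CLIP-derived attribute
direction. The generator and classifier handles, loss weights, and step counts
are placeholders, not the actual ZOOM implementation.

import torch
import torch.nn.functional as F

def counterfactual_search(w, generator, target_model, text_dir, steps=50, lr=0.02, lam=0.5):
    # w: (1, L, 512) StyleGAN W+ latent of the input image.
    # text_dir: (512,) unit-norm CLIP text direction for a user attribute, e.g. "green eye".
    delta = torch.zeros_like(w, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        orig_label = target_model(generator(w)).argmax(dim=-1)
    for _ in range(steps):
        img = generator(w + delta)
        flip_loss = -F.cross_entropy(target_model(img), orig_label)  # push prediction to flip
        align = F.cosine_similarity(delta.mean(dim=1), text_dir.unsqueeze(0), dim=-1).mean()
        loss = flip_loss - lam * align + 0.1 * delta.norm()          # stay on the attribute axis
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(w + delta).detach()                             # counterfactual image

The sensitivity of an attribute could then be summarized by how small an offset
along that attribute suffices to flip the prediction.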
To summarize, our proposed work offers three major im-
provements over earlier efforts:
• The user requires neither a labeled, balanced test
dataset, and minimal expert knowledge in order to
evaluate where a model fails (i.e., model diagnosis). In
addition, the method provides a sensitivity histogram
across the attributes of interest.
• When a different target model or a new user-defined
attribute space is introduced, it is not necessary to re-
train our system, allowing for practical use.
• The target model fine-tuned with counterfactual im-
ages not only slightly improves the classification per-
formance, but also greatly increases the distributional
robustness against counterfactual images.
|
Niu_Visibility_Constrained_Wide-Band_Illumination_Spectrum_Design_for_Seeing-in-the-Dark_CVPR_2023 | Abstract
Seeing-in-the-dark is one of the most important and
challenging computer vision tasks due to its wide appli-
cations and extreme complexities of in-the-wild scenar-
ios. Existing arts can be mainly divided into two threads:
1) RGB-dependent methods restore information using de-
graded RGB inputs only ( e.g., low-light enhancement), 2)
RGB-independent methods translate images captured under
auxiliary near-infrared (NIR) illuminants into RGB domain
(e.g., NIR2RGB translation). The latter is very attractive
since it works in complete darkness and the illuminants are
visually friendly to naked eyes, but tends to be unstable due
to its intrinsic ambiguities. In this paper, we try to robustify
NIR2RGB translation by designing the optimal spectrum
of auxiliary illumination in the wide-band VIS-NIR range,
while keeping visual friendliness. Our core idea is to quan-
tify the visibility constraint implied by the human vision sys-
tem and incorporate it into the design pipeline. By model-
ing the formation process of images in the VIS-NIR range,
the optimal multiplexing of a wide range of LEDs is auto-
matically designed in a fully differentiable manner, within
the feasible region defined by the visibility constraint. We
also collect a substantially expanded VIS-NIR hyperspec-
tral image dataset for experiments by using a customized
50-band filter wheel. Experimental results show that the
task can be significantly improved by using the optimized
wide-band illumination than using NIR only. Codes Avail-
able: https://github.com/MyNiuuu/VCSD .
| 1. Introduction
Seeing-in-the-dark is critical for modern industries, be-
cause of its promising applications in nighttime photogra-
phy and visual surveillance. However, it remains challeng-
ing due to complex degradation mechanisms and dynamics
of in-the-wild environments.
To achieve this task, a number of methods have been pro-
*Corresponding authorposed, which can be roughly divided into two threads. The
first thread features RGB-dependent methods [3, 4, 29, 42,
44,45,52] that aim to fully exploit the RGB input, even with
severe degradations. These methods have gained great suc-
cess through directly learning the mapping from low-light
input to normal-light output, in the presence of complex
noises and color discrepancies. However, even state-of-the-
art methods along this thread may struggle with in-the-wild
data captured under nearly complete darkness.
In contrast, the second thread features RGB-independent
methods [24, 28, 33, 38, 40] for non-interfering surveillance
that try to recover RGB information from images of invis-
ible ranges, without requiring any RGB input. The most
attractive characteristics lie in its applicability to complete
darkness and the visual friendliness of auxiliary illumina-
tion to naked eyes. NIR2RGB is one of the representative
tasks of this thread, which aims to translate near-infrared
images to RGB images.
As for auxiliary illumination in the NIR range, the in-
dustry practice is to use NIR LEDs, usually centered at
850 nm or940 nm . However, the captured images are
almost monochromatic and lack visual color and texture,
which makes NIR2RGB translation ambiguous. The funda-
mental reasons for the ambiguities are two folds: 1) The
spectral sensitivities of commodity RGB cameras almost
overlap around both 850 nm and940 nm , making it hard
to recover three-channel color from a single intensity ob-
servation. 2) Reflectance spectra of many materials become
almost indistinguishable beyond 850 nm , which leads to ob-
vious structure gaps from RGB images. As a result, exist-
ing studies that tried to directly convert such NIR images
to VIS images, even with the most advanced deep learn-
ing techniques, can hardly provide satisfying results due
to these fundamental restrictions. In [24], Liu et al. pro-
posed to properly multiplex different NIR LEDs, ranging
from 700 nm to1000 nm , to robustify the NIR2RGB task,
and achieves apparently better results than using traditional
850 nm or940 nm LEDs. However, structure gaps still ex-
ist due to the restriction of wavelengths in the NIR range,
making the results far from satisfying.
The basic motivation of these methods arises from the in-
visibility of human naked eyes to NIR lights, so as to reduce
visual interference and light pollution. However, up to now,
none of these works have explicitly formulated the visibility
of certain illumination. Liu et al. [24] empirically picked up
the NIR range beyond 700 nm, and there is a clear tendency
that LEDs closer to this prescribed boundary are preferred
according to their results. A natural question is: Is there
an exact boundary between visible and invisible? This is
important since it determines how much information in the
VIS range can be utilized to help RGB recovery.
Inspired by the aforementioned methods, we propose to
quantify and incorporate the human vision system into our
model, which enables us to significantly robustify this task
via illumination spectrum design in the wide-band spectral
range from 420 nm to 890 nm. Similar to [24], we directly
optimize the spectral curve by training an image enhance-
ment model on hyperspectral datasets. Specifically, based
on the human vision system, we establish a Visibility Con-
strained Spectrum Design (VCSD) model to quantify the
visibility of certain spectra, and to assure the prescribed vis-
ibility level will not be violated. To achieve this, a visibility
threshold ˆΨ is introduced, which serves as the visibility up-
per bound during the spectrum design process. In practice,
this threshold can be changed according to the desired level
of visibility, without destroying the validity of our method.
According to the upper bounded visibility level, the model
scales down the designed LED spectrum (if necessary) to
assure that the new spectrum is friendly to naked eyes. Af-
ter that, we design a physic-based Imaging Process Simu-
lation (IPS) model which synthesizes images using the cor-
responding LED spectrum, camera spectral sensitivity, and
the reflectance spectrum of the scene. The IPS model also
contains a noise model to consider the noise effect during
the realistic imaging process. Since we consider the spec-
trum from 420 nm to 890 nm, we synthesize one VIS image
with lights shorter than 700 nm and one VIS-NIR image
with the full spectrum. Through deep learning, we directly
minimize the reconstruction loss and finally get the optimal
LED spectral curve that can be physically realized by driv-
ing LEDs with appropriate voltage and current.
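At its core, the simulation is a discretized integral over wavelength with a
visibility budget on the designed spectrum. The sketch below only approximates
that idea; the band discretization, photopic weighting, and noise level are
illustrative assumptions rather than the exact VCSD/IPS formulation.

import torch

def render_rgb(reflectance, led_basis, led_weights, cam_sens, vis_curve, vis_thr, noise_std=0.01):
    # reflectance: (H, W, S) scene reflectance over S bands covering 420-890 nm.
    # led_basis:   (K, S) emission spectra of the K candidate LEDs.
    # led_weights: (K,)   learnable non-negative driving intensities.
    # cam_sens:    (3, S) RGB camera spectral sensitivities.
    # vis_curve:   (S,)   photopic luminous-efficiency curve modelling naked-eye visibility.
    illum = led_weights.clamp(min=0) @ led_basis               # designed illumination spectrum
    visibility = (illum * vis_curve).sum()                     # perceived brightness proxy
    illum = illum * torch.clamp(vis_thr / (visibility + 1e-8), max=1.0)  # enforce the budget
    rgb = torch.einsum('hws,cs->hwc', reflectance * illum, cam_sens)     # per-channel response
    return rgb + noise_std * torch.randn_like(rgb)             # simple sensor-noise model

Because every step is differentiable, the reconstruction loss of the enhancement
network can be back-propagated into led_weights to obtain an optimized spectral
curve.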
We evaluate the effectiveness of our model and designed
curve on hyperspectral datasets including our proposed and
previous [32] datasets. Compared to existing methods, our
model clearly achieves superior results, demonstrating the
powerfulness of wide-band illumination spectrum design
under visibility constraints.
The main highlights of this work are:
• For the first time, we propose a paradigm that quan-
tifies and incorporates the human vision system for
seeing-in-the-dark, which enables us to significantly
improve the task via illumination spectrum design in
a wide-band coverage from 420 nm to 890 nm.
• A novel Visibility Constrained Spectrum Design
(VCSD) model is proposed to formulate and assure the
visibility level of certain spectra to human naked eyes
during the optimization process. The visibility thresh-
old can be changed according to the desired level of
visibility, without destroying the validity of the model.
• We design a physic-based Imaging Process Simula-
tion (IPS) module which synthesizes the input images
based on the imaging process and the noise model.
• We contribute a VIS-NIR wide-band hyperspectral im-
age dataset to supplement existing ones in terms of
quality and quantity.
|
Oh_Recovering_3D_Hand_Mesh_Sequence_From_a_Single_Blurry_Image_CVPR_2023 | Abstract
Hands, one of the most dynamic parts of our body, suffer
from blur due to their active movements. However, previous
3D hand mesh recovery methods have mainly focused on
sharp hand images rather than considering blur due to the
absence of datasets providing blurry hand images. We first
present a novel dataset BlurHand, which contains blurry
hand images with 3D groundtruths. The BlurHand is con-
structed by synthesizing motion blur from sequential sharp
hand images, imitating realistic and natural motion blurs.
In addition to the new dataset, we propose BlurHandNet, a
baseline network for accurate 3D hand mesh recovery from
a blurry hand image. Our BlurHandNet unfolds a blurry
input image to a 3D hand mesh sequence to utilize tem-
poral information in the blurry input image, while previ-
ous works output a static single hand mesh. We demon-
strate the usefulness of BlurHand for the 3D hand mesh
recovery from blurry images in our experiments. The pro-
posed BlurHandNet produces much more robust results on
blurry images while generalizing well to in-the-wild images.
The training codes and BlurHand dataset are available at
https://github.com/JaehaKim97/BlurHand_RELEASE.
| 1. Introduction
Since hand images frequently contain blur when hands
are moving, developing a blur-robust 3D hand mesh esti-
mation framework is necessary. As blur makes the bound-
ary unclear and hard to recognize, it significantly degrades
the performance of 3D hand mesh estimation and makes
the task challenging. Despite promising results of 3D hand
mesh estimation from a single sharp image [5,13,16,17,22],
research on blurry hands is barely conducted.
A primary reason for such lack of consideration is the ab-
sence of datasets that consist of blurry hand images with ac-
curate 3D groundtruth (GT). Capturing blurry hand datasets
*Authors contributed equally.
(a) Examples of the presented BlurHand dataset.
(b) Illustration of the temporal unfolding.
Figure 1. Proposed BlurHand dataset and BlurHandNet. (a)
We present a novel BlurHand dataset, providing natural blurry
hand images with accurate 3D annotations. (b) While most pre-
vious methods produce a single 3D hand mesh from a sharp im-
age, our BlurHandNet unfolds the blurry hand image into three
sequential hand meshes.
is greatly challenging. The standard way of capturing mark-
erless 3D hand datasets [8,23,50] consists of two stages: 1)
obtaining multi-view 2D grounds ( e.g., 2D joint coordinates
and mask) manually [50] or using estimators [14, 15, 44]
and 2) triangulating the multi-view 2D grounds to the 3D
space. Here, manual annotations or estimators in the first
stage are performed from images. Hence, they become un-
reliable when the input image is blurry, which results in tri-
angulation failure in the second stage.
Contemplating these limitations, we present the Blur-
Hand, whose examples are shown in Figure 1a. Our Blur-
Hand, the first blurry hand dataset, is synthesized from In-
terHand2.6M [23], which is a widely adopted video-based
hand dataset with accurate 3D annotations. Following state-
of-the-art blur synthesis literature [25, 26, 39], we approxi-
mate the blurry images by averaging the sequence of sharp
hand frames. As such technique requires high frame rates
of videos, we employ a widely used video interpolation
method [27] to complement the low frame rate (30 frames
per second) of InterHand2.6M. We note that our synthetic
blur dataset contains realistic and challenging blurry hands.
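Blur synthesis by temporal averaging is straightforward once the frame rate has
been raised by interpolation. The sketch below averages interpolated sharp frames
in linear intensity space; the gamma value and frame handling are simplifying
assumptions.

import numpy as np

def synthesize_blur(sharp_frames, gamma=2.2):
    # sharp_frames: list of temporally interpolated sharp frames (uint8, HxWx3)
    # spanning one virtual exposure window.
    linear = [(f.astype(np.float32) / 255.0) ** gamma for f in sharp_frames]
    blurry = np.mean(linear, axis=0) ** (1.0 / gamma)   # average in linear space
    return np.clip(blurry * 255.0, 0, 255).round().astype(np.uint8)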
For a given blurry hand image, the most straightforward
baseline is sequentially applying state-of-the-art deblurring
methods [3, 29, 30, 46] on blurry images and 3D hand mesh
estimation networks [21, 22, 38] on the deblurred image.
However, such a simple baseline suffers from two limita-
tions. First, since hands contain challenging blur caused
by complex articulations, even state-of-the-art deblurring
methods could not completely deblur the image. Therefore,
the performance of the following 3D hand mesh estima-
tion networks severely drops due to remaining blur artifacts.
Second, since conventional deblurring approaches only re-
store the sharp images corresponding to the middle of the
motion, it limits the chance to make use of temporal infor-
mation, which might be useful for 3D mesh estimation. In
other words, the deblurring process restricts networks from
exploiting the motion information in blurry hand images.
To overcome the limitations, we propose BlurHandNet,
which recovers a 3D hand mesh sequence from a single
blurry image, as shown in Figure 1b. Our BlurHandNet
effectively incorporates useful temporal information from
the blurry hand. The main components of BlurHandNet
are Unfolder and a kinematic temporal Transformer (KT-
Former). Unfolder outputs hand features of three timesteps,
i.e., middle and both ends of the motion [12, 28, 32, 36].
The Unfolder brings benefits to our method in two aspects.
First, Unfolder enables the proposed BlurHandNet to out-
put not only 3D mesh in the middle of the motion but also
3D meshes at both ends of the motion, providing more in-
formative results related to motion. We note that this prop-
erty is especially beneficial for the hands, where the motion
has high practical value in various hand-related works. For
example, understanding hand motion is essential in the do-
main of sign language [2,34] and hand gestures [40], where
the movement itself represents meaning. Second, extract-
ing features from multiple time steps enables the following
modules to employ temporal information effectively. Since
hand features in each time step are highly correlated, ex-
ploiting temporal information benefits reconstructing more
accurate 3D hand mesh estimation.
To effectively incorporate temporal hand features from
the Unfolder, we propose KTFormer as the following mod-
ule. The KTFormer takes temporal hand features as input
and leverages self-attention to enhance the temporal hand
features. The KTFormer enables the proposed BlurHand-
Net to implicitly consider both the kinematic structure and
temporal relationship between the hands in three timesteps.
The KTFormer brings significant performance gain when
coupled with Unfolder, demonstrating that employing tem-
poral information plays a key role in accurate 3D hand mesh
estimation from blurry hand images.
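The joint kinematic-temporal attention can be pictured as a standard Transformer
encoder over one token per (timestep, joint) pair, as in the simplified sketch
below; the token layout, dimensions, and layer counts are assumptions, not the
released KTFormer.

import torch
import torch.nn as nn

class KTFormerSketch(nn.Module):
    def __init__(self, dim=256, heads=4, joints=21, steps=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.pos = nn.Parameter(torch.zeros(1, steps * joints, dim))

    def forward(self, feats):
        # feats: (B, steps, joints, dim) hand features for the start, middle, and end of the motion.
        B, T, J, D = feats.shape
        tokens = feats.reshape(B, T * J, D) + self.pos   # one token per (timestep, joint)
        tokens = self.encoder(tokens)                    # kinematic and temporal self-attention
        return tokens.reshape(B, T, J, D)                # refined features for the three meshes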
With a combination of BlurHand and BlurHandNet, we
first tackle 3D hand mesh recovery from blurry hand im-
ages. We show that BlurHandNet produces robust results
from blurry hands and further demonstrate that BlurHand-
Net generalizes well on in-the-wild blurry hand images by
taking advantage of effective temporal modules and Blur-
Hand. As this problem is barely studied, we hope our work
could provide useful insights into the following works. We
summarize our contributions as follows:
• We present a novel blurry hand dataset, BlurHand,
which contains natural blurry hand images with accu-
rate 3D GTs.
• We propose the BlurHandNet for accurate 3D hand
mesh estimation from blurry hand images with novel
temporal modules, Unfolder and KTFormer.
• We experimentally demonstrate that the proposed
BlurHandNet achieves superior 3D hand mesh estima-
tion performance on blurry hands.
|
Qiu_Looking_Through_the_Glass_Neural_Surface_Reconstruction_Against_High_Specular_CVPR_2023 | Abstract
Neural implicit methods have achieved high-quality 3D
object surfaces under slight specular highlights. However,
high specular reflections (HSR) often appear in front of tar-
get objects when we capture them through glasses. The
complex ambiguity in these scenes violates the multi-view
consistency, then makes it challenging for recent methods
to reconstruct target objects correctly. To remedy this is-
sue, we present a novel surface reconstruction framework,
NeuS-HSR, based on implicit neural rendering. In NeuS-
HSR, the object surface is parameterized as an implicit
signed distance function (SDF). To reduce the interference
of HSR, we propose decomposing the rendered image into
two appearances: the target object and the auxiliary plane.
We design a novel auxiliary plane module by combining
physical assumptions and neural networks to generate the
auxiliary plane appearance. Extensive experiments on syn-
thetic and real-world datasets demonstrate that NeuS-HSR
outperforms state-of-the-art approaches for accurate and
robust target surface reconstruction against HSR. Code is
available at
https://github.com/JiaxiongQ/NeuS-HSR.
| 1. Introduction
Reconstructing 3D object surfaces from multi-view im-
ages is a challenging task in computer vision and graph-
ics. Recently, NeuS [45] combines the surface render-
ing [3, 12, 35, 52] and volume rendering [8, 29], for recon-
structing objects with thin structures and achieves good per-
formance on the input with slight specular reflections. How-
ever, when processing the scenes under high specular reflec-
tions (HSR), NeuS fails to recover the target object surfaces,
as shown in the second row of Fig. 1. High specular reflec-
tions are ubiquitous when we use a camera to capture the
target object through glasses. As shown in the first row of
Fig. 1, in the captured views with HSR, we can recognize
the virtual image in front of the target object. The virtual
*Bo Ren is the corresponding author.
Figure 1. 3D object surface reconstruction under high specular
reflections (HSR). Top: A real-world scene captured by a mobile
phone. Middle: The state-of-the-art method NeuS [45] fails to re-
construct the target object ( i.e., the Buddha). Bottom: We propose
NeuS-HSR, which recovers a more accurate target object surface
than NeuS.
image introduces the photometric variation on the object
surface visually, which degrades the multi-view consistency
and encodes extreme ambiguities for rendering, then con-
fuses NeuS to reconstruct the reflected objects instead of
the target object.
To adapt to the HSR scenes, one intuitive solution is
firstly applying reflection removal methods to reduce HSR,
then reconstructing the target object with the enhanced tar-
get object appearance as the supervision. However, most
recent single-image reflection removal works [4, 9, 23, 24,
26, 40] need the ground-truth background or reflection as
supervision, which is hard to acquire. Furthermore, for
these reflection removal methods, testing scenes should be
present in the training sets, which limits their generaliza-
tion. These facts demonstrate that explicitly using the re-
flection removal methods to enhance the target object ap-
pearance is impractical. A recent unsupervised reflection
removal approach, NeRFReN [18] decomposes the ren-
dered image into reflected and transmitted parts by implicit
representations. However, it is limited by constrained view
directions and simple planar reflectors. When we apply it to
scenes for multi-view reconstruction, as Fig. 3 presents, it
takes the target object as the content in the reflected image
Figure 2. NeuS-HSR. High specular reflections (HSR) make NeuS
tend to reconstruct the reflected object in HSR. NeuS-HSR phys-
ically decomposes the rendered image into the target object and
auxiliary plane parts, which encourages NeuS to focus on the tar-
get object.
and fails to generate the correct transmitted image for target
object recovery.
The two-stage intuitive solution struggles in our task
as discussed above. To tackle this issue, we consider a
more effective decomposition strategy than NeRFReN, to
enhance the target object appearance for accurate surface
reconstruction in one stage. To achieve our goal, we con-
struct the following assumptions:
Assumption 1 A scene that suffers from HSR can be de-
composed into the target object and planar reflector com-
ponents. Except for the target object, HSR and most other
contents in a view are reflected and transmitted through the
planar reflectors ( i.e., glasses).
Assumption 2 Planar reflectors intersect with the camera
view direction since all view direction vectors generally
point to the target object and pass through planar reflec-
tors.
Based on the above physical assumptions, we propose
NeuS-HSR, a novel object reconstruction framework to re-
cover the target object surface against HSR from a set of
RGB images. For Assumption 1, as Fig. 2 shows, we de-
sign an auxiliary plane to represent the planar reflector since
we aim to enhance the target object appearance through it.
With the aid of the auxiliary plane, we faithfully separate
the target object and auxiliary plane parts from the super-
vision. For the target object part, we follow NeuS [45]
to generate the target object appearance. For the auxiliary
plane part, we design an auxiliary plane module with the
view direction as the input for Assumption 2, by utilizing
neural networks to generate attributes (including the nor-
mal and position) of the view-dependent auxiliary plane.
When the auxiliary plane is determined, we acquire the aux-
iliary plane appearance based on the reflection transforma-
tion [16] and neural networks. Finally, we add two appear-
ances and then obtain the rendered image, which is super-
vised by the captured image for one-stage training.
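As a rough illustration of how a view-dependent auxiliary plane can be predicted and used, the sketch below maps the view direction to a plane normal and offset with a small MLP and mirrors ray directions with the standard reflection transform d' = d - 2(d.n)n. This is a hedged sketch, not the authors' implementation: the MLP size, the output parameterization, and the omitted appearance head are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryPlane(nn.Module):
    """Predicts plane attributes from the view direction and reflects rays across it."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 4))   # 3 for normal, 1 for offset

    def forward(self, view_dir):                  # view_dir: (N, 3), unit vectors
        out = self.mlp(view_dir)
        normal = F.normalize(out[:, :3], dim=-1)  # plane normal per view direction
        offset = out[:, 3:]                       # plane position along the normal
        d_dot_n = (view_dir * normal).sum(-1, keepdim=True)
        reflected = view_dir - 2.0 * d_dot_n * normal   # mirrored ray directions
        return normal, offset, reflected

normal, offset, reflected = AuxiliaryPlane()(F.normalize(torch.randn(8, 3), dim=-1))
```

The final rendered color would then be the sum of the target-object appearance (rendered as in NeuS) and a plane appearance derived from the reflected rays.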
We conduct a series of experiments to evaluate NeuS-
HSR. The experiments demonstrate that NeuS-HSR is su-
perior to other state-of-the-art methods on the synthetic
dataset and recovers high-quality target objects from HSR-
effect images in real-world scenes.
Figure 3. Decomposition of NeRFReN [18]. NeRFReN fails to
separate specular reflections and the target object appearance in
this view, then makes NeuS fail to recover the target object surface.
To summarize, our main contributions are as follows:
• We propose to recover the target object surface, which
suffers from HSR, by separating the target object and aux-
iliary plane parts of the scene.
• We design an auxiliary plane module to generate the ap-
pearance of the auxiliary plane part physically to enhance
the appearance of the target object part.
• Extensive experiments on synthetic and real-world scenes
demonstrate that our method reconstructs more accurate
target objects than other state-of-the-art methods quanti-
tatively and qualitatively.
|
Memmel_Modality-Invariant_Visual_Odometry_for_Embodied_Vision_CVPR_2023 | Abstract
Effectively localizing an agent in a realistic, noisy setting
is crucial for many embodied vision tasks. Visual Odome-
try (VO) is a practical substitute for unreliable GPS and
compass sensors, especially in indoor environments. While
SLAM-based methods show a solid performance without
large data requirements, they are less flexible and robust
w.r.t. noise and changes in the sensor suite compared
to learning-based approaches. Recent deep VO models,
however, limit themselves to a fixed set of input modalities,
e.g., RGB and depth, while training on millions of sam-
ples. When sensors fail, sensor suites change, or modali-
ties are intentionally looped out due to available resources,
e.g., power consumption, the models fail catastrophically.
Furthermore, training these models from scratch is even
more expensive without simulator access or suitable exist-
ing models that can be fine-tuned. While such scenarios
get mostly ignored in simulation, they commonly hinder a
model’s reusability in real-world applications. We propose
a Transformer-based modality-invariant VO approach that
can deal with diverse or changing sensor suites of naviga-
tion agents. Our model outperforms previous methods while
training on only a fraction of the data. We hope this method
opens the door to a broader range of real-world applica-
tions that can benefit from flexible and learned VO models.
| 1. Introduction
Artificial intelligence has found its way into many com-
mercial products that provide helpful digital services. To in-
crease its impact beyond the digital world, personal robotics
and embodied AI aims to put intelligent programs into bod-
ies that can move in the real world or interact with it [15].
One of the most fundamental skills embodied agents must
learn is to effectively traverse the environment around them,
allowing them to move past stationary manipulation tasks
and provide services in multiple locations instead [40]. The
ability of an agent to locate itself in an environment is vi-
tal to navigating it successfully [12, 64]. A common setup
is to equip an agent with an RGB-D (RGB andDepth )
*Work done on exchange at EPFL
Figure 1. An agent is tasked to navigate to a goal location us-
ingRGB-D sensors. Because GPS+Compass are not available,
the location is inferred from visual observations only. Neverthe-
less, sensors can malfunction, or availability can change during
test-time (indicated by ∼), resulting in catastrophic failure of the
localization. We train our model to react to such scenarios by ran-
domly dropping input modalities. Furthermore, our method can
be extended to learn from multiple arbitrary input modalities, e.g.,
surface normals, point clouds, or internal measurements.
camera and a GPS+Compass sensor and teach it to nav-
igate to goals in unseen environments [2]. With extended
data access through simulators [28, 39, 40, 47, 57], photo-
realistic scans of 3D environments [7, 28, 46, 56, 58], and
large-scale parallel training, recent approaches reach al-
most perfect navigation results in indoor environments [55].
However, these agents fail catastrophically in more real-
istic settings with noisy, partially unavailable, or failing
RGB-D sensor readings, noisy actuation, or no access to
GPS+Compass [6, 64].
Visual Odometry (VO) is one way to close this per-
formance gap and localize the agent from only RGB-D
observations [2], and deploying such a model has been
shown to be especially beneficial when observations are
noisy [12, 64]. However, those methods are not robust to
any sensory changes at the test-time, such as a sensor fail-
ing, underperforming, or being intentionally looped out.
In practical applications [43], low-cost hardware can also
experience serious bandwidth limitations, causing RGB (3
channels) and Depth (1 channel) to be transferred at dif-
ferent rates. Furthermore, mobile edge devices must bal-
ance battery usage by switching between passive ( e.g.,RGB)
and active ( e.g.,LIDAR ) sensors depending on the specific
episode. Attempting to solve this asymmetry by keeping
separate models in memory, relying on active sensors, or
using only the highest rate modality is simply infeasible for
high-speed and real-world systems. Finally, a changing sen-
sor suite represents an extreme case of sensor failure where
access to a modality is lost during test-time. These points
demonstrate the usefulness of a certain level of modality in-
variance in a VO framework. Those scenarios decrease the
robustness of SLAM-based approaches [32] and limit the
transferability of models trained on RGB-D to systems with
only a subset or different sensors.
We introduce “optional” modalities as an umbrella term
to describe settings where input modalities may be of lim-
ited availability at test-time. Figure 1 visualizes a typical
indoor navigation pipeline, but introduces uncertainty about
modality availability ( i.e. at test-time, only a subset of all
modalities might be available). While previous approaches
completely neglect such scenarios, we argue that explicitly
accounting for “optional” modalities already during train-
ing of VO models allows for better reusability on platforms
with different sensor suites and trading-off costly or unre-
liable sensors during test-time. Recent methods [12, 64]
use Convolutional Neural Network (ConvNet) architectures
that assume a constant channel size of the input, which
makes it hard to deal with multiple ”optional” modalities.
In contrast, Transformers [51] are much more amenable to
variable-sized inputs, facilitating the training of models that
can optionally accept one or multiple modalities [4].
Transformers are known to require large amounts of data
for training from scratch. Our model’s data requirements
are significantly reduced by incorporating various biases:
We utilize multi-modal pre-training [4, 17, 30], which not
only provides better initializations but also improves perfor-
mance when only a subset of modalities are accessible dur-
ing test-time [4]. Additionally, we propose a token-based
action prior. The action taken by the agent has been shown to
be beneficial for learning VO [35,64] and primes the model
towards the task-relevant image regions.
We introduce the Visual Odometry Transformer (VOT),
a novel modality-agnostic framework for VO based on the
Transformer architecture. Multi-modal pre-training and an
action prior drastically reduce the data required to train the
architecture. Furthermore, we propose explicit modality-
invariance training. By dropping modalities during train-
ing, a single VOT matches the performance of separate uni-
modal approaches. This allows for traversing different sen-
sors during test-time and maintaining performance in the
absence of some training modalities.
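A minimal sketch of this modality-invariance training idea follows; the dictionary layout, token shapes, and keep probability are illustrative assumptions rather than the paper's exact recipe.

```python
import random
import torch

def drop_modalities(tokens_by_modality, keep_prob=0.5):
    """Randomly drop whole token groups so the model learns to work with any subset.

    tokens_by_modality: dict such as {'rgb': (B, N_rgb, D), 'depth': (B, N_d, D)}.
    """
    names = list(tokens_by_modality)
    kept = [m for m in names if random.random() < keep_prob]
    if not kept:                                  # always keep at least one modality
        kept = [random.choice(names)]
    tokens = torch.cat([tokens_by_modality[m] for m in kept], dim=1)
    return tokens, kept

tokens = {'rgb': torch.randn(4, 196, 768), 'depth': torch.randn(4, 196, 768)}
seq, kept = drop_modalities(tokens)               # the Transformer then runs on `seq`
```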
We evaluate our method on point-goal navigation in the
Habitat Challenge 2021 [1] and show that VOT outper-
forms previous methods [35] with training on only 5%
of the data. Beyond this simple demonstration, we stress
that our framework is modality-agnostic and not limited to
RGB-D input or discrete action spaces and can be adapted to various modalities, e.g., point clouds, surface normals,
gyroscopes, accelerators, compass, etc. To the best of our
knowledge, VOT is the first widely applicable modality-
invariant Transformer-based VO approach and opens up ex-
citing new applications of deep VO in both simulated and
real-world applications. We make our code available at
github.com/memmelma/VO-Transformer.
|
Luo_GeoLayoutLM_Geometric_Pre-Training_for_Visual_Information_Extraction_CVPR_2023 | Abstract
Visual information extraction (VIE) plays an important
role in Document Intelligence. Generally, it is divided
into two tasks: semantic entity recognition (SER) and rela-
tion extraction (RE). Recently, pre-trained models for doc-
uments have achieved substantial progress in VIE, partic-
ularly in SER. However, most of the existing models learn
the geometric representation in an implicit way, which has
been found insufficient for the RE task since geometric in-
formation is especially crucial for RE. Moreover, we reveal
another factor that limits the performance of RE lies in the
objective gap between the pre-training phase and the fine-
tuning phase for RE. To tackle these issues, we propose
in this paper a multi-modal framework, named GeoLay-
outLM, for VIE. GeoLayoutLM explicitly models the geo-
metric relations in pre-training, which we call geometric
pre-training. Geometric pre-training is achieved by three
specially designed geometry-related pre-training tasks. Ad-
ditionally, novel relation heads, which are pre-trained by
the geometric pre-training tasks and fine-tuned for RE, are
elaborately designed to enrich and enhance the feature rep-
resentation. According to extensive experiments on stan-
dard VIE benchmarks, GeoLayoutLM achieves highly com-
petitive scores in the SER task and significantly outperforms
the previous state-of-the-arts for RE ( e.g., the F1 score of
RE on FUNSD is boosted from 80.35% to 89.45%)1.
| 1. Introduction
Visual information extraction (VIE) is a critical part in
Document AI [3, 29, 47]. It has attracted more and more at-
tention from both the academic and industrial community.
VIE involves semantic entity recognition (SER, a.k.a. en-
tity labeling) and relation extraction (RE, a.k.a. entity link-
ing) from visually-rich documents (VrDs) such as forms
and receipts [3, 17, 22, 35, 39, 41, 45, 46]. Recent years have
witnessed the great power of pre-trained multi-modal mod-
els [1, 7, 8, 12, 15, 20–22, 30, 38, 40, 41, 43] in VIE tasks,
*Both authors contributed equally to this work.
1https://github.com/AlibabaResearch/AdvancedLiterateMachinery
[Figure 1 legend: true positive, false positive, false negative links; panels (a) and (b).]
Figure 1. Incorrect relation predictions by the previous state-of-
the-art model LayoutLMv3 [15]. (a) LayoutLMv3 tends to link
two entities relying more on their semantics than the geometric
layout, i.e., the entity “212-450-4785” is linked to “Fax Number”
regardless of their relationship in layout. (b) LayoutLMv3 suc-
cessfully predicts the link in the upper half part but misses the link
below, although both links are similar in geometric layout. These
two examples clearly show the importance of geometric infor-
mation in relation extraction (RE) .
Method                  Precision  Recall  F1
LayoutLMv3              75.82      85.45   80.35
+ geometric constraint  79.87      85.45   82.57
Table 1. The RE performance improvement by introducing a sim-
ple geometric restriction (on the FUNSD dataset).
especially the SER task. Compared with SER, the RE task,
which aims at predicting the relation between semantic en-
tities in documents, has not been fully explored and remains
a challenging problem [12, 22]. RE is essential to provide
additional structural information closer to human compre-
hension of the VrDs [45]. It makes the open-layout infor-
mation extraction possible, e.g., for open-layout key-value
linking and form-like items grouping.
It is widely accepted that document layout understand-
ing is crucial for VIE [1, 7, 8, 15, 21, 22, 30, 38, 40, 41, 43],
especially for RE [12, 22]. The geometric relationships, a
specific form for describing document layout, are impor-
tant for document layout representations [22, 27, 31]. Most
previous pre-trained models for VrDs learn layout represen-
tations implicitly by adding coordinates to the model in-
puts, incorporating relative position encodings, or supervis-
ing with alignment-related pre-training tasks such as text-image
alignment [15, 30, 43] and masked vision language model-
ing [1, 7, 12, 15, 21, 22, 40, 41, 43]. However, it is not guar-
anteed that the geometric layout information is well learned
in these models. Taking the state-of-the-art model Lay-
outLMv3 as an example, we find it would make mistakes
in certain relatively simple scenarios, where the geometric
relations between entities are not complicated. As shown in
Fig. 1, LayoutLMv3 seems to link two entities depending
more on the semantics than the geometric layout. This in-
dicates that its layout understanding is not sufficiently dis-
criminative. To further verify our conjecture, we conduct
an experiment by filtering the false positive relations using
a simple geometric restriction (the linkings between entities
should not point up beyond a certain distance), the precision
would increase by a large margin (more than 4 points) while
the recall is held unchanged, as detailed in Tab. 1.
This experiment proves that LayoutLMv3 does not fully
exploit the useful geometric relationship information. Be-
sides, most existing methods did not directly take the rela-
tion modeling into consideration in pre-training. They usu-
ally adopt token/segment-level classification or regression,
which might underperform on downstream tasks related to
relation modeling. Therefore, it is necessary to learn a bet-
ter layout representation for document pre-trained models
by modeling the geometric relationships between entities
explicitly during pre-training.
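As a concrete, deliberately simplified illustration of the kind of geometric restriction used in the Tab. 1 experiment, the sketch below keeps a predicted link only if it does not point upward beyond a threshold. The box-center heuristic and the threshold value are assumptions, not the exact rule used above.

```python
def keep_link(key_box, value_box, max_up_distance=50):
    """Boxes are (x0, y0, x1, y1) in image coordinates, with y increasing downward."""
    key_cy = (key_box[1] + key_box[3]) / 2
    value_cy = (value_box[1] + value_box[3]) / 2
    points_up_by = key_cy - value_cy     # > 0 means the linked value lies above the key
    return points_up_by <= max_up_distance

links = [((10, 100, 60, 120), (70, 100, 200, 120)),   # value to the right: kept
         ((10, 400, 60, 420), (10, 40, 200, 60))]     # value far above: filtered out
filtered = [pair for pair in links if keep_link(*pair)]
```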
During RE fine-tuning, previous works usually learn a
task head like a single linear or bilinear layer [12, 22] from
scratch. On the one hand, since the higher-level pair re-
lationship features, which are beyond the token or text-
segment features in documents, are complex, we argue that
a single linear or bilinear layer is not always adequate to
make full use of the encoded features for RE. On the other
hand, the RE task head initialized randomly is prone to
overfitting with limited fine-tuning data. Since the pre-
trained backbone has shown tremendous potential [4, 5],
why not pre-train the task head in some way simultane-
ously? Several works [10, 14, 26] have proved that smaller
gapbetween pre-training and fine-tuning leads to better per-
formance for downstream tasks. Hence, there is still consid-
erable room for the design and usage of the RE task head.
Based on the above observations, we establish a multi-
modal pre-trained framework (termed as GeoLayoutLM )
for VIE, in which a geometric pre-training strategy is de-
signed to explicitly utilize the geometric relationships be-
tween text-segments, and elaborately-designed RE heads
are introduced to mitigate the gap between pre-training
and fine-tuning on the downstream relation extraction task.
Specifically, three geometric relations are defined: the re-lation between two text-segments ( GeoPair ), that among
multiple text-segment pairs ( GeoMPair ), and that among
three text-segments ( GeoTriplet ). Correspondingly, three
self-supervised pre-training tasks are proposed. GeoPair re-
lation is modeled by the Direction and Distance Modeling
(DDM ) task in which GeoLayoutLM needs to tell the di-
rection of a directed pair and identify whether a segment is
the nearest to another one in the direction. Furthermore, we
design a brand-new pre-training objective called Detection
ofDirection Exceptions ( DDE ) for GeoMPair, enabling our
model to capture the common pattern of directions among
segment pairs, enhance the pair feature representation and
discover the detached ones. For GeoTriplet, we propose
aCollinearity Identification of Triplet ( CIT) task to iden-
tify whether three segments are collinear, which takes a step
forward to the modeling of multi-segments relations. It is
important for non-local layout feature learning especially
in form-like documents. Additionally, novel relation heads
are proposed to learn better relation features, which are pre-
trained by the geometric pre-training tasks to absorb prior
knowledge about geometry, thus mitigating the gap between
pre-training and fine-tuning. Extensive experiments on five
public benchmarks demonstrate the effectiveness of the pro-
posed GeoLayoutLM.
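To give a flavor of the geometric quantities behind GeoPair modeling, the sketch below quantizes the direction between two text-segment boxes into eight bins and measures their center distance. The exact labels, bin boundaries, and normalization used in DDM are the authors' design, so this is only an illustration.

```python
import math

def pair_geometry(box_a, box_b):
    """Boxes are (x0, y0, x1, y1); returns (direction_bin in 0..7, center distance)."""
    ax, ay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    bx, by = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    dx, dy = bx - ax, by - ay
    angle = math.atan2(dy, dx) % (2 * math.pi)     # angle in image coordinates
    direction_bin = int(angle // (math.pi / 4)) % 8
    distance = math.hypot(dx, dy)
    return direction_bin, distance

print(pair_geometry((0, 0, 10, 10), (30, 0, 40, 10)))  # -> (0, 30.0): B lies to the right of A
```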
Our contributions are summarized as follows:
1) This paper introduces three geometric relations in dif-
ferent levels and designs three brand-new geometric
pre-training tasks correspondingly for learning the ge-
ometric layout representation explicitly. To the best
of our knowledge, GeoLayoutLM is the first to ex-
plore the geometric relations of multi-pair and multi-
segments in document pre-training.
2) Novel relation heads are proposed to benefit the re-
lation modeling. Besides, the relation heads are pre-
trained by the proposed geometric tasks and fine-tuned
for RE, thus mitigating the object gap between pre-
training and fine-tuning.
3) Experimental results on visual information extraction
tasks including key-value linking as relation extrac-
tion, entity grouping as relation extraction, and seman-
tic entity recognition show that the proposed GeoLay-
outLM significantly outperforms previous state-of-the-
arts with good interpretability. Moreover, our model
has notable advantages in few-shot RE learning.
|
Luo_VideoFusion_Decomposed_Diffusion_Models_for_High-Quality_Video_Generation_CVPR_2023 | Abstract
A diffusion probabilistic model (DPM), which constructs
a forward diffusion process by gradually adding noise to
data points and learns the reverse denoising process to gen-
erate new samples, has been shown to handle complex data
distribution. Despite its recent success in image synthesis,
applying DPMs to video generation is still challenging due
to high-dimensional data spaces. Previous methods usually
adopt a standard diffusion process, where frames in the
same video clip are destroyed with independent noises,
ignoring the content redundancy and temporal correlation.
This work presents a decomposed diffusion process via
resolving the per-frame noise into a base noise that is
shared among all frames and a residual noise that varies
along the time axis. The denoising pipeline employs two
jointly-learned networks to match the noise decomposition
accordingly. Experiments on various datasets confirm
that our approach, termed as VideoFusion, surpasses both
*Work done at Alibaba group.
†Corresponding author.
GAN-based and diffusion-based alternatives in high-quality
video generation. We further show that our decomposed
formulation can benefit from pre-trained image diffusion
models and well-support text-conditioned video creation.
| 1. Introduction
Diffusion probabilistic models (DPMs) are a class of
deep generative models, which consist of: i) a diffusion
process that gradually adds noise to data points, and ii) a
denoising process that generates new samples via iterative
denoising [14, 18]. Recently, DPMs have made awesome
achievements in generating high-quality and diverse im-
ages [20–22, 25, 27, 36].
Inspired by the success of DPMs on image generation,
many researchers are trying to apply a similar idea to video
prediction/interpolation [13, 44, 48]. However, the study of
DPMs for video generation is still at an early stage [16] and
faces challenges, since video data are of higher dimension
and involve complex spatial-temporal correlations.
Previous DPM-based video-generation methods usually
adopt a standard diffusion process, where frames in the
same video are added with independent noises and the
temporal correlations are also gradually destroyed in noised
latent variables. Consequently, the video-generation DPM
is required to reconstruct coherent frames from independent
noise samples in the denoising process. However, it is quite
challenging for the denoising network to simultaneously
model spatial and temporal correlations.
Inspired by the idea that consecutive frames share most
of the content, we are motivated to think: would it be
easier to generate video frames from noises that also
have some parts in common? To this end, we modify
the standard diffusion process and propose a decomposed
diffusion probabilistic model, termed as VideoFusion, for
video generation. During the diffusion process, we resolve
the per-frame noise into two parts, namely base noise
and residual noise , where the base noise is shared by
consecutive frames. In this way, the noised latent variables
of different frames will always share a common part,
which makes it easier for the denoising network to reconstruct
a coherent video. For intuitive illustration, we use the
decoder of DALL-E2 [25] to generate images conditioned
on the same latent embedding. As shown in Fig. 2a, if
the images are generated from independent noises, their
content varies a lot even if they share the same condition.
But if the noised latent variables share the same base noise,
even an image generator can synthesize roughly correlated
sequences (shown in Fig. 2b). Therefore, the burden of the
denoising network of video-generation DPM can be largely
alleviated.
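A minimal sketch of this shared-base-noise idea follows. The mixing ratio and the square-root weighting (chosen here so the mixed noise keeps unit variance) are illustrative assumptions, not necessarily the paper's exact parameterization.

```python
import torch

def decomposed_noise(batch, frames, channels, height, width, ratio=0.5):
    """Per-frame noise = shared base noise + per-frame residual noise."""
    base = torch.randn(batch, 1, channels, height, width)           # shared within a clip
    residual = torch.randn(batch, frames, channels, height, width)  # varies per frame
    noise = (ratio ** 0.5) * base + ((1.0 - ratio) ** 0.5) * residual
    return noise, base, residual         # `noise` still has unit variance per element

noise, base, residual = decomposed_noise(2, 16, 3, 64, 64)
```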
Furthermore, this decomposed formulation brings addi-
tional benefits. Firstly, as the base noise is shared by all
frames, we can predict it by feeding one frame to a large
pretrained image-generation DPM with only one forward
pass. In this way, the image priors of the pretrained
model could be efficiently shared by all frames and thereby
facilitate the learning of video data. Secondly, the base
noise is shared by all video frames and is likely to be related
to the video content. This property makes it possible for
us to better control the content or motions of generated
videos. Experiments in Sec. 4.7 show that, with adequate
training, VideoFusion tends to relate the base noise with
video content and the residual noise to motions (Fig. 1).
Extensive experiments show that VideoFusion can achieve
state-of-the-art results on different datasets and also well
support text-conditioned video creation.
|
Li_Rethinking_Feature-Based_Knowledge_Distillation_for_Face_Recognition_CVPR_2023 | Abstract
With the continual expansion of face datasets, feature-
based distillation prevails for large-scale face recognition.
In this work, we attempt to remove identity supervision in
student training, to spare the GPU memory from saving
massive class centers. However, this naive removal leads to
inferior distillation result. We carefully inspect the perfor-
mance degradation from the perspective of intrinsic dimen-
sion, and argue that the gap in intrinsic dimension, namely
the intrinsic gap, is intimately connected to the infamous
capacity gap problem. By constraining the teacher’s search
space with reverse distillation, we narrow the intrinsic gap
and unleash the potential of feature-only distillation. Re-
markably, the proposed reverse distillation creates univer-
sally student-friendly teacher that demonstrates outstand-
ing student improvement. We further enhance its effective-
ness by designing a student proxy to better bridge the intrin-
sic gap. As a result, the proposed method surpasses state-
of-the-art distillation techniques with identity supervision
on various face recognition benchmarks, and the improve-
ments are consistent across different teacher-student pairs.
| 1. Introduction
Despite the unceasing emergence of larger and more
powerful models for face recognition (FR), industrial de-
ployment continues to demand for accurate and light-
weight solutions. Among other compression techniques like
pruning [27] and quantization [21], knowledge distillation
(KD) has been proven to be effective in producing high-
performing compact model from well-trained teacher. Un-
like classic KD [17] and its variants [14, 24, 43, 44], which
distill on logits, most of the existing works on FR distill on
features [11, 13] or feature-relations [8, 20, 35]. One key
*Equal contribution.†Corresponding author.
[Figure 1 axes: teachers IR34, IR50, IR70, IR100; MR-all accuracy (%); teacher's In.D. Legend: FI, FO, ReFO (ours), FO In.D.]
Figure 1. IResNet18 (IR18) is distilled by four different teach-
ers. Feature-only distillation (FO) shows performance degrada-
tion compared to feature-based distillation with ID supervision
(FI). The proposed method (ReFO) significantly uplifts the perfor-
mance of FO distillation. For both FI and FO, the student perfor-
mance drops with larger teachers of lower intrinsic dimension. In
line plot: student performance (%) on MR-all benchmark [9]. In
bar plot: teacher’s intrinsic dimension (In.D).
reason is that the massive and still growing number of iden-
tities (IDs) in FR datasets, such as the 2 million IDs in Web-
Face42M [45], makes it too expensive to save the teacher's extra
class centers for logits distillation.
The ground truth supervision from ID labels, which we
call ID supervision, is still retained when training student
models for better distillation results. Nonetheless, it is not
only non-trivial to find the right balancing weight [15, 33],
but the obtained class centers are also not needed during infer-
ence in an open-set FR problem. This motivates the com-
plete removal of class centers in the student training for a
number of benefits: 1) speed, the student distillation breaks
free from the need of keeping any class center, providing
further training speed-up with even lower GPU memory oc-
cupancy; 2) access to unlabeled dataset, removing the de-
pendency on ID labels conveniently opens the door to the
vast quantity of unlabeled or uncleaned face images like
WebFace260M [45]; and 3) better focus on feature space,
which is what really matters in an open-set problem. Hence,
in this work, we are motivated to investigate feature distilla-
tion for face recognition without ID supervision, which we
call feature-only (FO) distillation.
The capacity gap problem is widely observed in various
KD applications [7, 19, 30, 37], where the student finds it
increasingly difficult to learn from more powerful teacher
due to larger mismatch in network capacity. In FO distilla-
tion, the naive removal of ID supervision degrades student
performance with more severe capacity gap problem. As
shown in Fig. 1, comparing to the conventional feature dis-
tillation with ID supervision (FI distillation), the IResNet18
(IR18) students trained by four other teachers all experience
drops in performance when ID supervision is removed.
Pertinent works commonly agree that differing model
sizes cause the capacity gap issue [7, 20, 30, 40]. Some
remedies were proposed to mitigate the problem such as
early stopping [7] and training teacher assistants as inter-
mediate agents [30]. Liu et al. [26] further proved the im-
portance of teacher-student structural compatibility. For a
given teacher, their best student from Neural Architecture
Search outperformed other candidates of similar model size
in the search space. However, recent works like [3, 32]
showed that teachers of the same structure, same parameter
size and comparable accuracy can also have differing dis-
tillation results on the same student. Hence, there must be
other factors contributing to the capacity gap problem other
than model size and model structure.
In this work, we argue that the teacher-student gap in in-
trinsic dimension, namely the intrinsic gap , plays a part.
The intrinsic dimension [2, 16, 36] of a feature space is
the minimum number of variables needed to unambigu-
ously describe all points in the feature space. Specifically
for a model, lower intrinsic dimension is often associated
with better generalization power and better performance for
both general classification [2] and face recognition [16]. In
Fig. 1, as the teacher gets stronger with lower intrinsic di-
mension, we observe a drop in student performance with
wider intrinsic gap for both FI distillation and FO distilla-
tion. If a narrower intrinsic gap is related to better distillation
results, can the capacity gap problem be mitigated by closing
the intrinsic gap? This sparks the question of whether it is
possible to narrow the intrinsic gap by raising the teacher's in-
trinsic dimension for easier student learning, without chang-
ing its model size or model structure.
Firstly, we revisit FO distillation and point out the intrin-
sic gap as another factor that could cause ineffective dis-
tillation. Then a reverse distillation strategy is proposed
to solve the problem by injecting knowledge about higher
intrinsic dimensional feature space into the teacher train-
ing. With reverse-distilled teachers, students trained with
just an FO distillation loss like mean-square error (MSE) show
performance on par with or even better than competitors trained
by sophisticatedly designed distillation losses with ID super-
vision [20, 35]. The proposed method is thus fast and versatile; it can be online or offline and easily portable to unla-
beled datasets. On top of that, we further improve the dis-
tillation results by allowing the teacher to learn from more
light-weight student proxies. This better closes the intrin-
sic gap and we are able to obtain state-of-the-art (SOTA)
student models on popular face recognition benchmarks.
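The training signal in feature-only distillation can be summarized with the minimal sketch below; the tiny stand-in backbones, input sizes, and optimizer settings are assumptions, and the reverse distillation of the teacher itself is not shown.

```python
import torch
import torch.nn as nn

# stand-in backbones; in practice these are face-recognition networks
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 512)).eval()
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 512))
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

images = torch.randn(8, 3, 112, 112)          # unlabeled face crops are enough
with torch.no_grad():
    t_feat = teacher(images)                  # teacher embedding; no class centers kept
loss = nn.functional.mse_loss(student(images), t_feat)   # the only training signal
optimizer.zero_grad()
loss.backward()
optimizer.step()
```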
To summarize, the contribution of this work includes:
• We reconsider the capacity gap issue in FO distillation
and provide an alternative view from the perspective
of the intrinsic dimension. The gap in the intrinsic di-
mension between the teacher and the student is found
to be related to the distillation performance.
• We propose a novel training scheme that narrows the
teacher-student intrinsic gap via reverse distillation in
the teacher training. Furthermore, we enhance its ef-
fectiveness by designing light-weight student proxies
as the reverse distillation targets. Students trained
by the new teachers show consistent performance im-
provement on FO distillation.
• Our method pushes the limit of FO distillation with
easier-to-learn teacher. With only feature distillation
loss, resulting students are shown to be superior than
students trained by other SOTA distillation techniques
with ID supervision.
|
Nguyen_Re-Thinking_Model_Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2023 | Abstract
Model inversion (MI) attacks aim to infer and recon-
struct private training data by abusing access to a model.
MI attacks have raised concerns about the leaking of sen-
sitive information (e.g. private face images used in train-
ing a face recognition system). Recently, several algorithms
for MI have been proposed to improve the attack perfor-
mance. In this work, we revisit MI, study two fundamental
issues pertaining to all state-of-the-art (SOTA) MI algo-
rithms , and propose solutions to these issues which lead to
a significant boost in attack performance for all SOTA MI.
In particular, our contributions are two-fold: 1) We ana-
lyze the optimization objective of SOTA MI algorithms, ar-
gue that the objective is sub-optimal for achieving MI, and
propose an improved optimization objective that boosts at-
tack performance significantly. 2) We analyze “MI overfit-
ting”, show that it would prevent reconstructed images from
learning semantics of training data, and propose a novel
“model augmentation” idea to overcome this issue. Our
proposed solutions are simple and improve all SOTA MI at-
tack accuracy significantly. E.g., in the standard CelebA
benchmark, our solutions improve accuracy by 11.8% and
achieve for the first time over 90% attack accuracy. Our
findings demonstrate that there is a clear risk of leak-
ing sensitive information from deep learning models. We
urge serious consideration to be given to the privacy im-
plications. Our code, demo, and models are available
athttps://ngoc-nguyen-0.github.io/re-
thinking_model_inversion_attacks/ .
| 1. Introduction
Privacy of deep neural networks (DNNs) has attracted
considerable attention recently [2, 3, 23, 31, 32]. Today,
DNNs are being applied in many domains involving pri-
vate and sensitive datasets, e.g., healthcare, and security.
There is a growing concern of privacy attacks to gain knowl-
edge of confidential datasets used in training DNNs. One
*Equal Contribution†Corresponding Authorimportant category of privacy attacks is Model Inversion
(MI) [5, 8, 11, 12, 16, 36, 37, 39, 40] (Fig. 1). Given ac-
cess to a model, MI attacks aim to infer and reconstruct fea-
tures of the private dataset used in the training of the model.
For example, a malicious user may attack a face recognition
system to reconstruct sensitive face images used in training.
Similar to previous work [5,36,39], we will use face recog-
nition models as the running example.
Related Work. MI attacks were first introduced in [12],
where simple linear regression is the target of attack. Re-
cently, there is a fair amount of interest to extend MI to com-
plex DNNs. Most of these attacks [5, 36, 39] focus on the
whitebox setting and the attacker is assumed to have com-
plete knowledge of the model subject to attack. As many
platforms provide downloading of entire trained DNNs for
users [5, 39], whitebox attacks are important. [39] proposes
Generative Model Inversion (GMI) attack, where generic
public information is leveraged to learn a distributional
prior via generative adversarial networks (GANs) [13, 35],
and this prior is used to guide reconstruction of private
training samples. [5] proposes Knowledge-Enriched Dis-
tributional Model Inversion (KEDMI), where an inversion-
specific GAN is trained by leveraging knowledge provided
by the target model. [36] proposes Variational Model Inver-
sion (VMI), where a probabilistic interpretation of MI leads
to a variational objective for the attack. KEDMI and VMI
achieve SOTA attack performance (See Supplementary for
further discussion of related work).
In this paper, we revisit SOTA MI, study two issues
pertaining to all SOTA MI and propose solutions to these is-
sues that are complementary and applicable to all SOTA MI
(Fig. 1). In particular, despite the range of approaches pro-
posed in recent works, common and central to all these ap-
proaches is an inversion step which formulates reconstruc-
tion of training samples as an optimization. The optimiza-
tion objective in the inversion step involves the identity loss ,
which is the same for all SOTA MI and is formulated as the
negative log-likelihood for the reconstructed samples under
the model being attacked. While ideas have been proposed
to advance other aspects of MI, effective design of the iden-
tity loss has not been studied .
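For reference, this identity loss can be written as a cross-entropy (negative log-likelihood) of the target identity under the attacked model, evaluated on the reconstruction produced from a latent code. The generator and target model below are placeholders, and the prior or regularization terms that individual attacks add are omitted.

```python
import torch
import torch.nn.functional as F

def identity_loss(generator, target_model, z, target_id):
    """-log p(target_id | G(z)) under the attacked classifier."""
    x_rec = generator(z)                       # candidate reconstruction
    logits = target_model(x_rec)               # attacked model's class scores
    return F.cross_entropy(logits, target_id)  # negative log-likelihood of the target ID
```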
|
Qu_Modality-Agnostic_Debiasing_for_Single_Domain_Generalization_CVPR_2023 | Abstract
Deep neural networks (DNNs) usually fail to generalize well to
out-of-distribution (OOD) data, especially in the extreme case of
single domain generalization (single-DG) that transfers DNNs from
a single domain to multiple unseen domains. Existing single-DG
techniques commonly devise various data-augmentation algorithms,
and remould the multi-source domain generalization methodology
to learn domain-generalized (semantic) features. Nevertheless,
these methods are typically modality-specific, thereby being only
applicable to one single modality (e.g., image). In contrast, we
target a versatile Modality-Agnostic Debiasing (MAD) framework
for single-DG, that enables generalization for different modalities.
Technically, MAD introduces a novel two-branch classifier: a
biased-branch encourages the classifier to identify the
domain-specific (superficial) features, and a general-branch
captures domain-generalized features based on the knowledge from
the biased-branch. Our MAD is appealing in view that it is
pluggable into most single-DG models. We validate the superiority
of our MAD in a variety of single-DG scenarios with different
modalities, including recognition on 1D texts, 2D images, 3D point
clouds, and semantic segmentation on 2D images. More remarkably,
for recognition on 3D point clouds and semantic segmentation on 2D
images, MAD improves DSU by 2.82% and 1.5% in accuracy and mIoU.
Deep neural networks (DNNs) have achieved remarkable
success in various tasks under the assumption that train-ing and testing domains are independent and sampled fromidentical or sufficiently similar distribution [ 2,48]. How-
ever, this assumption often does not hold in most real-world scenarios. When deploying DNNs to unseen or out-of-distribution (OOD) testing domains, inevitable perfor-mance degeneration is commonly observed. The difficulty
mainly originates from that the backbone of DNNs ex-
*Corresponding author
Figure 1. Most existing single-DG techniques devise various data
augmentation algorithms to introduce various image textures and
styles, pursuing the learning of domain-generalized features.
However, these approaches are modality-specific, and only
applicable to a single modality (e.g., image). Hence it is difficult
to directly employ such a single-DG approach for 3D point clouds,
since the domain shifts in 3D point clouds only reflect geometric
differences rather than texture and style differences.
tracts more domain-specific (superficial) features together with
domain-generalized (semantic) features. Therefore, the classifier
is prone to paying much attention to those domain-specific
features, and learning unintended decision rules [53]. To mitigate
this issue, several appealing solutions have been developed,
including Domain Adaptation (DA) [18, 32, 36, 40, 41] and Domain
Generalization (DG) [31, 56, 62, 65]. Despite showing encouraging
performance on OOD data, their real-world applications are still
limited due to the requirement to have data from another domain
(i.e., the unseen target domain or multiple source domains with
different distributions). In this work, we focus on an extreme case
in domain generalization: single domain generalization (single-DG),
in which DNNs are trained with single source domain data and then
required to generalize well to multiple unseen target domains.
Previous studies [19, 55] demonstrate that the specific local
textures and image styles tailored to each domain are the two main
causes of domain-specific features for images. To alleviate this,
recent works [30, 37, 58, 63] de-
sign a variety of data-augmentation algorithms to introduce
diversified textures and image styles. The DG methodologies are
then remolded with these data-augmentation algorithms to facilitate
the learning of domain-generalized features. Nevertheless, such a
solution for single-DG is typically modality-specific and only
applicable to the single modality input of images. When a new
modality arrives (e.g., 3D point clouds), it is difficult to
directly apply these techniques to tackle the single-DG problem.
This is due to the fact that the domain shift in 3D point clouds is
interpreted as differences in 3D structural information among
multiple domains, instead of the texture and style differences in
2D images [10, 39]. Figure 1 conceptually illustrates this issue,
which has been seldom explored in the literature.
In this paper, we propose to address this limitation from the
standpoint of directly strengthening the capacity of the classifier
to identify domain-specific features, while emphasizing the
learning of domain-generalized features. This completely eliminates
the need for modality-specific data augmentations, thereby leading
to a versatile modality-agnostic paradigm for single-DG.
Technically, to materialize this idea, we design a novel
Modality-Agnostic Debiasing (MAD) framework that facilitates single
domain generalization under a wide variety of modalities. In
particular, MAD integrates the basic backbone for feature
extraction with a new two-branch classifier structure. One branch
is the biased-branch, which identifies those superficial and
domain-specific features with a multi-head cooperated classifier.
The other branch is the general-branch, which learns to capture
domain-generalized representations on the basis of the knowledge
derived from the biased-branch. It is also appealing in view that
our MAD can be seamlessly incorporated into most existing single-DG
models with data-augmentation, thereby further boosting single
domain generalization.
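A minimal sketch of such a two-branch head is shown below. The number of biased heads, the feature dimension, and the plain linear classifiers are illustrative assumptions, and the losses that couple the two branches during debiasing are omitted.

```python
import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    """Biased branch (several cooperating heads) plus a general branch on shared features."""
    def __init__(self, feat_dim=512, num_classes=10, num_biased_heads=3):
        super().__init__()
        self.biased_heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_biased_heads)])
        self.general_head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):                  # feats: (B, feat_dim) from the backbone
        biased_logits = [head(feats) for head in self.biased_heads]
        general_logits = self.general_head(feats)
        return biased_logits, general_logits

biased, general = TwoBranchHead()(torch.randn(4, 512))
```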
We analyze and evaluate our MAD under a variety of single-DG
scenarios with different modalities, ranging from recognition on 2D
images, 3D point clouds, and 1D texts, to semantic segmentation on
2D images. Extensive experiments demonstrate the superior
advantages of MAD when plugged into a series of existing single-DG
techniques with data-augmentation (e.g., MixStyle [65] and
DSU [30]). More remarkably, for recognition on the point cloud
benchmark, MAD significantly improves the accuracy of DSU from
33.63% to 36.45%. For semantic segmentation on the image benchmark,
MAD advances DSU with an mIoU improvement from 42.3% to 43.8%.
|
Qiu_Graph_Representation_for_Order-Aware_Visual_Transformation_CVPR_2023 | Abstract
This paper proposes a new visual reasoning formula-
tion that aims at discovering changes between image pairs
and their temporal orders. Recognizing scene dynamics and
their chronological orders is a fundamental aspect of human
cognition. The aforementioned abilities make it possible to
follow step-by-step instructions, reason about and analyze
events, recognize abnormal dynamics, and restore scenes
to their previous states. However, it remains unclear how
well current AI systems perform in these capabilities. Al-
though a series of studies have focused on identifying and
describing changes from image pairs, they mainly consider
those changes that occur synchronously, thus neglecting po-
tential orders within those changes. To address the above
issue, we first propose a visual transformation graph struc-
ture for conveying order-aware changes. Then, we bench-
marked previous methods on our newly generated dataset
and identified the issues of existing methods for change order
recognition. Finally, we show a significant improvement in
order-aware change recognition by introducing a new model
that explicitly associates different changes and then identi-
fies changes and their orders in a graph representation.
| 1. Introduction
The Only Constant in Life Is Change.
- Heraclitus
Humans conduct numerous reasoning processes beyond
object and motion recognition. Through these processes, we
can capture a wide range of information with just a glimpse
of a scenario. To achieve human-level visual understanding,
various studies have recently focused on different aspects of
visual reasoning, such as compositional [1–4], causal [5, 6],
abstract [7–9], abductive [10, 11], and commonsense visual
reasoning [12, 13]. Due to the ever-changing visual sur-
roundings, perceiving and reasoning over scene dynamics are
essential. However, most existing visual reasoning studies
focus on scenes in fixed periods of time. Therefore, this
study focuses on a new formulation of visual reasoning for
identifying scene dynamics.
[Figure 1 diagram: an encoder followed by VTGen maps a before-change/after-change image pair to a visual transformation graph whose nodes include, e.g., (move: cyan rubber cube, from purple rubber sphere to ground), (add: gray rubber cylinder, onto cyan rubber cube), (add: gray rubber cube, onto purple rubber sphere), (move: gray metal sphere, from ground onto blue rubber sphere).]
Figure 1. Overview of the proposed order-aware change recog-
nition model VTGen (top). From an image pair observed before
and after multiple synchronous and asynchronous changes, VTGen
generates a visual transformation graph (bottom) where nodes in-
dicate change contents (including type, object attributes, original
position described by what is underneath it, and new position), and
directed edges indicate temporal orders of changes.
Due to variations in the spatial positions of objects and
the temporal order of human activities, changes within a
pair of observations could occur simultaneously or asyn-
chronously. Several recent studies have already discussed
recognizing and describing synchronous changes from a pair
of images via natural language texts [14–16] while neglect-
ing the potential orders between changes. However, iden-
tifying temporal orders is an integral aspect of revealing
how scene dynamics occur in time, making it possible to
restore scenes to their previous states. Temporal orders are
also critical in a variety of applications, such as room rear-
rangement [17], assembly operation [18, 19], and instruction
following [20,21]. Change order recognition presents new
challenges as it requires reasoning over underlying tempo-
ral events, and it also complicates single change recognition
due to entangled appearances and localization. However,
despite its complexity, humans exhibit high performance in
finding multiple changes and determining their temporal or-
ders from a pair of images. For example, even human chil-
dren under age five can perform assembly operations with
toys in games like LEGO building blocks. Therefore, in this
work, we are particularly interested in how well the current
AI methods perform in order-aware change recognition.
Similar to discovering dynamics and orders from a pair of
scene observations, there is a group of works that discusses
the step-by-step assembly of objects from their component
parts [18,22–24]. Such part assembly studies tend to focus
on recovering the sequence steps for rebuilding objects and
are thus highly useful in robotic applications used for as-
sembly operations or instruction following. However, part
assembly operations focus on the reconstruction of objects
from their parts, and not finding the differences between two
discrete scene observations. Moreover, instead of directly
recovering all steps from two single observations, existing
part assembly methods require additional information, such
as language instructions or demonstration videos, for their
step generation processes.
As shown in Figure 1, this study proposes a new task to
identify order-aware changes directly from a pair of images.
Most existing studies generate a single sentence [14,15],
paragraph [16], or triplets [25] for describing changes. How-
ever, sentences are lengthy and less suitable for simultane-
ously indicating change contents and their orders, and make
model analysis and evaluation opaque. Hence, we propose
the use of an order-aware transformation graph (Figure 1
bottom). Change contents are represented by nodes and their
chronological orders by directed edges. To diagnose model
performance, we generated a dataset, named order-aware vi-
sual transformation (OVT), consisting of asynchronous and
synchronous changes between scene observations.
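To make the graph representation concrete, the sketch below shows one possible data structure for such order-aware transformation graphs, using two of the change nodes from Figure 1. The class layout is illustrative and is not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeNode:
    change_type: str       # e.g. "move" or "add"
    obj: str               # attributes of the changed object
    pos_before: str        # what was underneath it before the change ("-" if newly added)
    pos_after: str         # what is underneath it after the change

@dataclass
class TransformationGraph:
    nodes: list = field(default_factory=list)   # ChangeNode instances
    edges: list = field(default_factory=list)   # (earlier_index, later_index) pairs

g = TransformationGraph()
g.nodes.append(ChangeNode("add", "gray rubber cube", "-", "purple rubber sphere"))
g.nodes.append(ChangeNode("move", "gray metal sphere", "ground", "blue rubber sphere"))
g.edges.append((0, 1))      # the addition happened before the move
```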
We then conducted benchmark experiments using ex-
isting methods and found they showed seriously degraded
performance in terms of order-aware change recognition.
Although neglected by existing methods, associations be-
tween changes, and disentangled representations of change
contents and orders are useful in identifying order-aware
changes. Therefore, we propose a novel method called vi-
sual transformation graph generator (VTGen) that explicitly
associates different changes and generates a graph that de-
scribes change contents and their orders in a disentangled
manner. VTGen achieved state-of-the-art performance in
the OVT dataset and an existing benchmark CLEVR-Multi-
Change [16], and outperformed existing methods by large
margins. However, we also found a significant performance
gap between the best-performing model and humans. We
hope our research and OVT dataset can contribute to achiev-
ing human-level visual reasoning in scene dynamics.
Our contributions are three-fold: i. We propose a novel
task and a dataset named OVT for order-aware visual trans-
formation. ii. We report on benchmark evaluations of ex-
isting change recognition methods in order-aware change
recognition and discuss their shortcomings. iii. We propose
a novel method VTGen that achieves state-of-the-art perfor-
mance in the OVT dataset and an existing change recogni-
tion benchmark.
|
Metzger_Guided_Depth_Super-Resolution_by_Deep_Anisotropic_Diffusion_CVPR_2023 | Abstract
Performing super-resolution of a depth image using
the guidance from an RGB image is a problem that con-
cerns several fields, such as robotics, medical imaging,
and remote sensing. While deep learning methods have
achieved good results in this problem, recent work high-
lighted the value of combining modern methods with more
formal frameworks. In this work, we propose a novel ap-
proach which combines guided anisotropic diffusion with a
deep convolutional network and advances the state of the
art for guided depth super-resolution. The edge transfer-
ring/enhancing properties of the diffusion are boosted by
the contextual reasoning capabilities of modern networks,
and a strict adjustment step guarantees perfect adherence
to the source image. We achieve unprecedented results in
three commonly used benchmarks for guided depth super-
resolution. The performance gain compared to other meth-
ods is the largest at larger scales, such as ×32 scaling.
Code1 for the proposed method is available to promote re-
producibility of our results.
| 1. Introduction
It is a primordial need for visual data analysis to in-
crease the resolution of images after they have been cap-
tured. In many fields one is faced with images that, for
technical reasons, have too low resolutions for the intended
purposes, e.g., MRI scans in medical imaging [48], multi-
spectral satellite images in Earth observation [22], thermal
surveillance images [1] and depth images in robotics [9]. In
some cases, an image of much higher resolution is available
in a different imaging modality, which can act as a guide
for super-resolving the low-resolution source image, by in-
jecting the missing high-frequency content. For instance,
in Earth observation, the guide is often a panchromatic im-
age (hence the term ”pan-sharpening”), whereas in robotics
a conventional RGB image is often attached to the same
*Equal contribution.
1https://github.com/prs-eth/Diffusion-Super-Resolution
Figure 1. We super-resolve a low-resolution depth image by find-
ing the equilibrium state of a constrained anisotropic diffusion pro-
cess. Learned diffusion coefficients favor smooth depth within ob-
jects and suppress diffusion across discontinuities. They are de-
rived from the guide with a neural feature extractor that is trained
by back-propagating through the diffusion process.
platform as a TOF camera or laser scanner. In this paper,
we focus on super-resolving depth images guided by RGB
images, but the proposed framework is generic and can be
adapted to other sensor combinations, too.
Research into guided super-resolution has a long his-
tory [16, 29]. The proposed solutions range from classical,
entirely hand-crafted schemes [10] to fully learning-based
methods [15], while some recent works have combined
the two schools of thought, with promising results [5, 32].
Many classical methods boil down to an image-specific op-
timization problem that must be solved at inference time,
which often makes them slow and memory-hungry. More-
over, they are limited to low-level image properties of the
guide, such as color and contrast, and lack the high-level
image understanding and contextual reasoning of modern
neural networks. On the positive side, by design, they can
not overfit the peculiarities of a training set and tend to gen-
eralize better. Recent work on guided super-resolution has
focused on deep neural networks. Their superior ability to
capture latent image structure has greatly advanced the state
of the art over traditional, learning-free approaches. Still,
these learning-based methods tend to struggle with sharp
discontinuities and often produce blurry edges in the super-
resolved depth maps. Moreover, like many deep learning
systems, they degrade – often substantially – when applied
to images with even slightly different characteristics. Note
also that standard feed-forward architectures cannot guar-
antee a consistent solution: feeding the source and guiding
images through an encoder-decoder structure to obtain the
super-resolved target will, by itself, not ensure that down-
sampling the target will reproduce the source.
We propose a novel approach for guided depth super-
resolution which combines the strengths of optimization-
based and deep learning-based super-resolution. In short,
our method is a combination of anisotropic diffusion (based
on the discretized version of the heat equation) with deep
feature learning (based on a convolutional backbone). The
diffusion part resembles classical optimization approaches,
solved via an iterative diffusion-adjustment loop. Every it-
eration consists of (1) an anisotropic diffusion step [2, 4,
23, 30], with diffusion weights driven by the guide in such
a way that diffusion (i.e., smoothing) is low across high-
contrast boundaries and high within homogeneous regions;
and (2) an adjustment step that rescales the depth values
such that they exactly match the low-resolution source when
downsampled. To harness the unmatched ability of deep
learning to extract informative image features, the diffusion
weights are not computed from raw brightness values but
are set by passing the guide through a (fully) convolutional
feature extractor. An overview of the method is depicted
in Fig. 1. The technical core of our method is the insight
that such a feature extractor can be trained end-to-end to
optimally fulfill the requirements of the subsequent opti-
mization, by back-propagating gradients through the iter-
ations of the diffusion loop. Despite its apparent simplicity,
this hybrid approach delivers excellent super-resolution re-
sults. In our experiments, it consistently outperforms prior
art on three different datasets, across a range of upsampling
factors from ×4 to ×32. In our experiments, we compare
it to six recent learning methods as well as five different
learning-free methods. For completeness, we also include
a learning-free version of our diffusion-adjustment scheme
and show that it outperforms all other learning-free meth-
ods. Beyond the empirical performance, our method in-
herits several attractive properties from its ingredients: the
diffusion-based optimization scheme ensures strict adher-
ence to the depth values of the source, crisp edges, and a
degree of interpretability; whereas deep learning equips the
very local diffusion weights with large-scale context infor-
mation, and offers a tractable, constant memory footprint at
inference time. In summary, our contributions are:
1. We develop a hybrid framework for guided super-
resolution that combines deep feature learning and
anisotropic diffusion in an integrated, end-to-end train-
able pipeline;
2. We provide an implementation of that scheme with
constant memory demands, and with inference time
that is constant for a given upsampling factor and
scales linearly with the number of iterations;
3. We set a new state of the art for the Middlebury [38],
NYUv2 [39] and DIML [20] datasets, for upsampling
factors from 4× to 32×, and provide empirical evi-
dence that our method indeed guarantees exact consis-
tency with the source image.
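As a concrete illustration of the diffusion-adjustment loop described above, the following PyTorch sketch alternates an anisotropic diffusion step, whose coefficients are derived from a learned feature extractor applied to the guide, with an adjustment step that rescales each block so the downsampled result matches the source. The coefficient parameterisation, the multiplicative block-wise adjustment, the assumption of strictly positive depths, and all hyperparameter values are our own illustrative choices, not the authors' released implementation (see the repository linked in the abstract for that).

```python
import torch
import torch.nn.functional as F

def guided_diffusion_sr(source_lr, guide, feature_net, scale=8, iters=100, lam=0.2):
    """Minimal sketch of the diffusion-adjustment loop; `feature_net` is any CNN
    mapping the RGB guide to a feature map (an assumption of this sketch)."""
    # Initialize the high-resolution estimate by upsampling the low-resolution source.
    depth = F.interpolate(source_lr, scale_factor=scale, mode="bilinear", align_corners=False)

    # Diffusion coefficients between neighbouring pixels, derived from the guide:
    # close to 1 in smooth regions, small across strong feature discontinuities.
    feat = feature_net(guide)
    coeff_h = torch.exp(-(feat[..., :, 1:] - feat[..., :, :-1]).pow(2).sum(1, keepdim=True))
    coeff_v = torch.exp(-(feat[..., 1:, :] - feat[..., :-1, :]).pow(2).sum(1, keepdim=True))

    for _ in range(iters):
        # (1) Anisotropic diffusion step (explicit discretisation of the heat equation).
        flux_h = coeff_h * (depth[..., :, 1:] - depth[..., :, :-1])
        flux_v = coeff_v * (depth[..., 1:, :] - depth[..., :-1, :])
        depth = depth + lam * (F.pad(flux_h, (0, 1)) - F.pad(flux_h, (1, 0))
                               + F.pad(flux_v, (0, 0, 0, 1)) - F.pad(flux_v, (0, 0, 1, 0)))

        # (2) Adjustment step: rescale each scale x scale block so that average
        # pooling the result reproduces the low-resolution source exactly.
        ratio = source_lr / F.avg_pool2d(depth, scale).clamp(min=1e-6)
        depth = depth * F.interpolate(ratio, scale_factor=scale, mode="nearest")
    return depth
```

Because every operation above is differentiable, gradients can be back-propagated through the loop to train the feature extractor end-to-end, which is the core insight stated in the introduction.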
|
Luo_Towards_Generalisable_Video_Moment_Retrieval_Visual-Dynamic_Injection_to_Image-Text_Pre-Training_CVPR_2023 | Abstract
The correlation between the vision and text is essen-
tial for video moment retrieval (VMR), however, existing
methods heavily rely on separate pre-training feature ex-
tractors for visual and textual understanding. Without suf-
ficient temporal boundary annotations, it is non-trivial to
learn universal video-text alignments. In this work, we
explore multi-modal correlations derived from large-scale
image-text data to facilitate generalisable VMR. To address
the limitations of image-text pre-training models on captur-
ing the video changes, we propose a generic method, re-
ferred to as Visual-Dynamic Injection (VDI), to empower
the model’s understanding of video moments. Whilst ex-
isting VMR methods are focusing on building temporal-
aware video features, being aware of the text descriptions
about the temporal changes is also critical but originally
overlooked in pre-training by matching static images with
sentences. Therefore, we extract visual context and spa-
tial dynamic information from video frames and explicitly
enforce their alignments with the phrases describing video
changes ( e.g. verb). By doing so, the potentially relevant
visual and motion patterns in videos are encoded in the cor-
responding text embeddings (injected) so to enable more
accurate video-text alignments. We conduct extensive ex-
periments on two VMR benchmark datasets (Charades-STA
and ActivityNet-Captions) and achieve state-of-the-art per-
formances. Especially, VDI yields notable advantages when
being tested on the out-of-distribution splits where the test-
ing samples involve novel scenes and vocabulary.
| 1. Introduction
Video moment retrieval (VMR) aims at locating a video
moment by its temporal boundary in a long and untrimmed
video according to a natural language sentence [3, 13]. It
*Corresponding authors
Figure 1. Contemporary methods lack moment-text correlations. Our method takes advantage of image-text pre-trained models and learns moment-text correlations by visual-dynamic injection.
is a critical task which has been extensively studied in a va-
riety of real-world applications including human-computer
interaction [5], and intelligent surveillance [9]. In practice,
raw videos are usually unscripted and unstructured, while
the words being chosen for describing the same video mo-
ments can be varied from person to person [45, 63]. To
be generalisable to different scenes, VMR is fundamentally
challenging as it requires the comprehension of arbitrary
complex visual and motion patterns in videos and an un-
bounded vocabulary with their intricate relationships.
For the fine-grained retrieval objective of VMR, the pre-
cise segment-wise temporal boundary labels are intuitively
harder to be collected than conventional image/video-level
annotations. In this case, rather than training from scratch
with a limited number of temporally labelled videos, ex-
isting VMR solutions [3, 13, 14, 62] heavily rely on single-
modal pre-training [8, 48] for visual and textual understand-
ing (Fig. 1 (a)). By doing so, they focus on modelling the
correlations between the pre-learned features of videos and
sentences. Nonetheless, without sufficient training data, it
is non-trivial to derive universal video-text alignments so as to
generalise to novel scenes and vocabulary.
Separately, the recent successes achieved by joint vision-
language pre-training in zero-shot learning [21, 42] demon-
strate the potential of adapting the multi-modal correlations
derived from large-scale visual-textual data to facilitate gen-
eralisable VMR. Whilst it is intuitive to adopt the video-
text pre-learned features [34, 38, 50] for moment retrieval
(Fig. 1 (b)), it has been shown that the models pre-trained
with coarse-grained video-level labels can not transfer well
to localisation-based tasks like VMR due to their unaware-
ness of fine-grained alignments between text and frames or
clips [2]. Such a misalignment problem is less likely to exist
in pre-training by image-text matching. However, image-
based pre-training models [21, 42] are less sensitive to the
changes in videos and the words describing such dynamics
in text [17]. This is inherent in matching sentences and im-
ages with static content but is significant in understanding
video actions and activities (Fig. 1 (c)). It is suboptimal to
directly apply image-text pre-learned features on VMR.
In this work, we propose a generic method for exploit-
ing large-scale image-text pre-training models to benefit
generalisable VMR by the universal visual-textual corre-
lations derived in pre-training, dubbed as Visual-Dynamic
Injection (VDI). The key idea is to explore the visual con-
text and spatial dynamic information from videos and in-
ject that into text embeddings to explicitly emphasise the
phrases describing video changes ( e.g. verb) in sentences
(Fig. 1 (d)). Such visual and dynamic information in text is
critical for locating video moments composed of arbitrary
evolving events but unavailable or overlooked in image-
text pre-training. Specifically, we consider it essential for
VMR models to answer two questions: “what are the ob-
jects” and “how do the objects change”. The visual context
information indicates the content in the frames, e.g. back-
grounds (scenes), appearances of objects, poses of subjects,
etc. Meanwhile, the spatial dynamic is about the location
changes of different salient entities in a video, which po-
tentially implies the development of their interactions. VDI
is a generic formulation, which can be integrated into any
existing VMR model. The only refinement is to adapt the
text encoder by visual-dynamic information injection dur-
ing training. Hence, no additional computation costs are
introduced in inference.
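To make the injection idea above more concrete, the sketch below shows one possible alignment objective: verb embeddings from the text encoder are contrastively aligned with a pooled visual-context embedding and a spatial-dynamic embedding derived from how salient objects move across frames. The function names, the in-batch contrastive form, and the assumption that such embeddings are already computed by separate encoders are ours; the paper's exact losses and encoders may differ.

```python
import torch
import torch.nn.functional as F

def injection_loss(verb_emb, context_emb, dynamic_emb, tau=0.07):
    """Hedged sketch of an injection objective for change-describing words."""
    v = F.normalize(verb_emb, dim=1)          # (B, D) embeddings of verbs / change phrases
    c = F.normalize(context_emb, dim=1)       # (B, D) visual context pooled over frames
    d = F.normalize(dynamic_emb, dim=1)       # (B, D) encoded object-position dynamics

    def nce(a, b):                            # in-batch contrastive alignment
        logits = a @ b.t() / tau
        targets = torch.arange(a.shape[0], device=a.device)
        return F.cross_entropy(logits, targets)

    return 0.5 * (nce(v, c) + nce(v, d))
```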
Our contributions are three-fold: (1) To the best of our knowledge, this is the first attempt at injecting visual and
dynamic information to image-text pre-training models to
enable generalisable VMR. (2) We propose a novel method
for VMR called Visual-Dynamic Injection (VDI). The VDI
method is a generic formulation that can be integrated into
existing VMR models and benefits them from the universal
visual-textual alignments derived from large-scale image-
text data. (3) VDI achieves state-of-the-art performance on two standard VMR benchmark datasets. More importantly, it yields notable performance advantages when
being tested on the out-of-distribution splits where the test-
ing samples involve novel scenes and vocabulary. VDI’s
superior generalisation ability demonstrates its potential
for adapting image-text pre-training to video understanding
tasks requiring fine-grained visual-textual comprehensions.
|
Qin_Learning_To_Exploit_the_Sequence-Specific_Prior_Knowledge_for_Image_Processing_CVPR_2023 | Abstract
The hardware image signal processing (ISP) pipeline is
the intermediate layer between the imaging sensor and the
downstream application, processing the sensor signal into
an RGB image. The ISP is less programmable and con-
sists of a series of processing modules. Each processing
module handles a subtask and contains a set of tunable hy-
perparameters. A large number of hyperparameters form
a complex mapping with the ISP output. The industry typi-
cally relies on manual and time-consuming hyperparameter
tuning by image experts, biased towards human perception.
Recently, several automatic ISP hyperparameter optimiza-
tion methods using downstream evaluation metrics come
into sight. However, existing methods for ISP tuning treat
the high-dimensional parameter space as a global space for
optimization and prediction all at once without inducing the
structure knowledge of ISP . To this end, we propose a se-
quential ISP hyperparameter prediction framework that uti-
lizes the sequential relationship within ISP modules and the
similarity among parameters to guide the model sequence
process. We validate the proposed method on object detec-
tion, image segmentation, and image quality tasks.
| 1. Introduction
Hardware ISPs are low-level image processing pipelines
that convert RAW images into high-quality RGB images.
Typically, ISPs include a series of processing modules [5],
each of which handles a subtask such as denoising, white
balance, demosaicing, or sharpening. Compared to soft-
ware image processing pipelines, hardware ISPs are faster,
more power-efficient, and widely used in real-time prod-
ucts [7, 34], including cameras [28], smartphones, surveil-lance [21], IoT and driven-assistance systems. ISPs are
highly modular and less programmable but with a set of tun-
able hyperparameters. The industry always relies on man-
ual and costly hyperparameter tuning by image experts [1]
to adapt the ISP to different application scenarios.
The ISP is always designed as a sequential pipeline [5]
and the configurable hyperparameters of various modules
from any ISP aggregate to be a complex parameter space
(with tens to hundreds of parameters), making the manual
tuning process time-consuming. It is also difficult to sub-
jectively find optimal hyperparameters settings for various
downstream tasks (such as object detection and image seg-
mentation [29, 30]). Recently, several automatic ISP hyper-
parameter optimization methods [17,31] using downstream
evaluation metrics come into sight. These methods, which tune hyperparameters for downstream tasks, include derivative-
free [18] or gradient methods [12, 25, 27] based on differ-
entiable approximation. There are also methods to demon-
strate the potential of predicting specific hyperparameters
for each image or scene [22]. However, existing meth-
ods treat the high-dimensional parameter space as a global
black-box space for optimization and prediction all at once,
while ignoring the inherent sequence of the ISP modules
and the critical intra-correlation among hyperparameters.
Inspired by the operating principles and structure knowl-
edge of ISPs, we first propose a sequential ISP hyperpa-
rameter prediction framework (as shown in Fig. 1) that con-
tains Sequential CNN model and Global Similarity Group-
ing. The sequential CNN model runs recurrently by predict-
ing a group of parameters from several ISP modules, not all
parameters, at each step. Meanwhile, the predicted param-
eters, along with the network’s hidden layer and the input
data, are in turn encoded as prior knowledge for predicting
the following grouping of parameters. The Global Similar-
ity Grouping module divides ISP parameters into multiple
Figure 1. (A) Previous methods treat the hyperparameter space
as a black box optimization problem and estimate all parameters
at once without considering the prior knowledge of the ISP. (B)
Our proposed method first decouples ISP structural knowledge and
treats ISP tuning as a sequential prediction problem. It is effective
to introduce sequence information and similarity relations in the
high-dimensional hyperparameter space.
disjoint groupings. The sequence order between groupings
is explored heuristically using prior knowledge of the order
of ISP modules. Given the flexibility, groupings are deter-
mined based on similarity among parameters, not limited to
the same module parameters. The correlation of parameter
activation maps learned through the model is used as the
basis for parameter groupings.
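A minimal sketch of the sequential prediction idea is given below: parameter groups are predicted one after another by a recurrent head, with each step conditioned on the image features and the previously predicted group. The backbone, the GRU cell, and the fixed group sizes are illustrative placeholders; in the proposed method the groupings are derived from the similarity of parameter activation maps rather than being set by hand.

```python
import torch
import torch.nn as nn

class SequentialISPPredictor(nn.Module):
    """Illustrative sketch of sequential ISP hyperparameter prediction."""
    def __init__(self, feat_dim=512, hidden=256, group_sizes=(8, 6, 10)):
        super().__init__()
        self.backbone = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.LazyLinear(feat_dim), nn.ReLU())
        self.max_g = max(group_sizes)
        self.rnn = nn.GRUCell(feat_dim + self.max_g, hidden)
        self.heads = nn.ModuleList(nn.Linear(hidden, g) for g in group_sizes)

    def forward(self, feat_map):                            # (B, C, H, W) image features
        feat = self.backbone(feat_map)                      # (B, feat_dim)
        h = feat.new_zeros(feat.shape[0], self.rnn.hidden_size)
        prev = feat.new_zeros(feat.shape[0], self.max_g)    # previously predicted group (padded)
        groups = []
        for head in self.heads:                             # one parameter group per step
            h = self.rnn(torch.cat([feat, prev], dim=1), h)
            params = torch.sigmoid(head(h))                 # hyperparameters normalised to [0, 1]
            groups.append(params)
            prev = nn.functional.pad(params, (0, self.max_g - params.shape[1]))
        return groups
```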
Our contributions can be summarized as follows:
• We propose a new sequential CNN structure to exploit
the sequence processing knowledge within ISP. The
potential sequential information among parameters is
used to guide the processing of the model.
• We exploit the correlation among parameters by the
proposed similarity grouping module. The flexible
parameter groupings allow the exploration of cross-
module relationships among parameters.
• We validate the effectiveness of our method in a variety
of downstream tasks, including object detection, image
segmentation, and image quality. In these applications, our method outperforms existing methods.
|
Li_SViTT_Temporal_Learning_of_Sparse_Video-Text_Transformers_CVPR_2023 | Abstract
Do video-text transformers learn to model temporal re-
lationships across frames? Despite their immense capacity
and the abundance of multimodal training data, recent work
has revealed the strong tendency of video-text models to-
wards frame-based spatial representations, while temporal
reasoning remains largely unsolved. In this work, we iden-
tify several key challenges in temporal learning of video-
text transformers: the spatiotemporal trade-off from limited
network size; the curse of dimensionality for multi-frame
modeling; and the diminishing returns of semantic informa-
tion by extending clip length. Guided by these findings, we
propose SViTT , a sparse video-text architecture that per-
forms multi-frame reasoning with significantly lower cost
than naïve transformers with dense attention. Analogous
to graph-based networks, SViTT employs two forms of
sparsity: edge sparsity that limits the query-key commu-
nications between tokens in self-attention, and node spar-
sity that discards uninformative visual tokens. Trained with
a curriculum which increases model sparsity with the clip
length, SViTT outperforms dense transformer baselines on
multiple video-text retrieval and question answering bench-
marks, with a fraction of computational cost. Project page:
http://svcl.ucsd.edu/projects/svitt .
| 1. Introduction
With the rapid development of deep neural networks for
computer vision and natural language processing, there has
been growing interest in learning correspondences across
the visual and text modalities. A variety of vision-language
pretraining frameworks have been proposed [12, 22, 29, 34]
for learning high-quality cross-modal representations with
weak supervision. Recently, progress on visual transform-
ers (ViT) [5,16,32] has enabled seamless integration of both
modalities into a unified attention model, leading to image-
text transformer architectures that achieve state-the-art per-
formance on vision-language benchmarks [1, 27, 44].
Progress has also occurred in video -language pretraining
by leveraging image-text models for improved frame-based
*Work done during an internship at Intel Labs.
Figure 1. We propose SViTT, a sparse video-text transformer for efficient modeling of temporal relationships across video frames. Top: Semantic information for video-text reasoning is highly localized in the spatiotemporal volume, making dense modeling inefficient and prone to contextual noises. Bottom: SViTT pursues edge sparsity by limiting query-key pairs in self-attention, and node sparsity by pruning redundant tokens from the visual sequence.
reasoning [4, 9, 18]. Spatial modeling has the advantage
of efficient (linear) scaling to long duration videos. Per-
haps due to this, single-frame models have proven surpris-
ingly effective at video-text tasks, matching or exceeding
prior arts with complex temporal components [9,24]. How-
ever, spatial modeling creates a bias towards static appear-
ance and overlooks the importance of temporal reasoning in
videos. This suggests the question: Are temporal dynamics
not worth modeling in the video-language domain?
Upon a closer investigation, we identify a few key chal-
lenges to incorporating multi-frame reasoning in video-
language models. First, limited model size implies a trade-
off between spatial and temporal learning (a classic example
being 2D/3D convolutions in video CNNs [46]). For any
given dataset, optimal performance requires a careful bal-
ance between the two. Second, long-term video models typ-
ically have larger model sizes and are more prone to over-
fitting. Hence, for longer term video models, it becomes
more important to carefully allocate parameters and control
model growth. Finally, even if extending the clip length im-
proves the results, it is subject to diminishing returns since
the amount of information provided by a video clip does
not grow linearly with its sampling rate. If the model size
is not controlled, the computational increase may not jus-
tify the gains in accuracy. This is critical for transformer-
based architectures, since self-attention mechanisms have
a quadratic memory and time cost with respect to input
length. In summary, model complexity should be adjusted
adaptively, depending on the input videos, to achieve the
best trade-off between spatial representation, temporal rep-
resentation, overfitting potential, and complexity. Since ex-
isting video-text models lack this ability, they either attain a
suboptimal balance between spatial and temporal modeling,
or do not learn meaningful temporal representations at all.
Motivated by these findings, we argue that video-text
models should learn to allocate modeling resources to the
video data. We hypothesize that, rather than uniformly ex-
tending the model to longer clips, the allocation of these re-
sources to the relevant spatiotemporal locations of the video
is crucial for efficient learning from long clips. For trans-
former models, this allocation is naturally performed by
pruning redundant attention connections. We then propose
to accomplish these goals by exploring transformer spar-
sification techniques. This motivates the introduction of a
Sparse Video-Text Transformer (SViTT ) inspired by graph
models. As illustrated in Fig. 1, SViTT treats video to-
kens as graph vertices, and self-attention patterns as edges
that connect them. We design SViTT to pursue sparsity
for both: edge sparsity aims at reducing query-key pairs in
attention module while maintaining its global reasoning ca-
pability; node sparsity reduces to identifying informative to-
kens (e.g., corresponding to moving objects or person in the
foreground) and pruning background feature embeddings.
To address the diminishing returns for longer input clips, we
propose to train SViTT with temporal sparse expansion , a
curriculum learning strategy that increases clip length and
model sparsity, in sync, at each training stage.
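The following toy sketch illustrates the two forms of sparsity in isolation (it is not the released SViTT code): edge sparsity keeps only the strongest query-key pairs of each query, and node sparsity forwards only the most-attended tokens to the next layer.

```python
import torch

def sparse_attention(q, k, v, keep_edges=0.25, keep_nodes=0.5):
    """Toy single-head sketch of edge and node sparsity; ratios are illustrative."""
    B, N, D = q.shape
    scores = q @ k.transpose(-2, -1) / D ** 0.5             # (B, N, N)

    # Edge sparsity: per query, keep only the top-k key connections.
    k_edges = max(1, int(keep_edges * N))
    topk = scores.topk(k_edges, dim=-1).indices
    mask = torch.full_like(scores, float("-inf")).scatter(-1, topk, 0.0)
    attn = torch.softmax(scores + mask, dim=-1)
    out = attn @ v                                           # (B, N, D)

    # Node sparsity: score tokens by the attention they receive and keep
    # only the most informative ones for the next layer.
    k_nodes = max(1, int(keep_nodes * N))
    node_score = attn.sum(dim=1)                             # (B, N)
    keep = node_score.topk(k_nodes, dim=-1).indices          # (B, k_nodes)
    out = out.gather(1, keep.unsqueeze(-1).expand(-1, -1, D))
    return out, keep
```

In a curriculum like the one described above, keep_edges and keep_nodes would be tightened in sync with the growing clip length.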
SViTT is evaluated on diverse video-text benchmarks
from video retrieval to question answering, comparing to
prior arts and our own dense modeling baselines. First, we
perform a series of ablation studies to understand the bene-
fit of sparse modeling in transformers. Interestingly, we find
that both nodes (tokens) and edges (attention) can be pruned
drastically at inference, with a small impact on test perfor-
mance. In fact, token selection using cross-modal attention
improves retrieval results by 1% without re-training.
We next perform full pre-training with the sparse mod-
els and evaluate their downstream performance. We observe that SViTT scales well to longer input clips where the accu-
racy of dense transformers drops due to optimization diffi-
culties. On all video-text benchmarks, SViTT reports com-
parable or better performance than their dense counterparts
with lower computational cost, outperforming prior arts in-
cluding those trained with additional image-text corpora.
The key contributions of this work are: 1) a video-text
architecture SViTT that unifies edge and node sparsity; 2)
a sparse expansion curriculum for training SViTT on long
video clips; and 3) empirical results that demonstrate its
temporal modeling efficacy on video-language tasks.
|
Luo_GradMA_A_Gradient-Memory-Based_Accelerated_Federated_Learning_With_Alleviated_Catastrophic_Forgetting_CVPR_2023 | Abstract
Federated Learning (FL) has emerged as a de facto ma-
chine learning area and received rapid increasing research
interests from the community. However, catastrophic forget-
ting caused by data heterogeneity and partial participation
poses distinctive challenges for FL, which are detrimental
to the performance. To tackle the problems, we propose a
new FL approach (namely GradMA), which takes inspira-
tion from continual learning to simultaneously correct the
server-side and worker-side update directions as well as
take full advantage of server’s rich computing and mem-
ory resources. Furthermore, we elaborate a memory reduc-
tion strategy to enable GradMA to accommodate FL with
a large scale of workers. We then analyze convergence of
GradMA theoretically under the smooth non-convex setting
and show that its convergence rate achieves a linear speed
up w.r.t the increasing number of sampled active workers.
At last, our extensive experiments on various image classi-
fication tasks show that GradMA achieves significant per-
formance gains in accuracy and communication efficiency
compared to SOTA baselines. We provide our code here:
https://github.com/lkyddd/GradMA.
| 1. Introduction
Federated Learning (FL) [18, 26] is a privacy-preserving
distributed machine learning scheme in which workers
jointly participate in the collaborative training of a central-
ized model by sharing model information (parameters or
updates) rather than their private datasets. In recent years,
FL has shown its potential to facilitate real-world appli-
cations, which falls broadly into two categories [10]: the
cross-silo FL and the cross-device FL. The cross-silo FL
corresponds to a relatively small number of reliable work-
ers, usually organizations, such as healthcare facilities [9]
and financial institutions [41], etc. In contrast, for the cross-
*Corresponding author
device FL, the number of workers can be very large and
unreliable, such as mobile devices [26], IoT [27] and au-
tonomous driving cars [22], among others. In this paper, we
focus on cross-device FL.
The privacy-preserving and communication-efficient
properties of the cross-device FL make it promising, but it
also confronts practical challenges arising from data hetero-
geneity (i.e., non-iid data distribution across workers) and
partial participation [5,12,20,39]. Specifically, the datasets
held by real-world workers are generated locally accord-
ing to their individual circumstances, resulting in the dis-
tribution of data on different workers being not identical.
Moreover, owing to the flexibility of worker participation
in many scenarios (e.g., IoT and mobile devices), workers
can join or leave the FL system at will, thus making the set
of active workers random and time-varying across commu-
nication rounds. Note that we consider a worker participates
or is active at round t(i.e., the index of the communication
round) if it is able to complete the computation task and
send back model information at the end of round t.
The above-mentioned challenges mainly bring catas-
trophic forgetting (CF) [25, 30, 37] to FL. In a typical FL
process, represented by FedAvg [26], a server updates the
centralized model by iteratively aggregating the model in-
formation from workers that generally is trained over sev-
eral steps locally before being sent to the server. On the
one hand, due to data heterogeneity, the model is updated
on private data in local training, which is prone to overfit
the current knowledge and forget the previous experience,
thus leading to CF [8]. In other words, the updates of the lo-
cal models are prone to drift and diverge increasingly from
the update of the centralized model [12]. This can seriously
deteriorate the performance of the centralized model. To
ameliorate this issue, a variety of existing efforts regular-
ize the objectives of the local models to align the central-
ized optimization objective [1, 12, 13, 17, 19]. On the other
hand, the server can only aggregate model information from
active workers per communication round caused by partial
participation. In this case, many existing works directly dis-
card [1, 11, 12, 18, 26, 39] or implicitly utilize [7, 28], by
means of momentum, the information provided by work-
ers who have participated in the training but dropped out
in the current communication round (i.e., stragglers). This
results in a centralized model that tends to forget the ex-
perience of the stragglers, thus inducing CF. In doing so,
the convergence of popular FL approaches (e.g., FedAvg)
can be seriously slowed down by stragglers. Moreover, all
above approaches solely aggregate the collected informa-
tion by averaging in the server, ignoring the server’s rich
computing and memory resources that could be potentially
harnessed to boost the performance of FL [45].
In this paper, to alleviate CF caused by data heterogene-
ity and stragglers, we bring forward a new FL approach,
dubbed as GradMA (Gradient-Memory-based Accelerated
Federated Learning), which takes inspiration from contin-
ual learning (CL) [4,14,24,29,44] to simultaneously correct
the server-side and worker-side update directions and fully
utilize the rich computing and memory resources of the
server. Concretely, motivated by the success of GEM [24]
and OGD [4], two memory-based CL methods, we invoke
quadratic programming (QP) and memorize updates to cor-
rect the update directions. On the worker side, GradMA
harnesses the gradients of the local model in the previous
step and the centralized model, and the parameters differ-
ence between the local model in the current step and the
centralized model as constraints of QP to adaptively correct
the gradient of the local model. Furthermore, we maintain
a memory state to memorize accumulated update of each
worker on the server side. GradMA then explicitly takes
the memory state to constrain QP to augment the momen-
tum (i.e., the update direction) of the centralized model.
Here, we need the server to allocate memory space to store
the memory state. However, it may not be feasible in FL scenar-
ios with a large size of workers, which can increase the stor-
age cost and the burden of computing QP largely. There-
fore, we carefully craft a memory reduction strategy to alle-
viate the said limitations. In addition, we theoretically ana-
lyze the convergence of GradMA in the smooth non-convex
setting.
To sum up, we highlight our contributions as follows:
• We formulate a novel FL approach GradMA, which
aims to simultaneously correct the server-side and
worker-side update directions and fully harness the
server’s rich computing and memory resources. Mean-
while, we tailor a memory reduction strategy for
GradMA to reduce the scale of QP and memory cost.
• For completeness, we analyze the convergence of
GradMA theoretically in the smooth non-convex set-
ting. As a result, the convergence result of GradMA
achieves the linear speed up as the number of selected
active workers increases.
• We conduct extensive experiments on four com-
monly used image classification datasets (i.e., MNIST,
CIFAR-10, CIFAR-100 and Tiny-Imagenet) to show
that GradMA is highly competitive compared with
other state-of-the-art baselines. Meanwhile, ablation
studies demonstrate efficacy and indispensability for
core modules and key parameters.
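To illustrate the memory-based correction described in this introduction, the sketch below uses an A-GEM-style single-constraint projection: a gradient that conflicts with a memorized reference direction is projected back onto the feasible half-space. GradMA itself solves a quadratic program with several constraints jointly (the previous local gradient, the centralized-model gradient, the parameter difference, or the server-side memory state); the simplification here is ours.

```python
import torch

def correct_direction(grad, memory_dirs):
    """A-GEM-style simplification of the QP-based correction: project `grad`
    so that it no longer conflicts with any memorized reference direction.
    `memory_dirs` is a list of flattened reference vectors (assumption)."""
    g = grad.clone()
    for m in memory_dirs:
        dot = torch.dot(g, m)
        if dot < 0:  # conflict: following g would undo the memorized direction
            g = g - dot / (torch.dot(m, m) + 1e-12) * m  # enforce <g, m> >= 0
    return g
```

On the worker side the reference directions would be built from the centralized-model gradient and the parameter difference; on the server side, from the memory state of accumulated worker updates.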
|
Pu_Dynamic_Conceptional_Contrastive_Learning_for_Generalized_Category_Discovery_CVPR_2023 | Abstract
Generalized category discovery (GCD) is a recently pro-
posed open-world problem, which aims to automatically
cluster partially labeled data. The main challenge is that
the unlabeled data contain instances that are not only from
known categories of the labeled data but also from novel
categories. This leads traditional novel category discov-
ery (NCD) methods to be incapacitated for GCD, due to
their assumption of unlabeled data are only from novel
categories. One effective way for GCD is applying self-
supervised learning to learn discriminate representation for
unlabeled data. However, this manner largely ignores un-
derlying relationships between instances of the same con-
cepts (e.g., class, super-class, and sub-class), which re-
sults in inferior representation learning. In this paper,
we propose a Dynamic Conceptional Contrastive Learn-
ing (DCCL) framework, which can effectively improve clus-
tering accuracy by alternately estimating underlying vi-
sual conceptions and learning conceptional representation.
In addition, we design a dynamic conception generation
and update mechanism, which is able to ensure consis-
tent conception learning and thus further facilitate the opti-
mization of DCCL. Extensive experiments show that DCCL
achieves new state-of-the-art performances on six generic
and fine-grained visual recognition datasets, especially on
fine-grained ones. For example, our method significantly
surpasses the best competitor by 16.2% on the new classes
for the CUB-200 dataset. Code is available at https:
//github.com/TPCD/DCCL
| 1. Introduction
Learning recognition models ( e.g., image classification)
from labeled data has been widely studied in the field of
machine learning and deep learning [13, 18, 31]. In spite
of their tremendous success, supervised learning techniques
*Corresponding Author.
Figure 1. Diagram of the proposed Dynamic Conceptional Con-
trastive Learning (DCCL). Samples from the conceptions should
be close to each other. For example, samples from the same classes
(bus) at the class level, samples belonging to the transportation
(bus and bicycle) at the super-class level, and samples from trains
with different colors at the sub-class level. Our DCCL potentially
learns the underlying conceptions in unlabeled data and produces
more discriminative representations.
rely heavily on huge annotated data, which is not suit-
able for open-world applications. Thus, the researchers
recently have paid much effort on learning with label-
imperfection data, such as semi-supervised learning [23,
33], self-supervised learning [12, 42], weakly-supervised
learning [41,45], few-shot learning [32,38], open-set recog-
nition [30] and learning with noisy labels [40], etc.
Recently, inspired by the fact that Humans can easily
and automatically learn new knowledge with the guidance
of previously learned knowledge, novel category discov-
ery (NCD) [9, 11, 28, 44, 47] is introduced to automatically
cluster unlabeled data of unseen categories with the help of
knowledge from seen categories. However, the implemen-
tation of NCD is under a strong assumption that all the un-
labeled instances belong to unseen categories, which is not
practical in real-world applications. To address this limi-
tation, Vaze et al. [35] extend NCD to the generalized cat-
egory discovery (GCD) [35], where unlabeled images are
from both novel and labeled categories.
GCD is a challenging open-world problem in that we
need to 1) jointly distinguish the known and unknown
classes and 2) discover the novel clusters without any anno-
tations. To solve this problem, Vaze et al. [35] leverage the
contrastive learning technique to learn discriminative repre-
sentation for unlabeled data and use k-means [21] to obtain
final clustering results. In this method, the labeled data are
fully exploited by supervised contrastive learning. How-
ever, self-supervised learning is applied to the unlabeled
data, which enforces samples to be close to their augmen-
tation counterparts while far away from others. As a con-
sequence, the underlying relationships between samples of
the same conceptions are largely overlooked and thus will
lead to degraded representation learning. Intuitively, sam-
ples that belong to the same conceptions should be similar
to each other in the feature space. The conceptions can be
regarded as: classes, super-classes, sub-classes, etc. For ex-
ample, as shown in Fig. 1, samples of the same class should
be similar to each other, e.g., samples of the bus, samples of
the bicycle. In addition, in the super-classes view, classes
of the transportation, e.g., Bus and Bicycle, should belong
to the same concept. Hence, the samples of transportation
should be closer to each other than to samples of other concepts (e.g., animals). Similarly, samples belonging to the same sub-class (e.g., red train) should be closer to each other than to samples of other sub-classes (e.g., white train). Hence, embracing such conceptions and their
relationships can greatly benefit the representation learning
for unlabeled data, especially for unseen classes.
Motivated by this, we propose a Dynamic Conceptional
Contrastive Learning (DCCL) framework for GCD to ef-
fectively leverage the underlying relationships between un-
labeled data for representation learning. Specifically, our
DCCL includes two steps: Dynamic Conception Genera-
tion (DCG) and Dual-level Contrastive Learning (DCL). In
DCG, we dynamically generate conceptions based on the
hyper-parameter-free clustering method equipped with the
proposed semi-supervised conceptional consolidation. In
DCL, we propose to optimize the model with conception-
level and instance-level contrastive learning objectives,
where we maintain a dynamic memory to ensure compar-
ing with the up-to-date conceptions. The DCG and DCL
are alternately performed until the model converges.
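The sketch below gives one way the conception-level objective and the dynamic memory could be written; the prototype-based formulation, the momentum update, and the temperature are our assumptions and not necessarily the exact losses used in DCCL.

```python
import torch
import torch.nn.functional as F

def conception_contrastive_loss(z, conception_ids, prototypes, tau=0.5):
    """Pull each feature towards the prototype of its assigned conception and
    push it away from the other prototypes held in the dynamic memory."""
    z = F.normalize(z, dim=1)                      # (B, D) instance features
    protos = F.normalize(prototypes, dim=1)        # (K, D) conception prototypes (memory)
    logits = z @ protos.t() / tau                  # (B, K)
    return F.cross_entropy(logits, conception_ids)

@torch.no_grad()
def update_prototypes(prototypes, z, conception_ids, momentum=0.9):
    """Momentum update of the memory so comparisons use up-to-date conceptions."""
    z = F.normalize(z, dim=1)
    for k in conception_ids.unique():
        mean_k = z[conception_ids == k].mean(dim=0)
        prototypes[k] = momentum * prototypes[k] + (1 - momentum) * mean_k
    return F.normalize(prototypes, dim=1)
```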
We summarize the contributions of this work as follows:
• We propose a novel dynamic conceptional contrastive
learning (DCCL) framework to effectively leverage the
underlying relationships between unlabeled samples
for learning discriminative representation for GCD.
• We introduce a novel dynamic conception generation
and update mechanism to ensure consistent conception
learning, which encourages the model to produce more
discriminative representation.• Our DCCL approach consistently achieves superior
performance over state-of-the-art GCD algorithms on
both generic and fine-grained tasks.
|
Qiu_PSVT_End-to-End_Multi-Person_3D_Pose_and_Shape_Estimation_With_Progressive_CVPR_2023 | Abstract
Existing methods of multi-person video 3D human Pose
and Shape Estimation (PSE) typically adopt a two-stage
strategy, which first detects human instances in each frame
and then performs single-person PSE with temporal model.
However, the global spatio-temporal context among spa-
tial instances can not be captured. In this paper, we pro-
pose a new end-to-end multi-person 3D Pose and Shape
estimation framework with progressive Video Transformer,
termed PSVT. In PSVT, a spatio-temporal encoder (STE)
captures the global feature dependencies among spatial ob-
jects. Then, spatio-temporal pose decoder (STPD) and
shape decoder (STSD) capture the global dependencies be-
tween pose queries and feature tokens, shape queries and
feature tokens, respectively. To handle the variances of ob-
jects as time proceeds, a novel scheme of progressive de-
coding is used to update pose and shape queries at each
frame. Besides, we propose a novel pose-guided attention
(PGA) for shape decoder to better predict shape parame-
ters. The two components strengthen the decoder of PSVT
to improve performance. Extensive experiments on the four
datasets show that PSVT achieves state-of-the-art results.
| 1. Introduction
Multi-person 3D human Pose and Shape Estimation
(PSE) from monocular video requires localizing the 3D
joint coordinates of all persons and reconstructing their
human meshes (e.g. SMPL [29] model). As an essen-
tial task in computer vision, it has many applications in-
cluding human-robot interaction detection [22], virtual re-
ality [33], and human behavior understanding [8], etc. Al-
though remarkable progress has been achieved in PSE from
videos [4, 41, 50, 51] or images [5, 44, 45], capturing multi-
person spatio-temporal relations of pose and shape simulta-
neously is still challenging since the difficulty in modeling
long-range global interactions.
Figure 1. Comparison of multi-stage and end-to-end framework.
(a) Existing video-based methods [4, 16, 49, 50] perform single-
person pose and shape estimation (SPSE) on the cropped areas by
temporal modeling, such as Gated Recurrent Units (GRUs). (b)
PSVT achieves end-to-end multi-person pose and shape estimation
in video with spatial-temporal encoder (STE) and decoder (STD).
To tackle this challenge, as shown in Figure 1 (a), exist-
ing methods [4,16,50,51] employ a detection-based strategy
of firstly detecting each human instance, then cropping the
instance area in each frame and feeding it into the tempo-
ral model, such as the recurrent neural network [4, 6, 16].
However, this framework can not capture the spatial re-
lationship among human instances in an image and has
the limitation of extracting long-range global context. Be-
sides, the computational cost is expensive since it is pro-
portional to the number of instances in image and it needs
extra tracker [50] to identify each instance. Other temporal
smoothing methods [15, 47] adopt a post-processing mod-
ule to align the shape estimated by image-based PSE ap-
proaches [5, 15, 23, 44, 45]. However, they can not capture
temporal information directly from visual image features
and lack the ability of long-range global interactions. These
multi-stage methods split space and time dimensions and
can not be end-to-end optimized.
To strengthen the long-range modeling ability, recently
developed Transformer models [7,46] have been introduced
in PSE. The Transformer-based mesh reconstruction ap-
proaches [24, 25, 36, 54] take each human joint as a to-
ken and capture the relationship of human joints by atten-
tion mechanism. However, the global context among dif-
ferent persons in spatio-temporal dimensions has not been
explored. Other Transformer-based human pose estimation
approaches [27, 57] explore the spatio-temporal context of
human joints for single-person, but not on the multi-person
mesh. Besides, these methods focus on capturing the rela-
tions among human joints, while ignoring the relations be-
tween human poses and shapes.
To tackle the above problems, we propose an end-to-
end multi-person 3D Pose and Shape estimation framework
with Video Transformer, termed PSVT, to capture long-
range spatio-temporal global interactions in the video. As
shown in Figure 1 (b), PSVT formulates the human in-
stance localization and fine-grained pose and mesh estima-
tion as a set prediction problem as [3, 42]. First, PSVT
extracts a set of spatio-temporal tokens from the deep vi-
sual features and applies a spatio-temporal encoder (STE)
on these visual tokens to learn the relations of feature to-
kens. Second, given a set of pose queries, a progressive
spatio-temporal pose decoder (STPD) learns to capture the
relations of human joints in both spatial and temporal di-
mensions. Third, with the guidance of pose tokens from
STPD, a progressive spatio-temporal shape decoder (STSD)
learns to reason the relations of human mesh and pose in
both spatial and temporal dimensions and further estimates
the sequence 3D human mesh based on the spatio-temporal
global context. Compared with previous shape estimation
works [4,5,44,45,50,51], PSVT achieves end-to-end multi-
person 3D pose and shape estimation in video.
In PSVT, different from previous methods, we propose
a novel progressive decoding mechanism (PDM) for se-
quence decoding and pose-guided attention (PGA) for de-
coder. PDM takes the output tokens from the last frame
as the initialized queries for next frame, which enables bet-
ter sequence decoding for STPD and STSD. PGA aligns
the pose tokens and shape queries and further computes the
cross-attention with feature tokens from encoder. With the
guidance of pose tokens, shape estimation can be more ac-
curate. Our contributions can be summarized as follows:
We propose a novel video Transformer framework,
termed PSVT, which is the first end-to-end multi-
person 3D human pose and shape estimation frame-
work with video Transformer.
We propose a novel progressive decoding mechanism
(PDM) for the decoder of video Transformer, which
updates the queries at each frame in the attention block
to improve the pose and shape decoding.
We propose a novel pose-guided attention (PGA),
which can capture the spatio-temporal relations among
pose tokens, shape tokens, and feature tokens to im-
prove the performance of shape estimation.
Extensive experiments on four benchmarks show that
PSVT achieves new state-of-the-art results.
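The following sketch illustrates the progressive decoding mechanism and the pose-guided attention listed above: the pose tokens decoded at frame t initialise the queries of frame t+1, and shape decoding attends to the pose tokens before attending to the image features. Layer choices, dimensions, and the single-layer decoders are illustrative assumptions rather than the PSVT architecture itself.

```python
import torch
import torch.nn as nn

class ProgressiveDecoder(nn.Module):
    """Illustrative sketch of progressive decoding with pose-guided attention."""
    def __init__(self, dim=256, num_queries=10, heads=8):
        super().__init__()
        self.init_queries = nn.Parameter(torch.randn(num_queries, dim))
        self.pose_dec = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.shape_dec = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.pose_guide = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, frame_tokens):                 # list of (B, N, dim) encoder tokens, one per frame
        B = frame_tokens[0].shape[0]
        queries = self.init_queries.unsqueeze(0).expand(B, -1, -1)
        outputs = []
        for feat in frame_tokens:
            pose_tok = self.pose_dec(queries, feat)                    # pose decoding against feature tokens
            guided, _ = self.pose_guide(queries, pose_tok, pose_tok)   # pose-guided attention on shape queries
            shape_tok = self.shape_dec(guided, feat)                   # shape decoding against feature tokens
            outputs.append((pose_tok, shape_tok))
            queries = pose_tok                                         # progressive update of the queries
        return outputs
```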
|
Metaxas_DivClust_Controlling_Diversity_in_Deep_Clustering_CVPR_2023 | Abstract
Clustering has been a major research topic in the field
of machine learning, one to which Deep Learning has re-
cently been applied with significant success. However, an
aspect of clustering that is not addressed by existing deep
clustering methods, is that of efficiently producing multi-
ple, diverse partitionings for a given dataset. This is par-
ticularly important, as a diverse set of base clusterings are
necessary for consensus clustering, which has been found
to produce better and more robust results than relying on
a single clustering. To address this gap, we propose Div-
Clust, a diversity controlling loss that can be incorporated
into existing deep clustering frameworks to produce mul-
tiple clusterings with the desired degree of diversity. We
conduct experiments with multiple datasets and deep clus-
tering frameworks and show that: a) our method effectively
controls diversity across frameworks and datasets with very
small additional computational cost, b) the sets of clus-
terings learned by DivClust include solutions that signifi-
cantly outperform single-clustering baselines, and c) using
an off-the-shelf consensus clustering algorithm, DivClust
produces consensus clustering solutions that consistently
outperform single-clustering baselines, effectively improv-
ing the performance of the base deep clustering frame-
work. Code is available at https://github.com/
ManiadisG/DivClust .
| 1. Introduction
The exponentially increasing volume of visual data,
along with advances in computing power and the develop-
ment of powerful Deep Neural Network architectures, have
revived the interest in unsupervised learning with visual
data. Deep clustering in particular has been an area where
significant progress has been made in the recent years. Ex-
isting works focus on producing a single clustering, which
is evaluated in terms of how well that clustering matches
*Corresponding author
the ground truth labels of the dataset in question. However,
consensus, or ensemble, clustering remains under-studied
in the context of deep clustering, despite the fact that it has
been found to consistently improve performance over single
clustering outcomes [3, 17, 45, 73].
Consensus clustering consists of two stages, specifically
generating a set of base clusterings, and then applying a
consensus algorithm to aggregate them. Identifying what
properties ensembles should have in order to produce better
outcomes in each setting has been an open problem [18].
However, research has found that inter-clustering diver-
sity within the ensemble is an important, desirable fac-
tor [14,20,24,34,51], along with individual clustering qual-
ity, and that diversity should be moderated [15,22,51]. Fur-
thermore, several works suggest that controlling diversity
in ensembles is important toward studying its impact and
determining its optimal level in each setting [22, 51].
The typical way to produce diverse clusterings is to pro-
mote diversity by clustering the data multiple times with
different initializations/hyperparameters or subsets of the
data [3, 17]. This approach, however, does not guaran-
tee or control the degree of diversity, and is computation-
ally costly, particularly in the context of deep clustering,
where it would require the training of multiple models.
Some methods have been proposed that find diverse clus-
terings by including diversity-related objectives to the clus-
tering process, but those methods have only been applied
to clustering precomputed features and cannot be trivially
incorporated into Deep Learning frameworks. Other meth-
ods tackle diverse clustering by creating and clustering di-
verse feature subspaces, including some that apply this ap-
proach in the context of deep clustering [48, 61]. Those
methods, however, do not control inter-clustering diversity.
Rather, they influence it indirectly through the properties
of the subspaces they create. Furthermore, typically, exist-
ing methods have been focusing on producing orthogonal
clusterings or identifying clusterings based on independent
attributes of relatively simple visual data (e.g. color/shape).
Consequently, they are oriented toward maximizing inter-
clustering diversity, which is not appropriate for consensus
Figure 1. Overview of DivClust. Assuming clusterings A and B, the proposed diversity loss L_div calculates their similarity matrix S_AB
and restricts the similarity between cluster pairs to be lower than a similarity upper bound d. In the figure, this is represented by the model
adjusting the cluster boundaries to produce more diverse clusterings. Best seen in color.
clustering [15, 22, 51].
To tackle this gap, namely generating multiple cluster-
ings with deep clustering frameworks efficiently and with
the desired degree of diversity, we propose DivClust. Our
method can be straightforwardly incorporated into exist-
ing deep clustering frameworks to learn multiple cluster-
ings whose diversity is explicitly controlled . Specifically,
the proposed method uses a single backbone for feature ex-
traction, followed by multiple projection heads, each pro-
ducing cluster assignments for a corresponding clustering.
Given a user defined diversity target, in this work expressed
in terms of the average NMI between clusterings, DivClust
restricts inter-clustering similarity to be below an appropri-
ate, dynamically estimated threshold. This is achieved with
a novel loss component, which estimates inter-clustering
similarity based on soft cluster assignments produced by the
model, and penalizes values exceeding the threshold. Im-
portantly, DivClust introduces minimal computational cost
and requires no hyperparameter tuning with respect to the
base deep clustering framework, which makes its use sim-
ple and computationally efficient.
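A minimal sketch of such a diversity-controlling term is shown below: soft cluster assignments from two heads are aggregated into an inter-clustering similarity matrix, and entries above the upper bound d are penalised. The exact similarity measure, the aggregation, and the schedule that maps the NMI target to d are simplified here and may differ from the paper.

```python
import torch
import torch.nn.functional as F

def diversity_loss(p_a, p_b, d=0.7):
    """Sketch of a DivClust-style diversity term for two clustering heads.
    p_a: (N, Ka) and p_b: (N, Kb) soft assignments for the same batch."""
    a = F.normalize(p_a.t(), dim=1)          # (Ka, N) soft membership vector of each cluster
    b = F.normalize(p_b.t(), dim=1)          # (Kb, N)
    sim = a @ b.t()                          # (Ka, Kb) similarity between cluster pairs
    return F.relu(sim - d).mean()            # zero once all similarities fall below d
```

In a multi-head model this term would be summed over all pairs of clustering heads and added to the base framework's clustering loss, with d adjusted so that the measured inter-clustering NMI tracks the user-defined target.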
Experiments on four datasets (CIFAR10, CIFAR100,
Imagenet-10, Imagenet-Dogs) with three recent deep clus-
tering methods (IIC [37], PICA [32], CC [44]) show that
DivClust can effectively control inter-clustering diversity
without reducing the quality of the clusterings. Further-
more, we demonstrate that, with the use of an off-the-shelf
consensus clustering algorithm, the diverse base clusterings
learned by DivClust produce consensus clustering solutions
that outperform the base frameworks, effectively improving
them with minimal computational cost. Notably, despite the
sensitivity of consensus clustering to the properties of the
ensemble, our method is robust across various diversity lev-
els, outperforming baselines in most settings, often by large
margins. Our work then provides a straightforward way for
improving the performance of deep clustering frameworks,
as well as a new tool for studying the impact of diversity in
deep clustering ensembles [51].
In summary, DivClust: a) can be incorporated in ex-isting deep clustering frameworks in a plug-and-play way
with very small computational cost, b) can explicitly and
effectively control inter-clustering diversity to satisfy user-
defined targets, and c) learns clusterings that can improve
the performance of deep clustering frameworks via consen-
sus clustering.
|
Qin_Reliable_and_Interpretable_Personalized_Federated_Learning_CVPR_2023 | Abstract
Federated learning can coordinate multiple users to par-
ticipate in data training while ensuring data privacy. The
collaboration of multiple agents allows for a natural con-
nection between federated learning and collective intelli-
gence. When there are large differences in data distri-
bution among clients, it is crucial for federated learning
to design a reliable client selection strategy and an inter-
pretable client communication framework to better utilize
group knowledge. Herein, a reliable personalized federated
learning approach, termed RIPFL, is proposed and fully in-
terpreted from the perspective of social learning. RIPFL
reliably selects and divides the clients involved in training
such that each client can use different amounts of social
information and more effectively communicate with other
clients. Simultaneously, the method effectively integrates
personal information with the social information generated
by the global model from the perspective of Bayesian de-
cision rules and evidence theory, enabling individuals to
grow better with the help of collective wisdom. An inter-
pretable federated learning mind is well scalable, and the
experimental results indicate that the proposed method has
superior robustness and accuracy than other state-of-the-
art federated learning algorithms.
| 1. Introduction
Federated learning is a new machine learning technique
with various applications in data privacy protection and data
security [19, 21, 37]. It can be viewed as social learning
involving multiple agents coinciding with collective intelli-
gence [3, 11, 13]. Unlike ordinary federated learning, per-
sonalized federated learning can address the problem of
data heterogeneity among clients and thereby improve their
capabilities in relatively more realistic scenarios [4, 18].
However, designing a reliable and interpretable federated
learning framework remains a significant challenge in the
*Corresponding author.
Figure 1. Cooperation between clients. Uninterpretable simple ag-
gregation produces a global model that is not helpful for all clients
because the information containing classes 2 and 3 may be nega-
tive for client 4, as well as for clients 1 and 4. Clients 1 and 2 can
well identify classes 1, 2, 3, and 4; however, unreliable random
selection does not guarantee the participation of clients 1 and 2
in aggregation, whereas the simultaneous selection of less-capable
clients such as clients 3 and 4 does. In this case, smart customers
cannot offer more help to the not-so-smart ones.
field of federated learning. FedProx [32], SCAFFOLD [15],
MOON [17] used the global model to impose different con-
straints on the client’s local training process. Consequently,
the knowledge of the global model was better absorbed.
The studies [6, 33, 40] address client heterogeneity in a personalized federated
learning framework by incorporating techniques such as clustering and knowledge
distillation, and [22, 35] propose a certain degree of inter-
pretable client aggregation from the perspective of client
contribution to the group. However, their selection and
training of clients are often unreliable and uninterpretable,
resulting in uncertainty in the training process and a ten-
dency to ignore synergies between clients when the number
of clients is large and the data distribution widely varies.
Consequently, the collective intelligence is underutilized, as
shown in Fig. 1.
Herein, we propose a reliable and interpretable feder-
ated learning framework, RIPFL. Specifically, we introduce
Dempster–Shafer evidence theory (DST) [20, 34] to quan-
tify the uncertainty and performance of each client and pro-
vide reliable client selection strategies. To reliably explain
client choices and aggregation methods without wasting
collective intelligence, RIPFL ensures that all smart clients
participate in the aggregation process while a small num-
ber of non-smart individuals participate, so that non-smart
clients can adequately gain more valuable collective
knowledge from smart clients. Moreover, a method that
can reliably integrate social and personal information is pro-
posed. The proposed framework is primarily applicable to
situations where the number of clients is large and the tasks
solved by clients are complex. The main contributions of
this paper are as follows.
• This paper proposes a reliable and interpretable per-
sonalized FL architecture from the perspective of so-
cial learning, which consists of interpretable local
training, reliable clients selection and division, and ef-
fective federated aggregation.
• To reliably select the required clients, this paper intro-
duces evidence theory to the local training of clients,
thus quantifying the uncertainty of each client and pro-
viding interpretable training methods.
• A Bayesian-rule-based evidence fusion method is in-
troduced by considering the global model as the prior
information of clients when there are differences in
the data distribution among clients. Consequently, the
knowledge of the global model is not forgotten by
clients in local training.
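For intuition, the evidence-theoretic uncertainty used for client selection is commonly instantiated with the standard subjective-logic mapping from non-negative evidence to belief masses. The sketch below uses that standard Dirichlet-style mapping and is an illustration only; it is an assumption on our part, not necessarily RIPFL's exact formulation.

```python
import torch
import torch.nn.functional as F

def evidential_beliefs(logits):
    """Map network outputs to DST-style belief masses and an uncertainty mass.

    Standard subjective-logic construction: evidence e_k >= 0, Dirichlet strength
    S = sum_k e_k + K, beliefs b_k = e_k / S, uncertainty u = K / S.
    """
    evidence = F.softplus(logits)                   # non-negative evidence per class
    k = evidence.shape[-1]
    strength = evidence.sum(dim=-1, keepdim=True) + k
    beliefs = evidence / strength                   # (B, K)
    uncertainty = k / strength                      # (B, 1); large when evidence is weak
    return beliefs, uncertainty

# A server could, for example, rank clients by their average uncertainty on local
# data and prefer low-uncertainty ("smart") clients during aggregation.
logits = torch.randn(8, 5)
beliefs, u = evidential_beliefs(logits)
```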
|
Pena_Re-Basin_via_Implicit_Sinkhorn_Differentiation_CVPR_2023 | Abstract
The recent emergence of new algorithms for permuting
models into functionally equivalent regions of the solution
space has shed some light on the complexity of error sur-
faces and some promising properties like mode connectivity.
However, finding the permutation that minimizes some ob-
jectives is challenging, and current optimization techniques
are not differentiable, which makes it difficult to integrate
into a gradient-based optimization, and often leads to sub-
optimal solutions. In this paper, we propose a Sinkhorn re-
basin network with the ability to obtain the transportation
plan that better suits a given objective. Unlike the cur-
rent state-of-the-art, our method is differentiable and, there-
fore, easy to adapt to any task within the deep learning do-
main. Furthermore, we show the advantage of our re-basin
method by proposing a new cost function that allows per-
forming incremental learning by exploiting the linear mode
connectivity property. The benefit of our method is com-
pared against similar approaches from the literature un-
der several conditions for both optimal transport and linear
mode connectivity. The effectiveness of our continual learn-
ing method based on re-basin is also shown for several com-
mon benchmark datasets, providing experimental results
that are competitive with the state-of-the-art. The source code
is provided at https://github.com/fagp/sinkhorn-rebasin.
| 1. Introduction
Despite the success of deep learning (DL) across many
application domains, the loss surfaces of neural networks
(NNs) are not well understood. Even for shallow NNs, the
number of saddle points and local optima can increase expo-
nentially with the number of parameters [4,13]. The permu-
Figure 1. (a) Loss landscape for the polynomial approximation task [27]. A and B are models found by SGD. LMC suggests that re-basin of model B would result in a functionally equivalent model P(B), with no barrier on its linear interpolation (1 - λ)A + λP(B). (b) Comparison of the cost C(λ) along the linear path, before and after re-basin, using weight matching (WM) [2] and our Sinkhorn. The dashed line corresponds to the naive path, the solid line is the path after the proposed Sinkhorn re-basin, and the blue line represents the WM path.
tation symmetry of neurons in each layer allows the same
function to be represented with many different parameter
values of the same network. Symmetries imposed by these
invariances help us to better understand the structure of the
loss landscape [6, 11, 13].
Previous studies establish that minima found by Stochas-
tic Gradient Descent (SGD) are not only connected in the
network parameter’s space by a path of non-increasing loss,
but also permutation symmetries may allow us to con-
nect those points linearly with no detriment to the loss
[9, 11–13, 15, 24]. This phenomenon is often referred to as
linear mode connectivity (LMC) [24]. For instance, Fig. 1a
shows a portion of the loss landscape for the polynomial
approximation task [27] using the method proposed by Li
et al. [16]. A and B are two minima found by SGD in
different basins with an energy barrier between the pair.
LMC suggests that if one considers permutation invariance,
we can teleport solutions into a single basin where there is
almost no loss barrier between different solutions [2, 11].
In literature, this mechanism is called re-basin [2]. How-
ever, efficiently searching for permutation symmetries that
bring all solutions to one basin is a challenging problem
[11]. Three main approaches for matching units between
two NNs have been explored in the literature [2]. Some
studies propose a data-dependent algorithm that associates
units across two NNs by matching their activations [2, 26].
Since activation-based matching is data dependent, it helps
to adjust permutations to certain desired kinds of classes or
domains [26]. Instead of associating units by their activa-
tions, one could align the weights of the model itself [2,26],
which is independent of the dataset, and therefore the com-
putational cost is much lower. Finally, the third approach
is to iteratively adjust the permutation of weights. In par-
ticular, Ainsworth et al. [2] have proposed alternating be-
tween models alignment and barrier minimization using a
Straight-Through Estimator. Unfortunately, the proposed
approaches so far are either non-differentiable [2, 11, 26] or
computationally expensive [2], making the solution difficult
to be extended to other applications, with a different objec-
tive. For instance, adapting those methods for incremental
learning by including the algorithm for weight matching be-
tween two models trained on different domains is not trivial
because of the difficulties in optimizing new objectives.
In this work, inspired by [21], we relax the permutation
matrix with the Sinkhorn operator [1], and use it to solve
the re-basin problem in a differentiable fashion. To avoid
the high cost for computing gradients in the proposal of
Mena et al. [21], we use the implicit differentiation algo-
rithm proposed in [10], which has been shown to be more
cost-effective. Our re-basin formulation allows defining any
differentiable objective as a loss function.
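For reference, the Sinkhorn operator used for this relaxation is a short iterative normalization of a score matrix; a temperature and an iteration count (hyperparameter values below are assumptions, not the paper's settings) control how close the result is to a hard permutation.

```python
import torch

def sinkhorn(scores, n_iters=20, tau=0.1):
    """Relax a square score matrix into a doubly-stochastic (soft permutation) matrix.

    Alternates row and column normalization in log space for numerical stability.
    As tau -> 0 the result approaches a hard permutation matrix.
    """
    log_alpha = scores / tau
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)  # rows
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)  # cols
    return log_alpha.exp()

# Differentiable with respect to the scores, so it can sit inside any loss.
p_soft = sinkhorn(torch.randn(64, 64, requires_grad=True))
```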
A direct application of re-basin is the merger of diverse
models without significantly degrading their performance [2, 5, 12, 13, 28]. Applications like federated learning [2],
ensembling [12], or model initialization [5] exploit such a
merger by selecting a model in the line connecting the mod-
els to be combined. To show the effectiveness of our ap-
proach, we propose a new continual learning algorithm that
combines models trained on different domains. Our con-
tinual learning algorithm differs from previous state-of-art
approaches [22] because it directly estimates a model at the
intersection of previous and new knowledge, by exploiting
the LMC property observed in SGD-based solutions.
Our main contribution can be summarized as follows:
(1) Solving the re-basin for optimal transportation using
implicit Sinkhorn differentiation, enabling better differen-
tiable solutions that can be integrated into any loss.
(2) A powerful way to use our re-basin method based on the
Sinkhorn operator for incremental learning by considering
it as a model merging problem and leveraging LMC.
(3) An extensive set of experiments that validate our method
for: (i) finding the optimal permutation to transform a
model to another one equivalent; (ii) linear mode connec-
tivity, to linearly connect two models such that their loss
is almost identical along the entire connecting line in the
weights space; and (iii) learning new domains and tasks in-
crementally while not forgetting the previous ones.
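For the linear mode connectivity check in (ii) above, the quantity of interest is the loss along the linear path between two sets of weights; a minimal sketch follows. The barrier convention (maximum interpolated loss minus the mean endpoint loss) and the float-only state dict are assumptions for illustration.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def loss_barrier(model_a, model_b, loss_fn, batch, n_points=11):
    """Loss along the linear path (1 - lam) * theta_A + lam * theta_B.

    Returns the barrier height and the full loss profile. Assumes both models
    share an architecture whose state dict contains only floating-point tensors.
    """
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    probe, losses = copy.deepcopy(model_a), []
    x, y = batch
    for lam in torch.linspace(0.0, 1.0, n_points):
        probe.load_state_dict({k: (1 - lam) * sd_a[k] + lam * sd_b[k] for k in sd_a})
        losses.append(loss_fn(probe(x), y).item())
    return max(losses) - 0.5 * (losses[0] + losses[-1]), losses

net_a, net_b = nn.Linear(4, 2), nn.Linear(4, 2)
barrier, profile = loss_barrier(net_a, net_b, nn.functional.mse_loss,
                                (torch.randn(16, 4), torch.randn(16, 2)))
```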
|
Li_Unified_Mask_Embedding_and_Correspondence_Learning_for_Self-Supervised_Video_Segmentation_CVPR_2023 | Abstract
The objective of this paper is self-supervised learning of
video object segmentation. We develop a unified framework
which simultaneously models cross-frame dense correspon-
dence for locally discriminative feature learning and embeds
object-level context for target-mask decoding. As a result, it
is able to directly learn to perform mask-guided sequential
segmentation from unlabeled videos, in contrast to previous
efforts usually relying on an oblique solution — cheaply
“copying” labels according to pixel-wise correlations. Con-
cretely, our algorithm alternates between i)clustering video
pixels for creating pseudo segmentation labels ex nihilo; and
ii)utilizing the pseudo labels to learn mask encoding and de-
coding for VOS. Unsupervised correspondence learning is
further incorporated into this self-taught, mask embedding
scheme, so as to ensure the generic nature of the learnt repre-
sentation and avoid cluster degeneracy. Our algorithm sets
new state-of-the-art results on two standard benchmarks (i.e., DAVIS17
and YouTube-VOS), narrowing the gap between self- and
fully-supervised VOS, in terms of both performance and net-
work architecture design.
| 1. Introduction
In this article, we focus on a classic computer vision task:
accurately segmenting desired object(s) in a video sequence,
where the target object(s) are defined by pixel-wise mask(s)
in the first frame. This task is referred as ( one-shot )video ob-
ject segmentation (VOS) or mask propagation [1], playing a
vital role in video editing and self-driving. Prevalent solu-
tions [2–25] are built upon fully supervised learning techni-
ques, costing intensive labeling efforts. In contrast, we aim
to learn VOS from unlabeled videos — self-supervised VOS.
Due to the absence of mask annotation during training,
existing studies typically degrade such self-supervised yet
mask-guided segmentation task as a combo of unsupervised
correspondence learning and correspondence based, non-
*Work done during an internship at Baidu VIS.
†Corresponding author: Wenguan Wang .
Figure 1. (a) Correspondence learning based self-supervised VOS, where mask tracking is simply degraded to correspondence matching and mask warping. (b) We achieve self-supervised VOS by jointly learning mask embedding and correspondence matching. Our algorithm explicitly embeds masks for target object modeling, hence enabling mask-guided segmentation. (c) Performance comparison and (d) performance over time, reported on DAVIS17 [42] val.
learnable mask warping (cf.Fig. 1(a)). They first learn pixel-
/patch-wise matching ( i.e., cross-frame correspondence) by
exploring the inherent continuity in raw videos as free super-
visory signals, in the form of i) a photometric reconstruc-
tion problem where each pixel in a target frame is desired to
be recovered by copying relevant pixels in reference frame(s)
[26–31]; ii) a cycle-consistency task that enforces matching
of pixels/patches after forward-backward tracking [32–36];
and iii) a contrastive matching scheme that contrasts confi-
dent correspondences against unreliable ones [37–40]. Once
trained, the dense matching model is used to approach VOS
in a cheap way (Fig.1(a)): the label of a query pixel/patch is
simply borrowed from previously segmented ones, accord-
ing to their appearance similarity (correspondence score).
Though straightforward, these correspondence based “ex-
pedient” solutions come with two severe limitations: First ,
they learn to match pixels instead of customizing VOS tar-
get – mask-guided segmentation, leaving a significant gap
between the training goal and task/inference setup. During
training, the model is optimized purely to discovery reliable,
target-agnostic visual correlations, with no sense of object-
mask information. Spontaneously, during testing/inference,
the model struggles in employing first-/prior-frame masks to
guide the prediction of succeeding frames. Second , from the
view of mask-tracking, existing self-supervised solutions, in
essence, adopt an obsolete, matching-/flow-based mask pro-
pagation strategy [43–47]. As discussed even before the deep
learning era [48–50], such a strategy is sub-optimal. Specifi-
cally, without modeling the target objects, flow-based mask
warping is sensitive to outliers, resulting in error accumula-
tion over time [1]. Subject to the primitive matching-and-
copy mechanism, even trivial errors are hard to correct,
and often lead to much worse results caused by drifts or occlu-
sions. This is also why current top-leading fully supervised
VOS solutions [4, 5, 10–22] largely follow a mask embedding
learning philosophy — embedding frame-mask pairs , in-
stead of only frame images, into the segmentation network.
With such explicit modeling of the target object, more ro-
bust and accurate mask-tracking can be achieved [1, 51].
Motivated by the aforementioned discussions, we inte-
grate mask embedding learning and dense correspondence
modeling into a compact, end-to-end framework for self-
supervised VOS ( cf.Fig. 1(b)). This allows us to inject the
mask-tracking nature of the task into the very heart of our
algorithm and model training. However, bringing the idea of
mask embedding into self-supervised VOS is not trivial, due
to the lack of mask annotation. We therefore achieve mask
embedding learning in a self-taught manner. Concretely, our
model is trained by alternating between i) space-time pixel
clustering, and ii) mask-embedded segmentation learning.
Pixel clustering is to automatically discover spatiotempo-
rally coherent object(-like) regions from raw videos. By uti-
lizing such pixel-level video partitions as pseudo ground-
truths of target objects, our model can learn how to extract
target-specific context from frame-mask pairs, and how
to leverage such high-level context to predict the next-frame
mask. At the same time, such self-taught mask embedding
scheme is consolidated by self-supervised dense correspon-
dence learning. This allows our model to learn transferable,
locally discriminative representations by making full use of
the spatiotemporal coherence in natural videos, and prevent
the degenerate solution of the deterministic clustering.
Our approach owns a few distinctive features: First , it has
the ability of directly learning to conduct mask-guided se-
quential segmentation; its training objective is completely
aligned with the core nature of VOS. Second , by learning
to embed object-masks into mask tracking, target-oriented
context can be efficiently mined and explicitly leveraged
for object modeling, rather than existing methods merelyrelying on local appearance correlations for label “copy-
ing”. Hence our approach can reduce error accumulation
(cf. Fig. 1(d)) and perform more robustly when the latent cor-
respondences are ambiguous, e.g.,deformation, occlusion or
one-to-many matches. Third , our mask embedding strategy
endows our self-supervised framework with the potential of
being empowered by more advanced VOS model designs
developed in the fully-supervised learning setting.
Through embracing the powerful idea of mask embed-
ding learning as well as inheriting the merits of correspon-
dence learning, our approach favorably outperforms state-of-
the-art competitors, i.e., 3.2%, 2.5%, and 2.2% mIoU gains
on DAVIS17 [42] val, DAVIS17 test-dev, and YouTube-
VOS [52] val, respectively. In addition to narrowing the
performance gap between self- and fully-supervised VOS,
our approach establishes a tight coupling between them in
the aspect of model design. We expect this work can foster
the mutual collaboration between these two relevant fields.
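As a rough picture of the pseudo-label side of the alternation described above, space-time pixel clustering can be thought of as running one clustering over all per-pixel features of a clip, so every frame shares a single codebook. The toy k-means below is only an illustration of creating pseudo ground truth ex nihilo; the actual clustering procedure, feature extractor, and label post-processing used in the paper may differ.

```python
import torch

def pseudo_masks_from_features(feats, n_clusters=5, n_iters=10):
    """Toy space-time pixel clustering: k-means over per-pixel features of a clip.

    feats: (T, C, H, W) dense features; returns (T, H, W) integer pseudo-labels
    that are spatiotemporally coherent because all frames share one codebook.
    """
    t, c, h, w = feats.shape
    x = feats.permute(0, 2, 3, 1).reshape(-1, c)             # (T*H*W, C)
    centers = x[torch.randperm(x.shape[0])[:n_clusters]]      # random initialization
    for _ in range(n_iters):
        assign = torch.cdist(x, centers).argmin(dim=1)         # nearest center per pixel
        for k in range(n_clusters):
            pts = x[assign == k]
            if len(pts) > 0:
                centers[k] = pts.mean(dim=0)
    return assign.reshape(t, h, w)

# Pseudo-masks for a 4-frame clip of 32x32 feature maps.
masks = pseudo_masks_from_features(torch.randn(4, 16, 32, 32))
```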
|
Nakhli_Sparse_Multi-Modal_Graph_Transformer_With_Shared-Context_Processing_for_Representation_Learning_CVPR_2023 | Abstract
Processing giga-pixel whole slide histopathology images
(WSI) is a computationally expensive task. Multiple in-
stance learning (MIL) has become the conventional ap-
proach to process WSIs, in which these images are split
into smaller patches for further processing. However, MIL-
based techniques ignore explicit information about the in-
dividual cells within a patch. In this paper, by defining the
novel concept of shared-context processing, we designed a
multi-modal Graph Transformer (AMIGO) that uses the cel-
lular graph within the tissue to provide a single representa-
tion for a patient while taking advantage of the hierarchical
structure of the tissue, enabling a dynamic focus between
cell-level and tissue-level information. We benchmarked the
performance of our model against multiple state-of-the-art
methods in survival prediction and showed that ours can
significantly outperform all of them including hierarchical
Vision Transformer (ViT). More importantly, we show that
our model is strongly robust to missing information to an
extent that it can achieve the same performance with as
low as 20% of the data. Finally, in two different cancer
datasets, we demonstrated that our model was able to strat-
ify the patients into low-risk and high-risk groups while
other state-of-the-art methods failed to achieve this goal.
We also publish a large dataset of immunohistochemistry
images (InUIT) containing 1,600 tissue microarray (TMA)
cores from 188 patients along with their survival informa-
tion, making it one of the largest publicly available datasets
in this context. | 1. Introduction
Digital processing of medical images has recently at-
tracted significant attention in computer vision communi-
ties, and the applications of deep learning models in this
domain span across various image types ( e.g., histopathol-
ogy images, CT scans, and MRI scans) and numerous
tasks ( e.g., classification, segmentation, and survival pre-
diction) [6, 11, 27, 28, 30, 36, 38, 44]. The paradigm-shifting
ability of these models to learn predictive features directly
from raw images has presented exciting opportunities in
medical imaging. This has especially become more im-
portant for digitized histopathology images where each data
point is a multi-gigapixel image (also referred to as a Whole
Slide Image or WSI). Unlike natural images, each WSI
has high granularity at different levels of magnification and
a size reaching 100,000 ×100,000 pixels, posing exciting
challenges in computer vision.
The typical approach to cope with the computational
complexities of WSI processing is to use the Multiple In-
stance Learning (MIL) technique [31]. More specifically,
this approach divides each slide into smaller patches ( e.g.,
256×256 pixels), passes them through a feature extractor,
and represents the slide with an aggregation of these rep-
resentations. This technique has shown promising results
in a variety of tasks, including cancer subtype classifica-
tion and survival prediction. However, it suffers from sev-
eral major issues. Firstly, considering the high resolution
of WSIs, even a non-overlapping 256 ×256 window gener-
ates a huge number of patches. Therefore, the subsequent
aggregation method of MIL has to perform either a simple
pooling operation [3, 17] or a hierarchical aggregation to
add more flexibility [6]. Nevertheless, the former limits the
representative power of the aggregator drastically, and the
latter requires a significant amount of computational power.
Secondly, this approach is strongly dependent on the size
of the dataset, which causes the over-fitting of the model in
scenarios where a few data points ( e.g., hundreds) are avail-
able. Lastly, despite the fact that cells are the main com-
ponents of the tissue, the MIL approach primarily focuses
on patches, which limits the resolution of the model to a
snapshot of a population of cells rather than a single cell.
Consequently, the final representation of the slide lacks the
mutual interactions of individual cells.
Multiple clinical studies have strongly established that
the heterogeneity of the tissue has a crucial impact on the
outcome of cancer [32, 46]. For instance, high levels of im-
mune infiltration in the tumor stroma were shown to cor-
relate with longer survival and positive therapy response
in breast cancer patients [46]. Therefore, machine learn-
ing methods for histopathology image analysis are required
to account for tumor heterogeneity and cell-cell interac-
tions. Nonetheless, the majority of the studies in this do-
main deal with a single image highlighting cell nuclei (re-
gardless of cell type) and extra cellular matrix. Recently,
few studies have investigated pathology images where vari-
ous cell types were identified using different protein mark-
ers [25, 42]. However, they still utilized a single-modal ap-
proach (i.e., one cell type in an image), ignoring the multi-
modal context (i.e., several cell types within the tissue) of
these images.
In this study, we explore the application of graph neural
networks (GNN) for the processing of cellular graphs (i.e.,
a graph constructed by connecting adjacent cells to each
other) generated from histopathology images (Fig. 1). In
particular, we are interested in the cellular graph because
it gives us the opportunity to focus on cell-level informa-
tion as well as their mutual interactions. By delivering an
adaptable focus at different scales, from cell level to tissue
level, such information allows the model to have a multi-
scale view of the tissue, whereas MIL models concentrate
on patches with a preset resolution and optical magnifica-
tion. The availability of cell types and their spatial loca-
tion helps the model to find regions of the tissue that have
more importance for its representation ( e.g., tumor regions
or immune cells infiltrating into tumor cells). In contrast
to the expensive hierarchical pooling in MIL methods [6],
the message-passing nature of GNNs offers an efficient ap-
proach to process the vast scale of WSIs as a result of weight
sharing across all the graph nodes. This approach also re-
duces the need for a large number of WSIs during training
as the number of parameters is reduced.
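As a sketch of how a cellular graph of the kind described here could be assembled, the snippet below connects each detected cell centroid to its k nearest neighbours and emits the edge list in the COO layout used by common GNN libraries. The value of k and the use of plain Euclidean distance are illustrative assumptions, not the paper's construction rule.

```python
import torch

def build_cellular_graph(coords, k=5):
    """Connect each detected cell to its k nearest neighbours.

    coords: (N, 2) cell centroid positions in pixels.
    Returns an edge_index of shape (2, N*k), plus the corresponding edge lengths.
    """
    dists = torch.cdist(coords, coords)                    # (N, N) pairwise distances
    dists.fill_diagonal_(float("inf"))                     # exclude self-loops
    knn_dist, knn_idx = dists.topk(k, dim=1, largest=False)
    src = torch.arange(coords.shape[0]).repeat_interleave(k)
    dst = knn_idx.reshape(-1)
    edge_index = torch.stack([src, dst], dim=0)            # (2, N*k)
    return edge_index, knn_dist.reshape(-1)

# One graph per stain/modality; e.g., 200 cells detected in a 4,000 x 4,000 pixel core.
edge_index, edge_len = build_cellular_graph(torch.rand(200, 2) * 4000.0)
```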
In this work, we introduce a spArse MultI-modal Graph
transfOrmer model (AMIGO) for the representation learn-
ing of histopathology images by using cells as the main
Figure 1. Cellular graph built from a 4,000×4,000pixel TMA
core stained with Ki67 biomarker. Each red point demonstrates
a cell that has a positive response to Ki67 while the blue points
show cells that had a negative response to this biomarker. The
highlighted patches show representative areas of the tissue where
the spatial distribution of cells and the structure of the tissue are
different. A typical MIL method cannot capture this heterogeneity
as it does not take into account the location of the patches and
lacks explicit information about the specific cells present within a
patch.
building blocks. Starting from the cell level, our model
gradually propagates information to a larger neighborhood
of cells, which inherently encodes the hierarchical structure
of the tissues. More importantly, in contrast to other works,
we approach this problem in a multi-modal manner, where
we can get a broader understanding of the tissue structure,
which is quite critical for context-based tasks such as sur-
vival prediction. In particular, for a single patient, there
can be multiple histopathology images available, each high-
lighting cells of a certain type (by staining cells with spe-
cific protein markers), and resulting in a separate cellular
graph (Fig. 2). Therefore, using a multi-modal approach,
we combine the cellular graphs of different modalities to-
gether to obtain a unified representation for a single patient.
This also affirms our stance regarding the importance of cell
type and the distinction between different cellular connec-
tivity types. Aside from achieving state-of-the-art results,
we notice that, surprisingly, our multi-modal approach is
strongly robust to missing information, and this enables us
to perform more efficient training by relying on this recon-
struction ability of the network. Our work advances the
frontiers of MIL, Vision Transformer (ViT), and GNNs in
multiple directions:
• We introduce the first multi-modal cellular graph pro-
cessing model that performs survival prediction based
on the multi-modal histopathology images with shared
contextual information.
• Our model eliminates the critical barriers of MIL mod-
Figure 2. Overview of our proposed method. a) The cellular graphs are first extracted from histopathology images stained with different biomarkers (e.g., CD8, CD20, and Ki67) and are fed into the encoder corresponding to their modality. The initial layer of encoders is shared, allowing further generalization, while the following layers pick up functionalities unique to each modality. The graphs at the top depict the hierarchical pooling mechanism of the model. b) The representations obtained from multiple graph instances in each modality are combined via a shared instance attention layer (shared-context processing), providing a single representation vector. c) A Transformer is used to merge the resultant vectors to create a patient-level embedding that will be used for downstream tasks such as survival prediction.
els, enabling efficient training of multi-gigapixel im-
ages on a single GPU and outperforming all the base-
lines including ViT. It also implements the hierarchi-
cal structure of Vision Transformer while keeping the
number of parameters significantly lower during end-
to-end training.
• We also publish a large dataset of IHC images contain-
ing 1,600 tissue microarray (TMA) cores from 188 pa-
tients along with their survival information, making it
one of the largest datasets in this context.
|
Qin_MotionTrack_Learning_Robust_Short-Term_and_Long-Term_Motions_for_Multi-Object_Tracking_CVPR_2023 | Abstract
The main challenge of Multi-Object Tracking (MOT)
lies in maintaining a continuous trajectory for each tar-
get. Existing methods often learn reliable motion pat-
terns to match the same target between adjacent frames
and discriminative appearance features to re-identify the
lost targets after a long period. However, the reliability
of motion prediction and the discriminability of appear-
ances can be easily hurt by dense crowds and extreme oc-
clusions in the tracking process. In this paper, we pro-
pose a simple yet effective multi-object tracker, i.e., Mo-
tionTrack, which learns robust short-term and long-term
motions in a unified framework to associate trajectories
from a short to long range. For dense crowds, we design
a novel Interaction Module to learn interaction-aware mo-
tions from short-term trajectories, which can estimate the
complex movement of each target. For extreme occlusions,
we build a novel Refind Module to learn reliable long-
term motions from the target’s history trajectory, which
can link the interrupted trajectory with its correspond-
ing detection. Our Interaction Module and Refind Mod-
ule are embedded in the well-known tracking-by-detection
paradigm, which can work in tandem to maintain superior
performance. Extensive experimental results on MOT17
and MOT20 datasets demonstrate the superiority of our
approach in challenging scenarios, and it achieves state-
of-the-art performances at various MOT metrics. Code is
available at https://github.com/qwomeng/MotionTrack.
| 1. Introduction
Multi-Object Tracking (MOT) is a fundamental task in
computer vision, which has a wide range of applications,
†Co-first authors.∗Corresponding author.
Figure 1. Illustration of challenging scenarios in different videos. (a) Dense crowds. Pedestrians do not move independently in this situation. They will be affected by their surrounding neighbors to avoid collisions, which makes their motion patterns hard to learn in practice. (b) Extreme occlusion. Pedestrians are easily occluded by fixed facilities, such as billboards and sunshades, for a long period, in which the dynamic environment makes them undergo large appearance variations.
such as autonomous driving [8] and intelligent surveil-
lance [29]. It aims at jointly locating targets through bound-
ing boxes and recognizing their identities throughout a
whole video [40]. Though great progress has been made
in the past few years, MOT still remains a challenging task
due to the dynamic environment, such as dense crowds and
extreme occlusions, in the tracking scenario.
In general, the existing MOT methods either follow the
tracking-by-detection [2] or tracking-by-regression [39, 40,
59], paradigm. The former methods first detect objects in
each video frame and then associate detections between ad-
jacent frames to create individual object tracks over time.
The latter methods conduct tracking differently: the ob-
ject detector not only provides frame-wise detections but
also replaces the data association with a continuous regres-
sion of each tracklet to its new position. Regardless of the
paradigm, all methods need to address the short-range and
long-range association problems, i.e., how to associate the
alive tracklets with detections in a short time, and how to
re-identify the lost tracklets with detections after a long pe-
riod.
For the short-range association problem, discriminative
motion patterns and appearance features [5, 44] are of-
ten learned to conduct data association between adjacent
frames. However, as shown in Figure 1 (a), it is tough to
learn discriminative representations in the dense crowd sce-
nario. On the one hand, the bounding boxes of detections
are too small to be distinguished by their appearances. On
the other hand, different targets need to plan suitable paths
to avoid collisions, which makes the resulting motions very
complex in the tracking process. For the long-range asso-
ciation problem, prior works [24, 43, 44] usually learn dis-
criminative appearance features to re-identify the lost tar-
gets after long occlusion [56–58]. As shown in Figure 1 (b),
the main bottleneck of these methods is how to keep the ro-
bustness of features against different poses, low resolution,
and poor illumination for the same target. To alleviate this
issue, the memory technology [7, 46] is widely applied to
store diverse features for each target to match different tar-
gets in a multi-query manner. However, a lot of memo-
ries and time will be consumed by the memory module and
multi-query regime, which is unfriendly to real-time tracking.
In this paper, we propose a simple yet effective object
tracker, i.e., MotionTrack, to address the short-range and
long-range association problems in MOT. In particular, our
MotionTrack follows the tracking-by-detection paradigm,
in which both interaction-aware and history trajectory-
based motions are learned to associate trajectories from a
short to long range. To deal with the short-range association
problem, we design a novel Interaction Module to model all
interactions between targets, which can predict their com-
plex motions to avoid collisions. The Interaction Module
uses an asymmetric adjacency matrix to represent the inter-
action between targets, and obtains the prediction after the
information fusion by a graph convolution network. Thanks
to the captured target interaction, those short-term occluded
targets can be successfully tracked in dense crowds. To deal
with the long-range association problem, we design a novel
Refind Module based on the history trajectory of each tar-
get. It can effectively re-identify the lost targets through two
steps: correlation calculation and error compensation. For
the lost tracklets and the unmatched detections, the correla-
tion calculation step takes the features of history trajectories
and current detections as input, and computes a correlation
matrix to represent the possibility that they are associated.
Afterward, the error compensation step is further taken to
revise the occluded trajectories. Extensive experiments on
two benchmark datasets (MOT17 and MOT20) demonstrate
that our proposed MotionTrack outperforms the previous state-of-the-art methods.
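One way to picture the interaction-aware prediction described above is to build an asymmetric adjacency from pairwise target features and fuse the target states with a graph-convolution-style step before regressing per-target motion offsets. Everything below (feature sizes, the single fusion layer, the offset parameterization) is an assumption for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class InteractionMotion(nn.Module):
    """Toy interaction-aware motion predictor for a set of tracked targets."""

    def __init__(self, state_dim=8, hidden=64):
        super().__init__()
        self.embed = nn.Linear(state_dim, hidden)
        self.attn = nn.Linear(2 * hidden, 1)     # scores pairwise influence (asymmetric)
        self.fuse = nn.Linear(hidden, hidden)    # graph-convolution-style fusion
        self.head = nn.Linear(hidden, 4)         # predicted box offset (dx, dy, dw, dh)

    def forward(self, states):                   # states: (N, state_dim) tracklet features
        h = torch.relu(self.embed(states))       # (N, hidden)
        n = h.shape[0]
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        adj = torch.softmax(self.attn(pair).squeeze(-1), dim=1)   # (N, N), row-normalized
        fused = torch.relu(self.fuse(adj @ h)) + h                # aggregate neighbours
        return self.head(fused)                                   # (N, 4) motion offsets

offsets = InteractionMotion()(torch.randn(6, 8))
```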
The main contribution of this work can be highlighted as
follows: (1) We propose a simple yet effective multi-object
tracker, MotionTrack, to address the short-range and long-
range association problems; (2) We design a novel Interac-
tion Module to model the interaction between targets, which
can handle complex motions in dense crowds; (3) We design
a novel Refind Module to learn discriminative motion pat-
terns, which can re-identify the lost tracklets with current
detections.
|
Mei_Exploring_and_Utilizing_Pattern_Imbalance_CVPR_2023 | Abstract
In this paper, we identify pattern imbalance from sev-
eral aspects, and further develop a new training scheme to
avert pattern preference as well as spurious correlation. In
contrast to prior methods which are mostly concerned with
category or domain granularity, ignoring the potential finer
structure that existed in datasets, we give a new definition
of seed category as an appropriate optimization unit to dis-
tinguish different patterns in the same category or domain.
Extensive experiments on domain generalization datasets of
diverse scales demonstrate the effectiveness of the proposed
method.
| 1. Introduction
Over the past decade, the rise of deep neural networks
(DNNs) has promoted the rapid development of various ar-
tificial intelligence communities [13, 20, 22]. Despite the
remarkable success, DNNs tend to take shortcuts to learn
spurious features [24, 27]. The causal correlation between
these spurious features and ground truth only exists in the
training set, which hinders the generalization of DNN mod-
els. This phenomenon is also known as domain shift. More-
over, due to the incomplete distribution of training data, the
learned model may have a preference for gender, race, and
skin color, which will lead to serious ethical problems.
To tackle these problems, various methods have been
proposed to discuss the failure modes of out-of-distribution
(OOD) generalization [18,30,32,43]. Some researchers fo-
cus on encouraging the model to learn domain invariant fea-
tures. Ganin et al. [9] simultaneously optimize a standard
classifier and a domain classifier through adversarial train-
ing, where the features extracted by DNN can be used for
original classification but failed on domain recognition to
inhibit domain characteristics learning. Arjovsky et al. [1]
restrict the learned representations to be classified by sim-
ilar optimal linear classifiers in different domains. Other
researchers start by avoiding spurious features. Zhang et
al. [42] argue that there exist sub-networks with preferable
domain generalization ability in the model and represent the
sub-network through a learnable mask. Nam et al. [28] as-
sume that the spurious features are generally embodied in
the texture or style of the image. They design SagNet to
decouple the content and style of the image, impelling the
feature extractor to pay more attention to the content infor-
mation. Most of the above methods manually design spe-
cific model structures to handle domain generalization.
Instead of designing specific networks, we are more
concerned about solving domain generalization by ex-
ploring the character of the dataset. In particular, sup-
pose a simple handwritten digit recognition scenario, where
a large amount of digit 0 possesses the red background and
digit 1 possesses the green background. The dataset with
only the above two patterns cannot be effectively learned,
since the model has no idea whether the task is to classify
the digits or the background color. Therefore, in a given
learnable data set, there must exist a minority of digit 0
with green background and digit 1 with red background.
These samples play a significant role in establishing the true
causal relationship between images and labels but have not
been paid enough attention. We call pattern imbalance the
phenomenon that different patterns in the same class ap-
pear imbalanced, thus leading to model learning preference.
Based on the above observations, we attribute the domain
generalization problem to the mining of hard or minority
patterns under imbalanced patterns. First of all, we iden-
tify the pattern imbalance in the dataset from several per-
spectives. We note that even though a model has achieved
favorable performance on average, an Achilles’ heel still exists
on some weak patterns. To alleviate the influence caused by
imbalance patterns, we pay more attention to these samples
of minority patterns and propose a training scheme based on
dynamic distribution. To this end, we define a new concept,
seed category, that is, the inherent pattern to distinguish, to
promote model training by paying full attention to various
patterns in the data set. Specifically, for samples of the same
class, the seed category is divided based on the distance of
the samples in the embedding space as a more fine-grained
weight allocation unit than previous methods [19, 32, 39].
In this paper, this dynamic and fine-grained training scheme
enables our method to obtain excellent domain generaliza-
tion performance.
We argue that it is effective to apply more detailed weight
allocation on out-of-distribution generalization tasks, that
is, the patterns that are crucial but laborious to be learned
by the model deserve special treatments, which is the most
significant difference between our method and the previ-
ous methods. Prior methods, e.g., GroupDRO [32], mini-
mize the worst-case loss over domains to treat different do-
mains differently, and the performance will be limited by
the coarse granularity of grouped distribution. On the con-
trary, the flexibility of our method is revealed in two as-
pects, that is, the weight allocation unit is more detailed and
the seed category can be dynamically adjusted during the
training process. Our contributions can be summarized as
follows:
• We identify pattern imbalance generally existed in
classification tasks and give a new definition of seed
category, that is, the inherent pattern to recognize.
• We further develop a dynamic weight distribution
training strategy based on seed category to facilitate
out-of-distribution performance.
• Extensive experiments on several domain generaliza-
tion datasets well demonstrate the effectiveness of the
proposed method.
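As a sketch of the kind of fine-grained re-weighting the seed-category idea suggests, one can split each class into seed categories by embedding distance (e.g., a simple per-class clustering) and then up-weight the worst-performing seed categories in the loss, in a GroupDRO-like spirit. The grouping rule and softmax weighting below are illustrative assumptions, not the paper's exact training scheme.

```python
import torch

def seed_category_weights(losses, class_ids, seed_ids, temperature=1.0):
    """Compute per-sample weights that emphasize poorly learned seed categories.

    losses: (N,) per-sample losses; class_ids, seed_ids: (N,) integer labels,
    where seed_ids partitions each class into finer patterns (seed categories).
    """
    group = class_ids * (seed_ids.max() + 1) + seed_ids       # unique id per (class, seed)
    uniq, inverse = torch.unique(group, return_inverse=True)
    group_loss = torch.zeros(len(uniq)).scatter_add_(0, inverse, losses) \
                 / torch.bincount(inverse).clamp(min=1)
    group_w = torch.softmax(group_loss / temperature, dim=0)   # harder groups weigh more
    return group_w[inverse] * len(uniq)                        # broadcast back to samples

losses = torch.rand(16)
w = seed_category_weights(losses, torch.randint(0, 3, (16,)), torch.randint(0, 2, (16,)))
weighted_loss = (w.detach() * losses).mean()
```

Because the seed categories can be re-estimated as the embedding space evolves, the weights adapt dynamically during training.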
|
Pang_DPE_Disentanglement_of_Pose_and_Expression_for_General_Video_Portrait_CVPR_2023 | Abstract
One-shot video-driven talking face generation aims at
producing a synthetic talking video by transferring the facial
motion from a video to an arbitrary portrait image. Head
pose and facial expression are always entangled in facial
motion and transferred simultaneously. However, the entan-
glement sets up a barrier for these methods to be used in
video portrait editing directly, where it may require to modify
the expression only while maintaining the pose unchanged.
One challenge of decoupling pose and expression is the lack
of paired data, such as the same pose but different expres-
sions. Only a few methods attempt to tackle this challenge
with the feat of 3D Morphable Models (3DMMs) for explicit
disentanglement. But 3DMMs are not accurate enough to
capture facial details due to the limited number of Blend-
shapes, which has side effects on motion transfer. In this
paper, we introduce a novel self-supervised disentanglement
framework to decouple pose and expression without 3DMMs
and paired data, which consists of a motion editing module,
a pose generator, and an expression generator. The editing
module projects faces into a latent space where pose motion
and expression motion can be disentangled, and the pose
or expression transfer can be performed in the latent space
conveniently via addition. The two generators render the
modified latent codes to images, respectively. Moreover, to
guarantee the disentanglement, we propose a bidirectional
cyclic training strategy with well-designed constraints. Eval-
uations demonstrate our method can control pose or expres-
sion independently and be used for general video editing.
Code: https://github.com/Carlyx/DPE
| 1. Introduction
Talking face generation has seen tremendous progress
in visual quality and accuracy over recent years. Literature
can be categorized into two groups, i.e., audio-driven [23]
and video-driven [16]. The former focuses on animating
an unseen portrait image or video with a given audio. The
latter aims at animating with a given video. Talking face
generation has a variety of meaningful applications, such as
digital human animation, film dubbing, etc. In this work, we
target video-driven talking face generation.
Recently, most methods [16, 26, 36, 39, 44] endeavor to
drive a still portrait image with a video from different per-
spectives, i.e., one-shot talking face generation. But only a
few [19, 21, 30] make effort to reenact the portrait in a video
with another talking video, i.e., video portrait editing. This is
a more challenging task because edited faces are required to
paste back to the original video and temporal dynamics need
to be maintained. Several methods [19, 28] provide personal-
ized solutions to this challenge by training a model on the
videos of a specific person only. However, the learned model
cannot generalize to other identities as the personalized train-
ing heavily overfits the facial motion of the specific person
and the background. For general video portrait editing, there-
fore, resorting to the generalization property of one-shot
talking face generation might be a feasible solution.
One-shot methods can transfer facial motion from a driv-
ing face to a source one, resulting in that the edited face
mimics the head pose and facial expression *of the driving
one. The facial motion consists of entangled pose motion
and expression motion, which are always transferred simul-
taneously in previous methods. However, the entanglement
makes those methods unable to transfer pose or expression
independently. Since the input to the processing network is
always the cropped face rather than the full original image,
if the pose is modified along with the expression, the paste-
back operation can cause noticeable artifacts around the crop
boundary, e.g., twisted neck and inconsistent background.
Consequently, most one-shot methods face this obstacle pre-
venting their application to general video portrait editing.
One challenge to disentangle pose and expression is the
lack of paired data, such as the same pose but different ex-
pressions, or vice versa. In the literature, there are only a
few exceptions that can get rid of this limitation, e.g., PIRen-
*Note that facial expression here differs from emotion.derer [25] and StyleHEAT [41], which are based on 3D
Morphable Models (3DMMs) [3], a predefined parametric
representation that decomposes expression, pose, and iden-
tity. However, the 3DMM-based methods heavily depend on
the decoupling accuracy of the 3DMM parameters, which
is far from satisfactory to reconstruct facial details due to
the limited number of Blendshapes. Besides, optimization-
based 3DMM parameter estimation is not efficient while
learning-based estimation will introduce more errors.
In this work, we propose a novel self-supervised dis-
entanglement framework to decouple pose and expression,
breaking through the limitation of paired data without us-
ing 3DMMs. Our framework has a motion editing module,
a pose generator, and an expression generator. The editing
module projects faces into a latent space where coupled pose
and expression motion in a latent code can be disentangled
by a network. Then, pose or expression transfer can be per-
formed by directly adding the latent code of a source face
with the disentangled pose or expression motion code of
a driving face. Finally, the two generators render modified
latent codes to images. More importantly, to accomplish the
disentanglement without paired data, we introduce a bidirec-
tional cyclic training method with well-designed constraints.
Specifically, given a source face S and a driving one D, we transfer the expression and pose from D to S sequentially, resulting in two synthetic faces, S′ and S′′. Since there is no paired data, no supervision is provided for S′. To tackle the missing supervision, we exchange the role of S and D to transfer the pose and expression motion from S to D, resulting in D′ and D′′. The distance between D′ and S′ is one constraint for learning. However, it is still not enough for disentangling pose and expression. Then, we discover another core constraint, i.e., face reconstruction. When S and D are the same, S′ and D′ are exactly the same as S and D, respectively. More analyses will be presented in Sec. 3.
Our main contributions are three-fold:
•We propose a self-supervised disentanglement frame-
work to decouple pose and expression for independent
motion transfer, without using 3DMMs and paired data.
•We propose a bidirectional cyclic training strategy with
well-designed constraints to achieve the disentangle-
ment of pose and expression.
•Extensive experiments demonstrate that our method can
control pose or expression independently, and can be
used for general video editing.
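The bidirectional cyclic constraints described in the introduction can be summarized as pseudo-losses of the following shape. The editing operator, the distance, and the same-identity assumption on the source/driving pair are placeholders and simplifying assumptions; this is a sketch of the constraint structure, not the actual training code.

```python
import torch

def cyclic_constraints(edit, dist, src, drv):
    """Sketch of bidirectional cyclic constraints for pose/expression disentanglement.

    edit(x, y, what): placeholder for "transfer `what` (pose or expression) from y to x".
    dist: a distance on images (e.g., L1). src, drv: source / driving face images,
    assumed here to come from the same identity so that S' and D' should match.
    """
    s_exp = edit(src, drv, "expression")          # S' : D's expression on S
    s_full = edit(s_exp, drv, "pose")             # S'': then also D's pose
    d_pose = edit(drv, src, "pose")               # D' : S's pose on D
    d_full = edit(d_pose, src, "expression")      # D'': then also S's expression

    # 1) Exchanged-role consistency between S' and D'.
    loss_cyc = dist(s_exp, d_pose)
    # 2) Reconstruction: editing a face with itself must be the identity mapping.
    loss_rec = dist(edit(src, src, "expression"), src) + dist(edit(src, src, "pose"), src)
    return loss_cyc + loss_rec, (s_full, d_full)

# Runnable toy usage with an identity "editor" and an L1 distance.
loss, _ = cyclic_constraints(lambda x, y, what: x,
                             lambda a, b: (a - b).abs().mean(),
                             torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```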
|
Li_PREIM3D_3D_Consistent_Precise_Image_Attribute_Editing_From_a_Single_CVPR_2023 | Abstract
We study the 3D-aware image attribute editing problem
in this paper, which has wide applications in practice. Re-
cent methods solved the problem by training a shared en-
coder to map images into a 3D generator’s latent space or
by per-image latent code optimization and then edited im-
ages in the latent space. Despite their promising results
near the input view, they still suffer from the 3D inconsis-
tency of produced images at large camera poses and im-
precise image attribute editing, like affecting unspecified
attributes during editing. For more efficient image inver-
sion, we train a shared encoder for all images. To alle-
viate 3D inconsistency at large camera poses, we propose
two novel methods, an alternating training scheme and a
multi-view identity loss, to maintain 3D consistency and
subject identity. As for imprecise image editing, we at-
tribute the problem to the gap between the latent space of
real images and that of generated images. We compare
the latent space and inversion manifold of GAN models
and demonstrate that editing in the inversion manifold can
achieve better results in both quantitative and qualitative
*Corresponding authors.evaluations. Extensive experiments show that our method
produces more 3D consistent images and achieves more
precise image editing than previous work. Source code
and pretrained models can be found on our project page:
https://mybabyyh.github.io/Preim3D/ .
| 1. Introduction
Benefiting from the well-disentangled latent space of
Generative Adversarial Networks (GANs) [12], many
works study GAN inversion [1, 2, 11, 28, 35, 36, 40] as well
as real image editing in the latent space [14, 15, 22, 31, 32].
With the popularity of Neural Radiance Fields (NeRF) [24],
some works start to incorporate it into GAN frameworks for
unconditional 3D-aware image generation [6, 7, 13, 25–27,
30]. In particular, EG3D [6], the state-of-the-art 3D GAN,
is able to generate high-resolution multi-view-consistent
images and high-quality geometry conditioned on gaus-
sian noise and camera pose. Similar to 2D GANs, 3D
GANs also have a well semantically disentangled latent
space [6, 13, 21, 33], which enables realistic yet challeng-
ing 3D-aware image editing.
Achieving 3D-aware image editing is much more chal-
lenging because it not only has to be consistent with the
input image at the input camera pose but also needs to pro-
duce 3D consistent novel views. Recently, 3D-Inv [21] uses
pivotal tuning inversion (PTI) [29], first finding out a piv-
otal latent code and then finetuning the generator with the
fixed pivotal latent code, to obtain the latent code and edit
the image attributes in the latent space. IDE-3D [33] pro-
poses a hybrid 3D GAN inversion approach combining tex-
ture and semantic encoders and PTI technique, accelerating
the optimization process by the encoded initial latent code.
Pixel2NeRF [5] is the first to achieve 3D inversion by train-
ing an encoder mapping a real image to the latent space Z
ofπ-GAN [7]. However, these methods still do not solve
the problem of 3D consistency at large camera poses and
precise image attribute editing. As shown in Fig. 4, some
inverted images meet head distortion at large camera poses,
or some unspecific attributes of edited images are modified.
In this paper, we propose a pipeline that enables PRecise
Editing in the Inversion Manifold with 3D consistency effi-
ciently, termed PREIM3D . There are three goals to achieve
for our framework, (i) image editing efficiently , (ii) precise
inversion , which aims to maintain realism and 3D consis-
tency of multiple views, and (iii) precise editing , which is to
edit the desired attribute while keeping the other attributes
unchanged. 3D-Inv and IDE-3D optimized a latent code for
each image, which is not suitable for interactive applica-
tions. Following Pixel2NeRF, we train a shared encoder for
all images for efficiency.
To address precise inversion , we introduce a 3D consis-
tent encoder to map a real image into the latent space W+
of EG3D, and it can infer a latent code with a single forward
pass. We first design a training scheme with alternating in-
domain images (i.e., the generated images) and out-domain
images (i.e., the real images) to help the encoder maintain
the 3D consistency of the generator. We optimize the en-
coder to reconstruct the input images in the out-domain im-
age round. In the in-domain image round, we additionally
optimize the encoder to reconstruct the ground latent code,
which will encourage the distribution of the inverted latent
code closer to the distribution of the original latent code of
the generator. Second, to preserve the subject’s identity, we
propose a multi-view identity loss calculated between the
input image and novel views randomly sampled in the sur-
rounding of the input camera pose.
Though many works tried to improve the editing pre-
cision by modifying latent codes in Zspace [31], W
space [14, 17, 34], W+space [1, 1, 40], and Sspace [37],
they all still suffer from a gap between real image editing
and generated image editing because of using the editing di-
rections found in the original generative latent space to edit
the real images. To bridge this gap, we propose a real im-
age editing subspace, which we refer to inversion manifold .
We compare the inversion manifold and the original latentspace and find the distortion between the attribute editing
directions. We show that the editing direction found in the
inversion manifold can control the attributes of the real im-
ages more precisely. To our knowledge, we are the first
to perform latent code manipulation in the inversion mani-
fold. Our methodology is orthogonal to some existing edit-
ing methods and can improve the performance of manipu-
lation in qualitative and quantitative results when integrated
with them. Figure 1 shows the inversion and editing results
produced by our method. Given a single real image, we
achieve 3D reconstruction and precise multi-view attribute
editing.
The contributions of our work can be summarized as fol-
lows:
• We present an efficient image attribute editing method
by training an image-shared encoder for 3D-aware
generated models in this paper. To keep 3D consis-
tency at large camera poses, we propose two novel
methods, an alternating training scheme and a multi-
view identity loss, to maintain 3D consistency and sub-
ject identity.
• We compare the latent space and inversion manifold
of GAN models, and demonstrate that editing in the
inversion manifold can achieve better results in both
quantitative and qualitative evaluations. The proposed
editing space helps to close the gap between real image
editing and generated image editing.
• We conduct extensive experiments, including both
quantitative and qualitative, on several datasets to
show the effectiveness of our methods.
|
Muglikar_Event-Based_Shape_From_Polarization_CVPR_2023 | Abstract
State-of-the-art solutions for Shape-from-Polarization
(SfP) suffer from a speed-resolution tradeoff: they either
sacrifice the number of polarization angles measured or
necessitate lengthy acquisition times due to framerate con-
straints, thus compromising either accuracy or latency. We
tackle this tradeoff using event cameras. Event cameras
operate at microseconds resolution with negligible motion
blur, and output a continuous stream of events that precisely
measures how light changes over time asynchronously. We
propose a setup that consists of a linear polarizer rotating
at high speeds in front of an event camera. Our method
uses the continuous event stream caused by the rotation
to reconstruct relative intensities at multiple polarizer an-
gles. Experiments demonstrate that our method outper-
forms physics-based baselines using frames, reducing the
MAE by 25% in synthetic and real-world datasets. In the
real world, we observe, however, that the challenging con-
ditions (i.e., when few events are generated) harm the per-
formance of physics-based solutions. To overcome this,
we propose a learning-based approach that learns to esti-
mate surface normals even at low event-rates, improving the
physics-based approach by 52% on the real world dataset.
The proposed system achieves an acquisition speed equiva-
lent to 50fps (>twice the framerate of the commercial po-
larization sensor) while retaining the spatial resolution of
1MP . Our evaluation is based on the first large-scale dataset
for event-based SfP .
Code, dataset and video are available under:
https://rpg.ifi.uzh.ch/esfp.html
https://youtu.be/sF3Ue2Zkpec
| 1. Introduction
Polarization cues have been used in many applications
across computer vision, including image dehazing [41],
panorama stitching and mosaicing [42], reflection removal
[21], image segmentation [25], optical flow gyroscope, [47]
and material classification [5]. Among these, Shape-from-
Polarization (SfP) methods exploit changes in polariza-
Figure 1. Surface normal estimation using event-based SfP. (a) Setup: rotating a polarizer in front of an event camera creates sinusoidal changes in intensities, triggering events. (b) Method: the proposed event-based method uses the continuous event stream to reconstruct relative intensities at multiple polarizer angles, which are used to estimate surface normals using physics-based and learning-based methods. (c) Baselines: our approach outperforms image-based baselines [24, 51].
tion information to infer geometric properties of an object
[2,18,22,49,51]. It uses variations in radiance under differ-
ent polarizer angles to estimate the 3D surface of a given
object. In particular, when unpolarized light is reflected
from a surface, it becomes partially polarized depending on
the geometry and material of the surface. Surface normals,
and thus 3D shape, can then be estimated by orienting a po-
larizing filter in front of a camera sensor and studying the
relationship between the polarizer angle and the magnitude
of light transmission. SfP has a number of advantages over
both active and passive depth sensing methods. Unlike ac-
Figure 2. Illustration of SfP methods: (a) division of focal plane; (b) division of time.
tive depth sensors that use structured light (SL) [7, 48] or
time-of-flight (ToF), SfP is not limited by material type and
can be applied to non-Lambertian surfaces like transparent
glass and reflective, metallic surfaces.
Despite these advantages, however, estimating high-
quality surface normals from polarization images is still
an open challenge. Division of Focal Plane (DoFP) meth-
ods [22, 32, 33, 51] trade-off spatial resolution for latency
and allow for the capture of four polarizations in the same
image. This is achieved through a complex manufactur-
ing process that requires precisely placing a micro-array
of four polarization filters on the image sensor [32, 33], as
shown in Fig. 2a. Despite the reduced latency, this system
constrains the maximum number of polarization angles that
can be captured, potentially impacting the accuracy of the
estimates, as we show in our results. Additionally, the spa-
tial resolution of the sensor is also reduced, requiring fur-
ther mosaicing-based algorithms for high-resolution recon-
struction [50]. On the other hand, Division of Time (DoT)
methods [2, 18, 49] provide full-resolution images and are
not limited in the number of polarization angles they can
capture thanks to a rotating polarizing filter put in front of
the image sensor. The frame rate of the sensing camera,
however, effectively limits the rate at which the filter can
rotate, increasing the acquisition time significantly (acqui-
sition time =N/f , where Nis the number of polarizer an-
gles and fis the framerate of the camera). For this reason,
commercial solutions, such as the Lucid Polarisens [33], fa-
vor DoFP, despite the lower resolution of both polarization
angles and image pixels. To overcome this shortcoming, re-
cently, significant progress has been made with data-driven
priors [22, 51]. However, these solutions still fall short in
terms of computational complexity when compared to DoT
methods. A solution able to bridge the accuracy of DoT
with the speed of DoFP is thus still lacking in the field.
In this paper, we tackle the speed-resolution trade-off
using event cameras. Event cameras are efficient high-
speed vision sensors that asynchronously measure changes
in brightness intensity with microsecond resolution. We ex-
ploit these characteristics to design a DoT approach able
to operate at high acquisition speeds (up to 5,000fps vs.
22fps of standard frame-based devices) and full-resolution
(1280×720), as shown in Fig. 1. Thanks to the working
principles of event-cameras, our sensing device provides a
continuous stream of information for estimating the sur-

Table 1. Summary of publicly available datasets for SfP.
Polar3D [18]: 6 images (DoT), 18 MP, size 3
DeepSfP [51]: 4 images (DoFP), 1224×1024, size 236
SPW [22]: 4 images (DoFP), 1224×1024, size 522
ESfP-Synthetic (ours): events (DoT) + 12 images (DoT), 512×512, size 104
ESfP-Real (ours): events (DoT) + 4 images (DoFP), 1280×720, size 90
face normal as compared to the discrete intensities cap-
tured at fixed polarization angles of traditional approaches.
We present two algorithms to estimate surface normals
from events, one using geometry and the other based on
a learning-based approach. Our geometry-based method
takes advantage of the continuous event stream to recon-
struct relative intensities at multiple polarizer angles, which
are then used to estimate the surface normal using tradi-
tional methods. Since events provide temporally rich in-
formation, this results in better reconstruction of intermedi-
ate intensities. This leads to an improvement of up to 25% in
surface normal estimation, both on the synthetic dataset and
on the real-world dataset. On the real dataset, however, the
non-idealities of the event camera introduce a low fill-rate
(percentage of pixels triggering events) of 3.6% on average
(see Section 3.1). To overcome this, we propose a deep
learning framework which uses a simple U-Net network to
predict the dense surface normals from events. Our data-
driven approach improves the accuracy over the geometry-
based method by 52%. Our contributions can be summa-
rized as follows:
• A novel approach for shape-from-polarization using an
event camera. Our approach utilizes the rich temporal
information of events to reconstruct event intensities at
multiple polarization angles. These event intensities are
then used to estimate the surface normal. Our method
outperforms previous state-of-the-art physics-based ap-
proaches using images by 25% in terms of accuracy.
• A learning-based framework which predicts surface nor-
mals using events to solve the issue of low fill-rate com-
mon in the real-world. This framework improves the es-
timation over physics-based approach by 52% in terms
of angular error.
• Lastly, we present the first large-scale dataset containing
over 90 challenging scenes for SfP with events and im-
ages. Our dataset consists of events captured by rotating
a polarizer in front of an event camera, as well as images
captured using the Lucid Polarisens [33].
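As a rough illustration of the physics-based pipeline described above (not the authors' implementation), the sketch below integrates per-pixel event polarities into a relative log-intensity trace and fits the standard Malus-law sinusoid by least squares to recover the DoLP and AoLP; the contrast threshold and the sampled polarizer angles are assumed values.

```python
import numpy as np

def reconstruct_log_intensity(event_polarities, C=0.1):
    """Integrate the event polarities (+1/-1) of one pixel into a relative
    log-intensity trace; C is an assumed contrast threshold."""
    return C * np.cumsum(event_polarities)

def fit_malus(angles, intensities):
    """Fit I(theta) = a + b*cos(2*theta) + c*sin(2*theta) by least squares
    and convert the coefficients to DoLP and AoLP."""
    A = np.stack([np.ones_like(angles),
                  np.cos(2 * angles),
                  np.sin(2 * angles)], axis=1)
    a, b, c = np.linalg.lstsq(A, intensities, rcond=None)[0]
    dolp = np.sqrt(b**2 + c**2) / a            # degree of linear polarization
    aolp = 0.5 * np.arctan2(c, b) % np.pi      # angle of linear polarization
    return dolp, aolp

# Toy usage: synthesize measurements at four polarizer angles and recover them.
angles = np.deg2rad(np.array([0.0, 45.0, 90.0, 135.0]))
i_un, rho, phi = 1.0, 0.4, np.deg2rad(30.0)
meas = 0.5 * i_un * (1 + rho * np.cos(2 * angles - 2 * phi))
print(fit_malus(angles, meas))   # ~ (0.4, 0.5236)
```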
|
Lu_Robust_and_Scalable_Gaussian_Process_Regression_and_Its_Applications_CVPR_2023 | Abstract
This paper introduces a robust and scalable Gaussian
process regression (GPR) model via variational learning.
This enables the application of Gaussian processes to a
wide range of real data, which are often large-scale and
contaminated by outliers. Towards this end, we employ
a mixture likelihood model where outliers are assumed to
be sampled from a uniform distribution. We next derive
a variational formulation that jointly infers the mode of
data, i.e., inlier or outlier, as well as hyperparameters by
maximizing a lower bound of the true log marginal like-
lihood. Compared to previous robust GPR, our formula-
tion approximates the exact posterior distribution. The in-
ducing variable approximation and stochastic variational
inference are further introduced to our variational frame-
work, extending our model to large-scale data. We ap-
ply our model to two challenging real-world applications,
namely feature matching and dense gene expression impu-
tation. Extensive experiments demonstrate the superiority
of our model in terms of robustness and speed. Notably,
when matching 4k feature points, its inference is completed
in milliseconds with almost no false matches. The code is at
github.com/YifanLu2000/Robust-Scalable-GPR .
| 1. Introduction
Gaussian processes (GPs) [31] are probably the primary
non-parametric method for inference on latent functions.
They have a wide range of applications from biology [3] to
computer vision [41]. A commonly used observation model
for Gaussian process regression (GPR) is the Normal dis-
tribution, which brings great convenience to the inference.
Unfortunately, a well-known limitation of the Gaussian ob-
servation model is its sensitivity to outliers in data. As il-
lustrated in Fig. 1 (b), a few outliers can drastically destroy
the entire posterior regression result. This hinders the real-
world applications of GPR for many domains, where out-
liers are often inevitable. This paper intends to conquer the
GPR with outlier contaminated data.
The idea of robust regression is not new. Outlier detec-
*Corresponding Author
Figure 1. Regression with our model. (a) Perform exact GPR
from 100 inliers. (b) When there are only 6 outliers in the data, the
exact GPR leads to completely wrong results. (c) By comparison,
our model is able to recover the exact posterior even facing 100
outliers. (d) The feature matching result using our model. (e) The
dense spatial gene expression imputation result using our model.
tion has been extensively and systematically described in
[6,9,10,29]. In the context of GPR, many efforts tried to re-
place the Gaussian likelihood with other distributions show-
ing heavy-tail behaviors, including Student- t[16,21,28,30],
Laplace [22, 30], Gaussian mixture [8, 22, 27], and data-
dependent noise model [17]. The challenge with these non-
Gaussian likelihoods lies in the inference, which is analyti-
cally intractable. To this end, many approximation schemes
have been applied, despite having high computational com-
plexity, e.g., Markov Chain Monte Carlo (MCMC) sam-
pling and Expectation Propagation (EP) [22].
In this paper, we propose a more effective mixture like-
lihood model, where uniform distribution accounts for the
outliers and Gaussian for inliers. In our formulation, the
outliers are independent of the GP and do not affect the
computation of the posterior GP, thereby allowing to tol-
erate more outliers. We next introduce a variational method
that jointly determines the modes of data ( i.e., inlier or out-
lier) as well as hyperparameters by maximizing a lower
bound to the marginal likelihood. We highlight that the dif-
ference between our variational formulation and previous
methods is that the modes of data now become variational
parameters and are obtained by minimizing the Kullback-
Leibler (KL) divergence between the variational and the
true posterior distribution. Thus, the proposed formulation
is less likely to overfit and is able to approximate the exact
posterior GP only from inliers, as in Fig. 1 (c).
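For intuition, the inlier/outlier mixture described above can be illustrated with a per-point responsibility computation; this is only a sketch of the likelihood model (the mixing weight, noise level, and uniform range are assumptions), not the paper's variational inference procedure.

```python
import numpy as np

def inlier_responsibility(y, f_mean, sigma=0.1, pi_in=0.9, y_range=10.0):
    """Posterior probability that each observation y_i is an inlier under a
    Gaussian(inlier) + Uniform(outlier) mixture, given the current GP mean f_mean."""
    gauss = np.exp(-0.5 * ((y - f_mean) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    unif = 1.0 / y_range                 # outliers assumed uniform over the data range
    num = pi_in * gauss
    return num / (num + (1 - pi_in) * unif)

# Toy usage: points far from the current GP mean get responsibility near 0.
y = np.array([0.01, -0.02, 3.0])
f = np.zeros(3)
print(inlier_responsibility(y, f))   # ~[0.99, 0.99, ~0.0]
```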
Inspired by [37], the sparse inducing variable approxi-
mation is integrated into our variational framework, which
retains the exact GP prior but performs posterior approx-
imation, and reduces the time complexity from O(n^3) to
O(nm^2). By treating the inducing variables as global vari-
ables [18], our variational model enjoys the acceleration by
Stochastic Variational Inference (SVI) [20]. It performs
stochastic optimization from natural gradient and further
decreases the time complexity to O(km^2). This provides
a guarantee for our model to scale to large-scale data.
We apply our robust GPR model to two real-world ap-
plications, say feature matching and dense spatial gene ex-
pression imputation, as illustrated in the Figs. 1 (d) and (e).
Extensive experiments demonstrate the superiority of our
method on both numerical data and real applications.
To summarize, our contributions include the following.
(i) We present a robust Gaussian process regression model,
which uses variational learning to approximate the true ex-
act posterior. (ii) We leverage inducing variables and SVI
to adapt our model to large-scale data. (iii) Two applica-
tions of our model are described. Extensive experimental
validation demonstrates the superiority of our model.
|
Potje_Enhancing_Deformable_Local_Features_by_Jointly_Learning_To_Detect_and_CVPR_2023 | Abstract
Local feature extraction is a standard approach in com-
puter vision for tackling important tasks such as image
matching and retrieval. The core assumption of most meth-
ods is that images undergo affine transformations, disregard-
ing more complicated effects such as non-rigid deformations.
Furthermore, incipient works tailored for non-rigid corre-
spondence still rely on keypoint detectors designed for rigid
transformations, hindering performance due to the limita-
tions of the detector. We propose DALF (Deformation-Aware
Local Features), a novel deformation-aware network for
jointly detecting and describing keypoints, to handle the
challenging problem of matching deformable surfaces. All
network components work cooperatively through a feature
fusion approach that enforces the descriptors’ distinctiveness
and invariance. Experiments using real deforming objects
showcase the superiority of our method, where it delivers 8%
improvement in matching scores compared to the previous
best results. Our approach also enhances the performance
of two real-world applications: deformable object retrieval
and non-rigid 3D surface registration. Code for training, in-
ference, and applications are publicly available at verlab.
dcc.ufmg.br/descriptors/dalf_cvpr23 .
| 1. Introduction
Finding pixel-wise correspondences between images de-
picting the same surface is a long-standing problem in com-
puter vision. Besides varying illumination, viewpoint, and
distance to the object of interest, real-world scenes impose
additional challenges. The vast majority of the correspon-
dence algorithms in the literature assume that our world is
rigid, but this assumption is far from the truth. It is notice-
able that the community invests significant efforts into novel
architectures and training strategies to improve image match-
DALF
DISKMS = 0.48
MS = 0.38Figure 1. Image matching under deformations . We propose
DALF, a deformation-aware keypoint detector and descriptor for
matching deformable surfaces. DALF (top) enables local feature
matching across deformable scenes with improved matching scores
(MS) compared to state-of-the-art, as illustrated with DISK [37].
Green lines show correct matches, and red markers, the mismatches.
ing for rigid scenes [6, 19, 26, 34, 37, 42], but disregards the
fact that many objects in the real world can deform in more
complex ways than an affine transformation.
Many applications in industry, medicine, and agricul-
ture require tracking, retrieval, and monitoring of arbitrary
deformable objects and surfaces, where a general-purpose
matching algorithm is needed to achieve accurate results.
Since the performance of standard affine local features sig-
nificantly decreases for scenarios such as strong illumination
changes and deformations, a few works considering a wider
class of transformations have been proposed [24, 25, 30].
However, all the deformation-aware methods neglect the
keypoint detection phase, limiting their applicability in chal-
lenging deformations. Although the problems of keypoint
detection and description can be treated separately, recent
works that jointly perform detection and description of fea-
tures [4, 26] indicate an entanglement of the two tasks since
the keypoint detection can impact the performance of the de-
scriptor. The descriptor for its turn can be used to determine
reliable points optimized for specific goals. In this work,
we propose a new method for jointly learning keypoints and
descriptors robust to deformations, viewpoint, and illumina-
tion changes. We show that the detection phase is critical
to obtain robust matching under deformations. Fig. 1 de-
picts an image pair with challenging deformations, where
our method can extract reliable keypoints and match them
correctly, significantly increasing matching scores compared
to the recent state-of-the-art approach DISK [37].
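For context on how such keypoints and descriptors are typically consumed to produce matching scores, the snippet below shows a generic mutual-nearest-neighbour matcher over L2-normalized descriptors; it is independent of DALF's actual architecture and matching protocol.

```python
import torch
import torch.nn.functional as F

def mutual_nn_match(desc_a, desc_b):
    """desc_a: (N, D), desc_b: (M, D) L2-normalized descriptors.
    Returns index pairs that are mutual nearest neighbours."""
    sim = desc_a @ desc_b.t()                 # cosine similarity matrix
    nn_ab = sim.argmax(dim=1)                 # best match in B for each A
    nn_ba = sim.argmax(dim=0)                 # best match in A for each B
    idx_a = torch.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a            # keep only mutual agreements
    return torch.stack([idx_a[mutual], nn_ab[mutual]], dim=1)

# Toy usage with random descriptors.
a = F.normalize(torch.randn(100, 128), dim=1)
b = F.normalize(torch.randn(120, 128), dim=1)
print(mutual_nn_match(a, b).shape)
```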
Contributions. (1) Our first contribution is a new end-to-
end method called DALF (Deformation-Aware Local Fea-
tures), which jointly learns to detect keypoints and extract
descriptors with a mutual assistance strategy to handle sig-
nificant non-rigid deformations. Our method boosts the
state-of-the-art in this type of feature matching by 8% using
only synthetic warps as supervision, showing strong gener-
alization capabilities. We leverage a reinforcement learning
algorithm for unified training, combined with spatial trans-
formers that capture deformations by learning context priors
affecting the image; (2)Second, we introduce a feature fu-
sion approach, a major difference from previous methods
that allows the model to tackle challenging deformations
with complementary features (with distinctiveness and in-
variance properties) obtained from both the backbone and
the spatial transformer module. This approach is shown
beneficial with substantial performance improvements com-
pared to the non-fused features; (3)Finally, we demonstrate
state-of-the-art results in non-rigid local feature applications
for deformable object retrieval and non-rigid 3D surface reg-
istration. We also will make the code and both applications
publicly available to the community.
|
Ni_PATS_Patch_Area_Transportation_With_Subdivision_for_Local_Feature_Matching_CVPR_2023 | Abstract
Local feature matching aims at establishing sparse cor-
respondences between a pair of images. Recently, detector-
free methods present generally better performance but are
not satisfactory in image pairs with large scale differences.
In this paper, we propose Patch Area Transportation with
Subdivision (PATS) to tackle this issue. Instead of build-
ing an expensive image pyramid, we start by splitting the
original image pair into equal-sized patches and gradu-
ally resizing and subdividing them into smaller patches with
the same scale. However, estimating scale differences be-
tween these patches is non-trivial since the scale differ-
ences are determined by both relative camera poses and
scene structures, and thus spatially varying over image
pairs. Moreover, it is hard to obtain the ground truth for
real scenes. To this end, we propose patch area transporta-
tion, which enables learning scale differences in a self-
supervised manner. In contrast to bipartite graph match-
ing, which only handles one-to-one matching, our patch
area transportation can deal with many-to-many relation-
ships. PATS improves both matching accuracy and cover-
age, and shows superior performance in downstream tasks,
such as relative pose estimation, visual localization, and
optical flow estimation. The source code is available at
https://zju3dv.github.io/pats/ .
| 1. Introduction
Local feature matching between images is essential in
many computer vision tasks which aim to establish corre-
spondences between a pair of images. In the past decades,
local feature matching [3, 40] has been widely used in a
large number of applications such as structure from mo-
tion (SfM) [44, 64], simultaneous localization and mapping
(SLAM) [30,36,62], visual localization [19,41], object pose
estimation [22, 61], etc. The viewpoint change from the
*Junjie Ni and Yijin Li contributed equally to this work.
†Guofeng Zhang is the corresponding author.
Figure 1. Two-view reconstruction results of LoFTR [49],
ASpanFormer [7], PDC-Net+ [58] and our approach on
MegaDepth dataset [27]. PATS can extract high-quality matches
under severe scale variations and in indistinctive regions with
repetitive patterns, which allows semi-dense two-view reconstruc-
tion by simply triangulating the matches in a image pair. In con-
trast, other methods either obtain fewer matches or even obtain
erroneous results.
source image to the target image may lead to scale varia-
tions, which is a long-standing challenge in local feature
matching. Large variations in scale lead to two severe
consequences: Firstly, the appearance is seriously distorted,
which makes learning the feature similarity more challeng-
ing and impacts the correspondence accuracy. Secondly,
there may be several pixels in the source image correspond-
ing to pixels in a local window in the target image. How-
ever, existing methods [33, 40] only permit one potential
target feature to be matched in the local window, and the
following bipartite graph matching only allows one source
pixel to win the matching. The coverage of correspondences
derived from such feature matches is largely suppressed and
will impact the downstream tasks.
Before the deep learning era, SIFT [33] is a milestone
that tackles the scale problem by detecting local features
on an image pyramid and then matching features crossing
pyramid levels, called scale-space analysis. This technique
is also adopted in the inference stage of learning-based lo-
cal features [39]. Recently, LoFTR abandons feature detec-
tion stage and learns to directly draw feature matches via
simultaneously encoding features from both images based
Figure 2. Scale Alignment with Patch Area Transportation.
Our approach learns to find the many-to-many relationship and
scale differences through solving the patch area transportation.
Then we crop the patches and resize the image content to align
the scale, which removes the appearance distortion.
on the attention mechanism. By removing the information
bottleneck caused by detectors, LoFTR [49] produces bet-
ter feature matches. However, LoFTR does not handle the
scale problem and the scale-space analysis is infeasible in
this paradigm because conducting attention intra- and inter-
different scales will bring unbearable increasing computa-
tional costs. As a result, the scale curse comes back again.
In this paper, we propose Patch Area Transportation
with Subdivision (PATS) to tackle the scale problem in a
detector-free manner. The appearance distortion can be al-
leviated if the image contents are aligned according to their
scale differences before feature extraction. As shown in
Fig. 2, if the target image is simply obtained by magnify-
ing the source image twice, a siamese feature encoder will
produce features with large discrepancies at corresponding
locations. The discrepancies are corrected if we estimate
the scale difference and resize the target image to half be-
fore feature extraction. Considering that scale differences
are spatially varying, we split the source image into equal-
sized patches and then align the scale patch-wisely. Specif-
ically, we identify a corresponding rectangular region in the
target image for each source patch and then resize the im-
age content in the region to align the scale. By also splitting
the target image into patches, the rectangular regions can
be represented with patches bounded by boxes. Based on
this representation, one source patch corresponds to mul-
tiple target patches. Moreover, the bounding box may be
overlapped, indicating that one target patch may also corre-
spond to multiple source patches. Here comes the question:
how can we find many-to-many patch matches instead of
one-to-one [7, 42, 49]?
We observe that finding target patches for a source patch
can be regarded as transporting the source patch to the
target bounding box, where each target patch inside the
box occupies a portion of the content. In other words,
the area proportion that the target patches occupying the
source patch should be summed to 1. Motivated by this
observation, we propose to predict the target patches’ area
and formulate patch matching as a patch area transporta-
tion problem that transports areas of source patches to tar-
get patches with visual similarity restrictions. Solving this
problem with Sinkhorn [10], a differential optimal trans-
port solver, also encourages our neural network to bettercapture complex visual priors. Once the patch matching
is finished, the corresponding bounding boxes can be eas-
ily determined. According to the patch area transportation
with patch subdivision from coarse to fine, PATS signifi-
cantly alleviates appearance distortion, which largely eases
the difficulty of feature learning to measure visual similar-
ity. Moreover, source patches being allowed to match over-
lapped target patches naturally avoid the coverage reduction
problem. After resizing the target regions according to es-
timated scale differences, we subdivide the corresponding
source patch and target region to obtain finer correspon-
dences, dubbed as scale-adaptive patch subdivision. Fig. 1
shows qualitative results of our approach.
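To illustrate the transport step in isolation, the following is a generic entropic-regularized Sinkhorn sketch (not the exact cost, masses, or formulation used in PATS): source patch areas are transported to predicted target patch areas under a visual-similarity cost.

```python
import torch

def sinkhorn_transport(cost, src_area, tgt_area, eps=0.05, n_iters=50):
    """Entropic optimal transport: returns a plan P with row sums ~src_area and
    column sums ~tgt_area, preferring low-cost (visually similar) pairs."""
    K = torch.exp(-cost / eps)                # (M, N) kernel from the cost matrix
    u = torch.ones_like(src_area)
    v = torch.ones_like(tgt_area)
    for _ in range(n_iters):
        u = src_area / (K @ v + 1e-9)
        v = tgt_area / (K.t() @ u + 1e-9)
    return u.unsqueeze(1) * K * v.unsqueeze(0)

# Toy usage: 4 source patches of equal area, 6 target patches with predicted areas.
M, N = 4, 6
cost = torch.rand(M, N)
src_area = torch.full((M,), 1.0 / M)
tgt_area = torch.rand(N); tgt_area = tgt_area / tgt_area.sum()
P = sinkhorn_transport(cost, src_area, tgt_area)
print(P.sum(dim=1))   # ~src_area: each source patch's area is fully transported
```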
Our contributions in this work can be summarized as
three folds: 1) We propose patch area transportation to
handle the many-to-many patch-matching challenge and
grants the neural network the ability to learn scale differences
in a self-supervised manner. 2) We pro-
pose a scale-adaptive patch subdivision to effectively re-
fine the correspondence quality from coarse to fine. 3) Our
patch area transportation with subdivision (PATS) achieves
state-of-the-art performance and presents strong robustness
against scale variations.
|
Peters_pCON_Polarimetric_Coordinate_Networks_for_Neural_Scene_Representations_CVPR_2023 | Abstract
Neural scene representations have achieved great suc-
cess in parameterizing and reconstructing images, but cur-
rent state of the art models are not optimized with the
preservation of physical quantities in mind. While current
architectures can reconstruct color images correctly, they
create artifacts when trying to fit maps of polar quanti-
ties. We propose polarimetric coordinate networks (pCON),
a new model architecture for neural scene representations
aimed at preserving polarimetric information while accu-
rately parameterizing the scene. Our model removes arti-
facts created by current coordinate network architectures
when reconstructing three polarimetric quantities of inter-
est. All code and data can be found at this link: https:
//visual.ee.ucla.edu/pcon.htm .
| 1. Introduction
Neural scene representations are a popular and useful
tool in many computer vision tasks, but these models are
optimized to preserve visual content, not physical informa-
tion. Current state-of-the-art models create artifacts due to
the presence of a large range of spatial frequencies when re-
constructing polarimetric data. Many tasks in polarimetric
imaging rely on precise measurements, and thus even small
artifacts are a hindrance for downstream tasks that would
like to leverage neural reconstructions of polarization im-
ages. In this work we present pCON, a new architecture for
neural scene representations. pCON leverages images’ sin-
gular value decompositions to effectively allocate network
capacity to learning the more difficult spatial frequencies
at each pixel. Our model reconstructs polarimetric images
without the artifacts introduced by state-of-the-art models.
The polarization of light passing through a scene con-
tains a wealth of information, and while current neural rep-
resentations can represent single images accurately, they
produce noticeable visual artifacts when trying to represent
multiple polarimetric quantities concurrently.
*Equal contribution.
We propose a new architecture for neural scene repre-
sentations that can effectively reconstruct polarimetric im-
ages without artifacts. Our model reconstructs color images
accurately while also ensuring the quality of three impor-
tant polarimetric quantities, the degree ( ρ) and angle ( ϕ)of
linear polarization (DoLP and AoLP), and the unpolarized
intensity Iun. This information is generally captured using
images of a scene taken through linear polarizing filters at
four different angles. Instead of learning a representation
of these images, our model operates directly on the DoLP,
AoLP and unpolarized intensity maps. When learning to
fit these images, current coordinate network architectures
produce artifacts in the predicted DoLP and unpolarized in-
tensity maps. To alleviate this issue, we take inspiration
from traditional image compression techniques and fit im-
ages using their singular value decompositions. Images can
be compressed by reconstructing them using only a subset
of their singular values [28]. By utilizing different, non-
overlapping sets of singular values to reconstruct an image,
the original image can be recovered by summing the indi-
vidual reconstructions together. Our model is supervised in
a coarse-to-fine manner, which helps the model to represent
both the low- and high-frequency details present in maps
of polarimetric quantities without introducing noise or tiling
artifacts. A demonstration of the efficacy of our model can be
seen in Fig. 1 and Table 1. Furthermore, our model is capa-
ble of representing images at varying levels of detail, creat-
ing a tradeoff between performance and model size without
retraining.
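The singular-value splitting idea can be illustrated in a few lines of NumPy; this sketches only the decomposition (the coordinate network, its supervision, and the choice of split points are not reproduced here), showing that reconstructions from disjoint groups of singular values sum back to the original image.

```python
import numpy as np

def svd_components(img, splits):
    """Split the SVD of `img` into disjoint groups of singular values and
    return one partial reconstruction per group; their sum equals `img`."""
    U, S, Vt = np.linalg.svd(img, full_matrices=False)
    parts = []
    start = 0
    for end in splits:
        parts.append(U[:, start:end] @ np.diag(S[start:end]) @ Vt[start:end, :])
        start = end
    return parts

# Toy usage: a coarse (top-8 singular values) plus a fine (remaining) component.
img = np.random.rand(64, 64)
coarse, fine = svd_components(img, splits=[8, 64])
print(np.allclose(coarse + fine, img))   # True
```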
1.1. Contributions
To summarize, the contributions of our work include:
• a coordinate network architecture for neural scene rep-
resentations of polarimetric images;
• a training strategy for our network which learns a se-
ries of representations using different sets of singular
values, allowing for a trade-off between performance
and model size without retraining;
Figure 1. Reconstruction comparison (panels: GT, SIREN [52], ACORN [34], ReLU P.E., Ours). Our model reconstructs the training scene more accurately than other architectures. Our model does not have the noise pattern present in reconstructions from SIREN [52] or a ReLU MLP with positional encoding [38], nor does it show tiling artifacts as in ACORN's [34] prediction.
• results demonstrating that our model reconstructs
maps of polarimetric quantities without the artifacts
created by current state-of-the-art approaches.
|
Qu_How_To_Prevent_the_Poor_Performance_Clients_for_Personalized_Federated_CVPR_2023 | Abstract
Personalized federated learning (pFL) collaboratively
trains personalized models, which provides a customized
model solution for individual clients in the presence of het-
erogeneous distributed local data. Although many recent
studies have applied various algorithms to enhance per-
sonalization in pFL, they mainly focus on improving the
performance from averaging or top perspective. How-
ever, part of the clients may fall into poor performance
and are not clearly discussed. Therefore, how to prevent
these poor clients should be considered critically. Intu-
itively, these poor clients may come from biased univer-
sal information shared with others. To address this issue,
we propose a novel pFL strategy, called Personalize Lo-
cally, Generalize Universally (PLGU). PLGU generalizes
the fine-grained universal information and moderates its bi-
ased performance by designing a Layer-Wised Sharpness
Aware Minimization (LWSAM) algorithm while keeping the
personalization local. Specifically, we embed our proposed
PLGU strategy into two pFL schemes concluded in this pa-
per: with/without a global model, and present the training
procedures in detail. Through in-depth study, we show that
the proposed PLGU strategy achieves competitive general-
ization bounds on both considered pFL schemes. Our exten-
sive experimental results show that all the proposed PLGU
based-algorithms achieve state-of-the-art performance.
| 1. Introduction
Federated Learning (FL) is a popular collaborative re-
search paradigm that trains an aggregated global learning
model with distributed private datasets on multiple clients
[16, 29]. This setting has achieved great accomplishments
when the local data cannot be shared due to privacy and
communication constraints [36]. However, because of the
*Corresponding author.

Figure 1. Toy example in a heterogeneous pFL on CIFAR10, which includes 100 clients and each client obtains 3 labels (curves: pFedMe and FedRep; poor clients marked).
non-IID/heterogeneous datasets, learning a single global
model to fit the “averaged distribution” makes it difficult to
provide a well-generalized solution for each individual client
and slows convergence [24]. To address this
problem, personalized federated learning (pFL) is devel-
oped to provide a customized local model solution for each
client based on its statistical features in the private train-
ing dataset [5, 9, 11, 34]. Generally, we can divide existing
pFL algorithms into two schemes: (I) with a global model
[5, 23, 25, 40] or (II) without a global model [27, 28, 37].
Though many pFL algorithms make accomplishments
by modifying the universal learning process [41, 47] or en-
hancing the personalization [3, 27, 37], they may lead part
of clients to fall into poor learning performance, where the
personalization of local clients performs a large statistical
deviation from the “averaged distribution”. To the best of
our knowledge, none of the existing studies explore how
to prevent clients from falling into poor personalized per-
formance on these two schemes. For example, the poor
medical learning models of some clients may incur seri-
ous medical malpractice. To better present our concerned
problem, we introduce a toy example in Figure 1, which
is learned by two pFL algorithms representing these two
schemes: pFedMe [40] and FedRep [3]. Though both al-
gorithms achieve high averaged local model performance
of 66.43% and 71.35%, there are also 15% of clients below
64% and 14% of clients below 69%, respectively.
This motivates us to exploit an effective strategy to prevent
clients from falling into poor performance without de-
grading others, e.g., the green curve.
Intuitively, we consider this phenomenon oftentimes
comes from the biased universal information towards the
clients with better learning performance. For scheme I,
a simple-averaged aggregation may not perfectly handle
data heterogeneity, as it generates serious bias between the
global and local clients. For scheme II, abandoning the
universal contribution may dismiss some information from
other clients. Instead of designing a new pFL algorithm,
we propose a novel pFL strategy on existing pFL stud-
ies: generalizing the universal learning for unbiased local
adaptation as well as keeping the local personalized fea-
tures, called Personalize Locally, Generalize Universally
(PLGU). The main challenge of PLGU is to generalize uni-
versal information without local feature perturbation, as the
statistical information is only stored locally. In this pa-
per, we tackle this challenge by developing a fine-grained
perturbation method called Layer-Wised Sharpness-Aware-
Minimization (LWSAM) based on the SAM optimizer [7,
33], which develops a generalized training paradigm by
leveraging linear approximation. Furthermore, we present
how to embed this PLGU strategy with the perturbed uni-
versal generalization on both the two pFL schemes.
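For reference, a bare-bones sketch of a SAM-style update with a per-layer perturbation radius is given below; it is a simplification for illustration only (the `loss_fn` signature and the per-layer radii are assumptions, and the layer importance scores used by the actual LWSAM are not reproduced here).

```python
import torch

def lwsam_step(model, loss_fn, batch, optimizer, rho_per_layer):
    """One sharpness-aware step: perturb each parameter tensor along its gradient
    with a layer-specific radius, then descend with the gradient at the perturbed point."""
    optimizer.zero_grad()
    loss_fn(model, batch).backward()          # gradient at the current weights
    eps = {}
    with torch.no_grad():
        for (name, p), rho in zip(model.named_parameters(), rho_per_layer):
            if p.grad is None:
                continue
            e = rho * p.grad / (p.grad.norm() + 1e-12)   # layer-wise ascent direction
            p.add_(e)
            eps[name] = e
    optimizer.zero_grad()
    loss_fn(model, batch).backward()          # gradient at the perturbed weights
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in eps:
                p.sub_(eps[name])             # restore original weights
    optimizer.step()
```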
For scheme I (with the global model), we propose the
PLGU-Layer Freezing (LF) algorithm. As illustrated in
[21, 31, 48], each layer in a personalized model shares a
different contribution: the shallow layers focus more on lo-
cal feature extraction (personalization), and the deeper lay-
ers are for extracting global features (universal). Specifi-
cally, the PLGU-LF first explores the personalization score
of each layer. Then, PLGU-LF freezes the important layer
locally for personalization and uses the LWSAM optimizer
with the consideration of obtained layer importance score
for universal generalization. For scheme II (without the
global model), we mainly focus on our proposed PLGU
strategy FedRep algorithm [3], named PLGU-GRep. It gen-
eralizes the universal information by smoothing personal-
ization in the representation part. To show the extensibil-
ity, we present that we can successfully extend our PLGU
strategy to pFedHN [37], called PLGU-GHN, to improve
learning performance, especially for poor clients. Further-
more, we analyze the generalization bound on PLGU-LF,
PLGU-GRep, and PLGU-GHN algorithms in-depth. Exten-
sive experimental results also show that all three algorithms
successfully prevent poor clients and outperform the aver-
age learning performance while incrementally reducing the
top-performance clients.
|
Li_SIM_Semantic-Aware_Instance_Mask_Generation_for_Box-Supervised_Instance_Segmentation_CVPR_2023 | Abstract
Weakly supervised instance segmentation using only
bounding box annotations has recently attracted much re-
search attention. Most of the current efforts leverage low-
level image features as extra supervision without explicitly
exploiting the high-level semantic information of the ob-
jects, which will become ineffective when the foreground ob-
jects have similar appearances to the background or other
objects nearby. We propose a new box-supervised instance
segmentation approach by developing a Semantic-aware In-
stance Mask (SIM) generation paradigm. Instead of heav-
ily relying on local pair-wise affinities among neighboring
pixels, we construct a group of category-wise feature cen-
troids as prototypes to identify foreground objects and as-
sign them semantic-level pseudo labels. Considering that
the semantic-aware prototypes cannot distinguish differ-
ent instances of the same semantics, we propose a self-
correction mechanism to rectify the falsely activated regions
while enhancing the correct ones. Furthermore, to handle
the occlusions between objects, we tailor the Copy-Paste
operation for the weakly-supervised instance segmentation
task to augment challenging training data. Extensive exper-
imental results demonstrate the superiority of our proposed
SIM approach over other state-of-the-art methods. The
source code: https://github.com/lslrh/SIM .
| 1. Introduction
Instance segmentation is among the fundamental tasks
of computer vision, with many applications in autonomous
driving, image editing, human-computer interaction, etc.
The performance of instance segmentation has been im-
proved significantly along with the advances in deep learn-
ing [6, 12, 34, 38]. However, training robust segmentation
networks requires a large number of data with pixel-wise
annotations, which consumes intensive human labor and
*denotes the equal contribution, †denotes the corresponding author.
This work is supported by the Hong Kong RGC RIF grant (R5001-18).
Figure 1. The pipeline of Semantic-aware Instance Mask (SIM)
generation method. (a) shows the mask prediction produced by us-
ing only low-level affinity supervision, where the foreground heav-
ily blends with background. (b) and (c) show the semantic-aware
masks obtained with our constructed prototypes, which perceive
the entity of objects but are unable to separate different instances
of the same semantics. (d) shows the final instance pseudo mask
rectified by our proposed self-correction module.
resources. To reduce the reliance on dense annotations,
weakly-supervised instance segmentation based on cheap
supervisions, such as bounding boxes [14,21,36], points [8]
and image-level labels [1,18], has recently attracted increas-
ing research attention.
In this paper, we focus on box-supervised instance seg-
mentation (BSIS), where the bounding boxes provide coarse
supervised information for pixel-wise prediction task. To
provide pixel-wise supervision, conventional methods [10,
19] usually leverage off-the-shelf proposal techniques, such
as MCG [30] and GrabCut [31], to create pseudo instance
masks. However, the training pipelines of these meth-
ods with multiple iterative steps are cumbersome. Sev-
eral recent works [14, 36] enable end-to-end training by
taking pairwise affinities among pixels as extra supervi-
sion. Though these methods have achieved promising per-
formance, they heavily depend on low-level image features,
such as color pairs [36], and simply assume that the proxi-
mal pixels with similar colors are likely to have the same
label. This leads to confusion when foreground objects
have similar appearances to the background or other ob-
jects nearby, as shown in Fig. 1 (a). It is thus error-prone
to use only low-level image cues for supervision since they
are weak to represent the inherent structure of objects.
Motivated by the fact that high-level semantic informa-
tion can reveal intrinsic properties of object instances and
hence provide effective supervision for segmentation model
training, we propose a novel Semantic-aware Instance Mask
generation method, namely SIM, to explicitly exploit the
semantic information of objects. To distinguish proximal
pixels with similar color but different semantics (please re-
fer to Fig. 1 (a)), we construct a group of representative
dataset-level prototypes, i.e., the feature centroids of differ-
ent classes, to perform foreground/background segmenta-
tion, producing semantic-aware pseudo masks (see Fig. 1
(b)). These prototypes abstracted from massive training
data can capture the structural information of objects, en-
abling more comprehensive semantic pattern understand-
ing, which is complementary to affinity supervision of pair-
wise neighboring pixels. However, as shown in Fig. 1 (c),
these prototypes are unable to separate the instances of the
same semantics, especially for overlapping objects. We
consequently develop a self-correction mechanism to rec-
tify the false positives while enhancing the confidence of
true-positive foreground objects, resulting in more precise
instance-aware pseudo masks, as shown in Fig. 1 (d).
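A toy sketch of the prototype step is given below; the feature extractor, the way prototypes are estimated and updated, and the background prototype are simplifications, so this only illustrates how class-wise centroids can turn box annotations into semantic pseudo masks.

```python
import torch
import torch.nn.functional as F

def semantic_pseudo_mask(feats, box, cls_proto, bg_proto):
    """feats: (C, H, W) pixel embeddings; box: (x1, y1, x2, y2);
    cls_proto, bg_proto: (C,) class-wise and background centroids.
    Returns a binary pseudo mask restricted to the box."""
    C, H, W = feats.shape
    f = F.normalize(feats.reshape(C, -1), dim=0)                 # (C, H*W)
    sim_fg = (F.normalize(cls_proto, dim=0)[:, None] * f).sum(0)  # cosine sim to class
    sim_bg = (F.normalize(bg_proto, dim=0)[:, None] * f).sum(0)   # cosine sim to background
    mask = (sim_fg > sim_bg).reshape(H, W)
    x1, y1, x2, y2 = box
    box_mask = torch.zeros(H, W, dtype=torch.bool)
    box_mask[y1:y2, x1:x2] = True
    return mask & box_mask            # pixels outside the box stay background

# Toy usage with random features and prototypes.
feats = torch.randn(16, 32, 32)
mask = semantic_pseudo_mask(feats, (4, 4, 20, 20), torch.randn(16), torch.randn(16))
print(mask.shape, mask.sum().item())
```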
It is worth mentioning that our generated pseudo masks
could co-evolve with the segmentation model without cum-
bersome iterative training procedures in previous meth-
ods [10, 21]. In addition, considering that the exist-
ing weakly-supervised instance segmentation methods only
provide very limited supervision for rare categories and
overlapping objects due to the lack of ground truth masks,
we propose an online weakly-supervised Copy-Paste ap-
proach to create a combinatorial number of augmented
training samples. Overall, the major contributions of this
work can be summarized as follows:
A novel BSIS framework is presented by developing
a semantic-aware instance mask generation mechanism.
Specifically, we construct a group of representative proto-
types to explore the intrinsic properties of object instances
and identify complete entities, which produces more reli-
able supervision than low-level features.
A self-correction module is designed to rectify the
semantic-aware pseudo masks to be instance-aware. The
falsely activated regions will be reduced, and the correct
ones will be boosted, enabling more stable training and
progressively improving the segmentation results.
We tailor the Copy-Paste operation for weakly-supervised
segmentation tasks in order to create more occlusion pat-
terns and more challenging training data. The overall
framework can be trained in an end-to-end manner. Ex-
tensive experiments demonstrate the superiority of our
method over other state-of-the-art methods. |
Li_Regularize_Implicit_Neural_Representation_by_Itself_CVPR_2023 | Abstract
This paper proposes a regularizer called Implicit Neural
Representation Regularizer (INRR) to improve the general-
ization ability of the Implicit Neural Representation (INR).
The INR is a fully connected network that can represent sig-
nals with details not restricted by grid resolution. However,
its generalization ability could be improved, especially with
non-uniformly sampled data. The proposed INRR is based
on learned Dirichlet Energy (DE) that measures similarities
between rows/columns of the matrix. The smoothness of the
Laplacian matrix is further integrated by parameterizing
DE with a tiny INR. INRR improves the generalization of
INR in signal representation by perfectly integrating the sig-
nal’s self-similarity with the smoothness of the Laplacian
matrix. Through well-designed numerical experiments, the
paper also reveals a series of properties derived from INRR,
including momentum methods like convergence trajectory
and multi-scale similarity. Moreover, the proposed method
could improve the performance of other signal representa-
tion methods.
| 1. Introduction
INR uses a fully connected network (FCN) $\phi_\theta(x): \mathbb{R}^d \mapsto \mathbb{R}^o$
to approximate the explicit solution of an implicit function
$F\left(x, \phi_\theta, \nabla_x \phi_\theta, \nabla^2_x \phi_\theta, \dots\right) = 0$.
For example, we can represent a gray-scale image $X \in \mathbb{R}^{m \times n}$
with an INR $\phi_\theta(x): \mathbb{R}^2 \mapsto \mathbb{R}$ which satisfies
$\phi_\theta(\tfrac{i}{m}, \tfrac{j}{n}) = X_{ij}$, $i \in \{1, \dots, m\}$, $j \in \{1, \dots, n\}$.
Compared with the traditional grid representation $X$, INR's ability to represent
details is not restricted by the grid resolution $m, n$, since INR can predict the
pixel value at any location $(x, y) \in \mathbb{R}^2$, even one not equal to
$(\tfrac{i}{m}, \tfrac{j}{n})$.
Besides the representation ability of INR, generalization
ability is critical for a neural network. We explore the em-
*This work was supported by the National Key Research and Development
Program (2020YFA0713504), the National Natural Science Foundation of
China (61977065) and the Macao Science and Technology Development
Fund (061/2020/A2).pirical generalization ability via a 256×256 gray-scale
non-uniformly sampled image inpainting task as Figure 2(a)
shows. Although INR fits training data perfectly in Fig-
ure 2(b), its prediction outside training data is unreasonable.
Theoretical analysis of INR illustrates that a hyper-parameter
controls the smoothness degree of ϕθ(x). Moreover, the ex-
periments show that the best hyper-parameter varies with
the missing rate (the percentage of unsampled pixels) as Fig-
ure 3 shows. Adjusting this hyper-parameter cannot make
the non-uniformly missing case perform best, as different
locations might have different missing rates.
A carefully designed regularizer is proposed to improve
the generalization ability of INR. It is based on Adaptive and
Implicit Regularization (AIR) which is a learned Dirichlet
Energy (DE) [12] that measures similarities or correlations
between rows/columns of X. The smoothness of the Lapla-
cian matrix is further integrated by parameterizing DE with
a tiny INR. The structure of the proposed implicit neural
representation regularizer (INRR) is shown in Figure 1(b).
Because a smooth Laplacian matrix represents non-local
prior and large-scale local prior in vision data, INRR can
improve the generalization of INR in image representation.
Numerous numerical experiments show that INRR outper-
forms various classical regularizers, including total variation
(TV), L2 energy, and so on. As a regularizer both in a new
form and with new meaning, INRR can be combined with
other signal representation methods, such as deep matrix
factorization (DMF) [1].
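A compact stand-in for the Dirichlet-energy idea is sketched below (it is not the exact INRR parameterization): a tiny network maps row coordinates to embeddings, the embeddings define an affinity matrix and its Laplacian, and the regularizer is the resulting Dirichlet energy of the reconstructed matrix.

```python
import torch
import torch.nn as nn

class RowLaplacianReg(nn.Module):
    """Learned Dirichlet energy over the rows of a matrix X (m x n):
    a tiny MLP maps each row coordinate to an embedding, pairwise similarities
    give an affinity A, and the loss is tr(X^T L X) with L = D - A."""
    def __init__(self, hidden=32, emb=8):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, emb))

    def forward(self, X):
        m = X.shape[0]
        coords = torch.linspace(0.0, 1.0, m).unsqueeze(1)   # row coordinates
        E = self.g(coords)                                   # (m, emb)
        A = torch.softmax(E @ E.t(), dim=1)                  # learned affinity
        A = 0.5 * (A + A.t())                                # symmetrize so L is a proper Laplacian
        L = torch.diag(A.sum(dim=1)) - A
        return torch.trace(X.t() @ L @ X) / X.numel()

# Toy usage: penalize a candidate reconstruction X with the learned regularizer.
reg = RowLaplacianReg()
X = torch.randn(64, 48)
print(reg(X))
```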
To summarize, the contributions of our work include the
following:
•The generalization ability of INR is analyzed theoretically
via the Neural Tangent Kernel (NTK) [1], and an explanation of
why INR performs poorly with non-uniform sampling is given.
•A tiny INR parameterized regularizer named INRR
is proposed based on DE, which perfectly integrates
the image’s self-similarity with the smoothness of the
Laplacian matrix.
•A series of properties derived from INRR, including
momentum methods, multi-scale similarity, and gener-
Figure 1. Overview of the proposed improvement schemes for INR. (a) INR is a fully connected neural network which maps a coordinate to a pixel value. (b) INRR is a regularization term represented by an INR which can capture self-similarity. (c) INR-Z improves the performance of INR by combining the neighboring pixels with the coordinate as the input of another INR.
Figure 2. Image fitting results: (a) sampling, (b) INR (18.1 dB), (c) INRR (23.3 dB). All the methods are based on SIREN fitting a 256×256 Baboon image with the sampling data in (a); (b) is trained with a vanilla SIREN while (c) is trained with the proposed INRR.
alization ability, are revealed by well-designed numeri-
cal experiments.
|
Long_CapDet_Unifying_Dense_Captioning_and_Open-World_Detection_Pretraining_CVPR_2023 | Abstract
Benefiting from large-scale vision-language pre-training
on image-text pairs, open-world detection methods have
shown superior generalization ability under the zero-shot
or few-shot detection settings. However, a pre-defined cate-
gory space is still required during the inference stage of ex-
isting methods and only the objects belonging to that space
will be predicted. To introduce a “real” open-world de-
tector, in this paper, we propose a novel method named
CapDet to either predict under a given category list or di-
rectly generate the category of predicted bounding boxes.
Specifically, we unify the open-world detection and dense
caption tasks into a single yet effective framework by in-
troducing an additional dense captioning head to gener-
ate the region-grounded captions. Besides, adding the cap-
tioning task will in turn benefit the generalization of detec-
tion performance since the captioning dataset covers more
concepts. Experiment results show that by unifying the
dense caption task, our CapDet has obtained significant
performance improvements (e.g., +2.1% mAP on LVIS rare
classes) over the baseline method on LVIS (1203 classes).
Besides, our CapDet also achieves state-of-the-art perfor-
mance on dense captioning tasks, e.g., 15.44% mAP on VG
V1.2 and 13.98% on the VG-COCO dataset.
| 1. Introduction
Most state-of-the-art object detection methods [ 33,34,
50] benefit from a large number of densely annotated detec-
tion datasets ( e.g., COCO [ 27], Object365 [ 36], LVIS [ 12]).
However, this closed-world setting results in the model only
being able to predict categories that appear in the training
set. Considering the ubiquity of new concepts in real-world
scenes, it is very challenging to locate and identify these
new visual concepts. This predictive ability of new concepts
in open-world scenarios has very important research value
*Equal contribution.
†Corresponding authors.
Figure 1. Comparison of the different model predictions under OWD, OVD, and our setting. (a) OWD methods [14,18,48] are not able to describe the detailed category of the detected unknown objects and (b) the performance of OVD methods [8,12,41] usually depends on the pre-defined category list during the inference. (c) With the unification of two pipelines of dense captioning and open-world detection pre-training, our CapDet can either predict under a given category list or directly generate the description of predicted bounding boxes.
in real-world applications such as object search [ 29,30], in-
stance registration [ 45], and human-object interaction mod-
eling [ 10].
Currently, the open world scenario mainly includes two
tasks: open world object detection [18] (OWD) and open-
vocabulary object detection [44] (OVD). Although the
paradigms of OWD and OVD tasks are closer to the real
world, the former cannot describe the specific concept of
the detected unknown objects and requires a pre-defined
category list during the inference. Specifically, as shown
in Figure 1, previous OWD methods [ 14,18,48] would rec-
ognize new concepts not in the predefined category space
as “unknown”. Further, another line of task OVD requires
the model to learn a limited base class and generalize to
novel classes. Compared to the zero-shot object detection
(ZSD) proposed by [ 32], OVD allows the model to use ex-
ternal knowledge, e.g., knowledge distillation from a large-
scale vision-language pre-trained model [ 8,12], image-
caption pairs [ 44], image classification data [ 49], grounding
data [ 25,41,46]. With the external knowledge, OVD meth-
ods show a superior generalization capacity to detect the
novel classes within a given category space. However, as
shown in Figure 1, when given an incomplete category list,
OVD can only predict the concepts that appear in the given
category list, otherwise, there will be recognition errors, (
i.e., as illustrated in Figure 1(b), the OVD methods prone
to predict the “wall socket” as “remote”, since the latter is
in the category list but not the former).
Thus, under the OVD setting, we mainly face the following two challenges: (i) it is difficult to define a complete list of categories; and (ii) low response values on rare categories often lead to recognition errors. This is because we cannot exhaustively enumerate new objects in the real world, and because it is difficult to collect enough samples for rare classes. However, rare objects in the real world, and even objects unknown to humans such as UFOs, do not prevent people from describing them in natural language, e.g., as "a flying vehicle that looks like a Frisbee".
Therefore, based on the above observations, in this paper we consider a new setting that is closer to the open world and real scenes: we expect the model both to detect and recognize concepts in a given category list, and to generate corresponding natural language descriptions for new concepts or rare categories of objects. Early dense captioning methods [9, 17] can locate salient regions in images and generate region-grounded captions in natural language. Inspired by this, to address the challenges faced in the OVD setting, we propose to unify the two pipelines of dense captioning and open-world detection pre-training into one training framework, called CapDet. By unifying the two training tasks, it empowers the model both to accurately detect and recognize common object categories and to generate dense captions for unknown and rare categories.
Specifically, our CapDet constructs a unified data format for the dense captioning data and the detection data. With this data unification, CapDet further adopts a unified pre-training paradigm that includes open-world object detection and dense captioning pre-training. For open-world detection pre-training, we treat the detection task as a semantic alignment task and adopt a dual-encoder structure as in [41] to locate and predict the given concept list. The concept list contains category names for detection data and region-grounded captions for dense captioning data. For dense captioning pre-training, CapDet proposes a dense captioning head that takes the predicted proposals as input and generates region-grounded captions. Due to the rich visual concepts in the dense captioning data, the integration of the dense captioning task in turn benefits the generalization of detection performance.
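To make this unified formulation more concrete, the sketch below shows one possible shared sample format and training step: detection samples carry category names, dense-captioning samples carry region-grounded captions, and both drive the same region-to-text alignment objective, with an additional caption-generation loss for captioning data. This is our own simplification, not the authors' released implementation; all names (`UnifiedSample`, `visual_encoder`, `text_encoder`, `caption_head`) and the specific loss choices are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List
import torch
import torch.nn.functional as F

@dataclass
class UnifiedSample:
    image: torch.Tensor     # (3, H, W) input image
    boxes: torch.Tensor     # (N, 4) annotated regions
    texts: List[str]        # category names (detection) or region captions (dense captioning)
    labels: torch.Tensor    # (N,) index of each region's matching entry in `texts`
    is_caption_data: bool   # True for dense-captioning samples

def training_step(sample: UnifiedSample,
                  visual_encoder: Callable,
                  text_encoder: Callable,
                  caption_head: Callable) -> torch.Tensor:
    # Region features from the visual encoder, one per annotated box.
    region_feats = visual_encoder(sample.image, sample.boxes)   # (N, D)
    # The concept list: category names for detection data,
    # region-grounded captions for dense-captioning data.
    concept_embeds = text_encoder(sample.texts)                 # (C, D)
    # Open-world detection pre-training as region-to-text semantic alignment.
    logits = F.normalize(region_feats, dim=-1) @ F.normalize(concept_embeds, dim=-1).T
    loss = F.cross_entropy(logits, sample.labels)
    if sample.is_caption_data:
        # Dense-captioning pre-training: the captioning head generates
        # region-grounded captions from the region features (LM loss assumed).
        loss = loss + caption_head(region_feats, sample.texts)
    return loss
```

The design point this sketch tries to capture is that both data sources feed the same alignment objective, while only the captioning samples additionally supervise the generation head.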
Our experiments show that integrating a small amount of dense captioning data brings a large improvement on the object detection dataset LVIS, e.g., +2.7% mAP. The unification of dense captioning and detection pre-training gains an additional 2.3% increment on LVIS and a 2.1% increment on LVIS rare classes. Besides, our model also achieves state-of-the-art performance on dense captioning tasks. Note that our method is the first to unify dense captioning and open-world detection pre-training.
To summarize, our contributions are three-fold:
• We propose a novel open-vocabulary object detection framework, CapDet, which can not only detect and recognize concepts in a given category list but also generate corresponding natural language descriptions for new-concept objects.
• We propose to unify the two pipelines of dense captioning and open-world detection pre-training into one training framework. The two pre-training tasks are beneficial to each other.
• Experiments show that by unifying the dense captioning and detection tasks, our CapDet gains significant performance improvements on the open-vocabulary object detection task (e.g., +3.3% mAP on LVIS rare classes). Furthermore, our CapDet also achieves state-of-the-art performance on dense captioning tasks, e.g., 15.44% mAP on Visual Genome (VG) V1.2 and 13.98% mAP on VG-COCO.
|
Narayan_DF-Platter_Multi-Face_Heterogeneous_Deepfake_Dataset_CVPR_2023 | Abstract
Deepfake detection is gaining significant importance in the research community. While most research efforts are focused on high-quality images and videos with a controlled appearance of individuals, deepfake generation algorithms can now produce deepfakes with low resolution, occlusion, and manipulation of multiple subjects. In this research, we emulate the real-world scenario of deepfake generation and propose the DF-Platter dataset, which contains (i) both low-resolution and high-resolution deepfakes generated using multiple generation techniques and (ii) single-subject and multiple-subject deepfakes, with face images of Indian ethnicity. Faces in the dataset are annotated for various attributes such as gender, age, skin tone, and occlusion. The dataset was prepared over 116 days with continuous usage of 32 GPUs, amounting to 1,800 GB of cumulative memory. With over 500 GB in size, the dataset contains a total of 133,260 videos encompassing three sets. To the best of our knowledge, this is one of the largest datasets containing vast variability and multiple challenges. We also provide benchmark results under multiple evaluation settings using popular and state-of-the-art deepfake detection models, for c0 images and videos along with the c23 and c40 compression variants. The results demonstrate a significant performance reduction in the deepfake detection task on low-resolution deepfakes. Furthermore, existing techniques yield reduced detection accuracy on multiple-subject deepfakes. It is our assertion that this database will improve the state of the art by extending the capabilities of deepfake detection algorithms to real-world scenarios. The database is available at: http://iab-rubric.org/df-platter-database.
| 1. Introduction
With the advent of diverse deep learning architectures, significant breakthroughs have been made in the field of image/video forgery. This has led to an incredible rise in the
*Equal contribution by student authors.
Figure 1. Samples showcasing multi-face deepfakes circulated on social media. (a) A Zoom call with a deepfake of Elon Musk [8]. (b) Real-time deepfake generation at America's Got Talent [9]. (c) Deepfake round-table with multiple deepfake subjects [33].
amount of fake multimedia content being generated, owing to increased accessibility and lower training requirements. Not only has the amount of such media risen, but the sophistication of such content has also improved drastically, making it indistinguishable from real videos. While most deepfakes are used for entertainment purposes, such as parody films and filters in apps, they can also be used to illicitly defame someone, spread misinformation or propaganda, or conduct fraud. In the 2020 Delhi state elections in India, a deepfake video of a popular political figure was created [34], and according to some estimates, the deepfake was disseminated to about 15 million people in the state [13]. Given the abuse of deepfakes and their potential impact, the need for better and more robust deepfake detection methods is unavoidable.
Designing a dependable deepfake detection system requires the availability of comprehensive deepfake datasets for training. Table 1 summarizes the key characteristics of the publicly available deepfake datasets.
Table 1. Quantitative comparison of DF-Platter with existing Deepfake datasets.

| Dataset | Real Videos | Fake Videos | Total Videos | Total Subjects | Real Source | Multiple faces per image/video | Face Occlusion | Generation Techniques | Low Resolution¹ | Annotations² |
|---|---|---|---|---|---|---|---|---|---|---|
| FF++ [30] | 1,000 | 4,000 | 5,000 | N/A | YouTube | ✗ | ✗ | 4 | ✗ | ✗ |
| Celeb-DF [21] | 590 | 5,639 | 6,229 | 59 | YouTube | ✗ | ✗ | 1 | ✗ | ✗ |
| UADFV [36] | 49 | 49 | 98 | 49 | YouTube | ✗ | ✗ | 1 | ✗ | ✗ |
| DFDC [6] | 23,654 | 104,500 | 128,154 | 960 | Self-Recording | ✗ | ✗ | 8 | ✗ | ✗ |
| DeepfakeTIMIT [15] | 640 | 320 | 960 | 32 | VidTIMIT | ✗ | ✗ | 2 | ✓ | ✗ |
| DF-W [29] | N/A | 1,869 | 1,869 | N/A | YouTube & Bilibili | ✗ | ✗ | 4 | ✗ | ✗ |
| KoDF [16] | 62,166 | 175,776 | 237,942 | 403 | Self-Recording | ✗ | ✗ | 6 | ✗ | ✗ |
| WildDeepfake [39] | 707 | 707 | 1,414 | N/A | Internet | ✗ | ✗ | N/A | ✗ | ✗ |
| OpenForensics [17] | 45,473* | 70,325* | 115,325* | N/A | Google Open Images | ✓ | ✓ | 1 | ✗ | ✗ |
| DeePhy [26] | 100 | 5,040 | 5,140 | N/A | YouTube | ✗ | ✓ | 3 | ✗ | ✓ |
| DF-Platter (ours) | 764 | 132,496 | 133,260 | 454 | YouTube | ✓ | ✓ | 3 | ✓ | ✓ |

¹ Low resolution means the dataset contains low-resolution deepfakes generated using low-resolution videos and not by down-sampling.
² The dataset provides annotations such as skin tone, facial attributes, and face occlusion.
* The number of images is reported since the dataset contains only images.
Most of the datasets contain high-resolution images with s