title | abstract | introduction
---|---|---|
Wang_ProTeGe_Untrimmed_Pretraining_for_Video_Temporal_Grounding_by_Video_Temporal_CVPR_2023 | Abstract
Video temporal grounding (VTG) is the task of localiz-
ing a given natural language text query in an arbitrarily
long untrimmed video. While the task involves untrimmed
videos, all existing VTG methods leverage features from
video backbones pretrained on trimmed videos. This is
largely due to the lack of large-scale well-annotated VTG
datasets to perform pretraining. As a result, the pretrained
features lack a notion of temporal boundaries leading to
the video-text alignment being less distinguishable between
correct and incorrect locations. We present ProTéGé as
the first method to perform VTG-based untrimmed pretrain-
ing to bridge the gap between trimmed pretrained back-
bones and downstream VTG tasks. ProTéGé reconfigures
the HowTo100M dataset, with noisily correlated video-text
pairs, into a VTG dataset and introduces a novel Video-Text
Similarity-based Grounding Module and a pretraining ob-
jective to make pretraining robust to noise in HowTo100M.
Extensive experiments on multiple datasets across down-
stream tasks with all variations of supervision validate that
pretrained features from ProTéGé can significantly outper-
form features from trimmed pretrained backbones on VTG.
| 1. Introduction
Video temporal grounding (VTG) is the video-language
multimodal task of localizing which part of an arbitrarily
long untrimmed video can be best associated with a given
natural language text query. VTG has a wide range of ap-
plications, such as information retrieval and robotics. Fig-
ure 1 shows a sample video-text pair for the VTG task and
illustrates the primary challenge in grounding an uncon-
strained natural language text query in a long untrimmed
video, namely, the need for a fine-grained understanding of
the spatio-temporal dynamics in the video.
⋆Authors with equal contribution.
This work was done as Lan Wang’s internship project at Microsoft.
[Figure 1 graphics: ground-truth and predicted temporal segments for queries Q1 "A person kneeling on the floor talks on a phone." and Q2 "The person pours something into a glass.", under trimmed and untrimmed video-text pretraining, plus cosine-similarity plots of downstream untrimmed video features with Q2.]
Figure 1. (a) Comparison between video-text trimmed and
untrimmed pretraining on grounding text Q1 and Q2 in an
untrimmed video. Untrimmed video-text pretraining shows
stronger grounding capability. (b) and (c) show box plots of co-
sine similarity after joint video-text pretraining on trimmed and
untrimmed videos respectively, between video features aligning
with text (blue) and not aligning with text (red). We observe that
compared to using trimmed videos (b), cosine similarities of video
features aligning with text are higher and farther apart from that
of video features not aligning with text when using untrimmed
videos (c), thus illustrating the impact of untrimmed pretraining.
While there are multiple approaches for VTG, all exist-
ing methods, to the best of our knowledge, rely on video
backbones pretrained on trimmed videos (such as Kinetics
[15]) to obtain the visual features as part of their respective
approaches. Such a design choice introduces a disconnect
between the downstream VTG task on untrimmed videos
and the trimmed videos used for pretraining the model from
which video features are derived. For example, Fig 1 shows
that the grounding predictions (in orange), when using a
backbone jointly pretrained on trimmed videos and text,
do not match adequately with the ground truth. Due to
pretraining on trimmed videos, the video backbone is in-
sensitive to temporal boundaries since the training objec-
tive is to associate an entire trimmed video to a label/text
query [43, 44]. The backbone, therefore, does not have an
explicit ability to localize, i.e., associate the given query to
only the most relevant part of the long untrimmed video. As
a result, the cosine similarities between video features align-
ing and not aligning with the text query are indistinguish-
able, as shown in Fig 1b.
Inspired by the advantage shown in other tasks where
pretraining and downstream setup match [3, 7, 23, 26], we
hypothesize that formulating the pretraining itself as a VTG
task on untrimmed videos can improve downstream ground-
ing performance. The untrimmed pretraining will equip the
model with a more accurate and fine-grained understand-
ing of temporal boundaries within a given untrimmed video
(as evidenced by the more precise predictions in blue in
Fig. 1). We introduce ProTéGé, Untrimmed Pretraining for
Video Temporal Grounding by Video Temporal Ground-
ing. ProTéGé is the first approach to formulate pretraining
as a VTG task to bridge the gap between video backbones
pretrained on trimmed videos and downstream VTG tasks
working with untrimmed videos.
A critical challenge impeding this untrimmed pretrain-
ing is the scarcity of large-scale well-annotated video
grounding datasets. There are, however, datasets such as
HowTo100M [30] and Youtube-8M [1], with over a million
untrimmed videos and corresponding subtitled text gener-
ated via automated speech-to-text APIs. One can poten-
tially employ them for untrimmed pretraining as a VTG
task. However, as noted by prior methods [12,29,41], since
the text is derived from subtitles, the video regions are only
noisily-correlated with the subtitled text, rendering the util-
ity of these video-text pairs for grounding a non-trivial task.
To overcome the aforementioned challenges in leverag-
ing large-scale untrimmed video datasets, we first propose a
novel approach to transform them into VTG datasets. Then
we introduce a novel video-text similarity grounding mod-
ule along with an optimization objective that allows the pre-
training to be robust to the noisy video-text correlations
present in these datasets.
In this work, we use ProTéGé with HowTo100M in par-
ticular. To transform HowTo100M into a VTG dataset,
ProTéGé introduces aggregated subtitles to concatenate one
or more subtitles to form the text query and randomly sam-
ples an untrimmed video segment around the query. Ag-
gregated subtitles allow ProTéGé to incorporate arbitrarily
long text queries larger than the average 4s duration of a sin-
gle subtitle. This way, we can synthesize millions of video-
text grounding pairs for VTG pretraining. Using these pairs,
ProTéGé performs pretraining with our novel Video-Text
Similarity-based Grounding Module (VT-SGM). VT-SGM
creates a 2D-proposal grid by computing the cosine similar-
ity between the text query and the different temporal regions
of the untrimmed video. It then learns to maximize the sim-
ilarity between the query and the part that is most relevant to it. This is achieved via our novel pretraining objective
that incorporates a distance-based localization loss which
uses the noisy ground truth and a combination of inter-video
and intra-video alignment losses. This allows the objective
to balance the training via the noisy ground truth and mul-
timodal video-text representation learning. We show that
ProTéGé is very effective for VTG as a downstream task. It
significantly outperforms backbones pretrained on trimmed
videos on standard datasets across all variations of supervi-
sion. We summarize our contributions as follows:
1. We propose ProTéGé, the first pretraining method for-
mulated as a video temporal grounding task to bridge
the gap between pretraining and downstream video
temporal grounding tasks in untrimmed videos.
2. We propose a novel algorithm including aggregated
subtitles , a Video-Text Similarity-based Grounding
Module, and a pretraining objective to leverage the large-
scale untrimmed video dataset HowTo100M with
noisy video-text pairs.
3. Extensive experiments on standard datasets across
multiple downstream tasks with different levels of su-
pervision validate that our approach significantly im-
proves the performance across all benchmarks.
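To make the data pipeline concrete, the sketch below illustrates, under our own assumptions, how noisy subtitle streams could be turned into VTG-style training pairs (aggregated subtitles with a randomly sampled surrounding segment) and how a VT-SGM-like 2D proposal grid of query-to-span cosine similarities might be computed. The function names, encoders, context margins, and mean-pooled span features are placeholders for illustration, not the paper's actual implementation.

```python
import random
import torch
import torch.nn.functional as F

def aggregate_subtitles(subtitles, max_concat=3, context=30.0):
    """Build one noisy VTG pair from a list of (start, end, text) subtitles.

    Hypothetical re-implementation: concatenate a few consecutive subtitles
    into the query and sample an untrimmed segment around them.
    Assumes len(subtitles) >= max_concat.
    """
    k = random.randint(1, max_concat)
    i = random.randrange(0, len(subtitles) - k + 1)
    chunk = subtitles[i:i + k]
    query = " ".join(t for _, _, t in chunk)
    gt_start, gt_end = chunk[0][0], chunk[-1][1]          # noisy ground-truth span
    seg_start = max(0.0, gt_start - random.uniform(0.0, context))
    seg_end = gt_end + random.uniform(0.0, context)
    return query, (seg_start, seg_end), (gt_start, gt_end)

def similarity_grid(clip_feats, text_feat):
    """2D proposal grid: entry (i, j) scores the span from clip i to clip j (i <= j).

    clip_feats: (T, D) per-clip video features; text_feat: (D,) query feature.
    """
    T, _ = clip_feats.shape
    csum = torch.cumsum(clip_feats, dim=0)
    grid = torch.full((T, T), float("-inf"))
    for i in range(T):
        for j in range(i, T):
            span = (csum[j] - (csum[i - 1] if i > 0 else 0)) / (j - i + 1)  # mean-pooled span feature
            grid[i, j] = F.cosine_similarity(span, text_feat, dim=0)
    return grid  # training would push the cell covering the (noisy) ground truth to be largest
```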
|
Wang_Deep_Hashing_With_Minimal-Distance-Separated_Hash_Centers_CVPR_2023 | Abstract
Deep hashing is an appealing approach for large-scale
image retrieval. Most existing supervised deep hashing
methods learn hash functions using pairwise or triple image
similarities in randomly sampled mini-batches. They suffer
from low training efficiency, insufficient coverage of data
distribution, and pair imbalance problems. Recently, cen-
tral similarity quantization (CSQ) attacks the above prob-
lems by using “hash centers” as a global similarity metric,
which encourages the hash codes of similar images to ap-
proach their common hash center and distance themselves
from other hash centers. Although achieving SOTA retrieval
performance, CSQ falls short of a worst-case guarantee on
the minimal distance between its constructed hash centers,
i.e. the hash centers can be arbitrarily close. This pa-
per presents an optimization method that finds hash cen-
ters with a constraint on the minimal distance between any
pair of hash centers, which is non-trivial due to the non-
convex nature of the problem. More importantly, we adopt
the Gilbert-Varshamov bound from coding theory, which
helps us to obtain a large minimal distance while ensuring
the empirical feasibility of our optimization approach. With
these clearly-separated hash centers, each assigned to
one image class, we propose several effective loss functions
to train deep hashing networks. Extensive experiments on
three datasets for image retrieval demonstrate that the pro-
posed method achieves superior retrieval performance over
the state-of-the-art deep hashing methods.
| 1. Introduction
Hashing methods are widely-used in large-scale image
retrieval due to their excellent efficiency in both storage
and retrieval. Recently, much effort has been devoted to
deep-learning-based hashing ( deep hashing ) methods for
image retrieval. They use deep neural networks to learn
hash functions that encode similar/dissimilar images to
nearby/faraway binary codes, respectively. Most of the ex-
*Corresponding Author
isting deep hashing methods train models on pairwise/triple
similarities among training samples in randomly sampled
mini-batches (e.g., [1, 10, 18, 22, 24]). Very recently, Yuan
et al. [26] pointed out that these methods lead to restricted
performance due to three problems: low efficiency in obtaining
global similarity of the dataset, incomplete coverage of data
distribution that harms the discriminability of the generated
hash codes, and ineffectiveness on imbalanced amount of
similar/dissimilar data pairs. They then proposed central
similarity, which finds mutually separated hash centers for
each class of similar images, and uses these centers to en-
sure small distances between the hash codes of similar im-
ages and large distances between those of dissimilar ones.
For deep hashing methods that use hash centers, it is cru-
cial to construct well-separated hash centers, i.e. the Ham-
ming distance between two hash centers should be signifi-
cantly larger than the Hamming distance between the hash
codes of two similar images, which makes it challenging
to generalize to various lengths of hash codes and different
numbers of image classes. For instance, CSQ [26] adopts
Hadamard matrix and Bernoulli sampling to produce hash
centers with nice properties that any two centers’ Hamming
distance is on average half of the hash code length. How-
ever, the distance between pairs of hash centers constructed
in this way can be arbitrarily small in the worst case, i.e. zero in Hamming
distance (see Table 4). These degenerated hash centers are
expected to harm the retrieval performance.
To address this issue, we propose a novel deep hashing
method that uses an optimization procedure to produce hash
centers, with an additional constraint on a given minimal
distance d between any pair of hash centers. The value of d
is derived using the Gilbert-Varshamov bound [20] adopted
from coding theory, which helps us to find a large d while
ensuring the feasibility of our optimization procedure.
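As a rough illustration of how such a minimal distance could be chosen, the snippet below applies the Gilbert-Varshamov counting bound, A(n, d) >= 2^n / sum_{j=0}^{d-1} C(n, j), to pick the largest d for which at least num_classes binary codewords of length n are guaranteed to exist. This is only a sketch of the counting argument, not the paper's exact procedure for deriving d.

```python
from math import comb

def gv_minimal_distance(code_length: int, num_classes: int) -> int:
    """Largest d such that the Gilbert-Varshamov bound still guarantees
    at least `num_classes` binary codewords of length `code_length`
    with pairwise Hamming distance >= d."""
    best_d = 1
    for d in range(1, code_length + 1):
        ball = sum(comb(code_length, j) for j in range(d))   # size of a Hamming ball of radius d-1
        if (2 ** code_length) // ball >= num_classes:
            best_d = d
        else:
            break
    return best_d

# e.g. 64-bit codes for 100 classes
print(gv_minimal_distance(64, 100))
```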
As shown in Fig.1, the proposed method employs a two-
stage pipeline. In Stage 1, we tackle the optimization prob-
lem stated above to produce clearly-separated hash centers.
To solve this optimization problem, we propose an alternat-
ing optimization procedure that relies on the ℓp-box binary
optimization technique [23]. In Stage 2, we train a deep
hashing network by using the constructed hash centers as a
Figure 1. The proposed method comprises a two-stage pipeline. Stage 1 (left) employs an optimization procedure to produce hash centers
constrained by a minimal Hamming distance d between any pair of hash centers, each assigned to one image class. d is given by the
Gilbert-Varshamov bound that guarantees the optimization's feasibility. Stage 2 (right) employs a deep hashing network with three loss
functions. The first one brings the hash code of an image close to its corresponding hash center while keeping it distant from the other
centers. The second one draws similar data points within the same class even closer. The last one is to minimize quantization errors.
global similarity metric. Specifically, loss functions are de-
fined to ensure that (1) an input image's hash code is close to
its class’s hash center but is distanced from other centers, (2)
the hash codes of images in the same class should be close
to each other, and (3) quantization errors are minimized.
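A minimal sketch of these three training terms, assuming relaxed (tanh) codes and cosine-style comparisons; the margin-free cross-entropy form, the temperature, and the loss weights are illustrative choices rather than the formulations and values used in the paper.

```python
import torch
import torch.nn.functional as F

def hashing_losses(codes, labels, centers, w_intra=0.1, w_quant=0.01, temp=0.2):
    """codes: (B, L) real-valued network outputs (before binarization),
    labels: (B,) class ids, centers: (C, L) hash centers in {-1, +1}."""
    relaxed = torch.tanh(codes)                                   # relaxed binary codes in (-1, 1)

    # (1) pull each code toward its own center, push it away from the others
    logits = relaxed @ centers.t() / (temp * centers.size(1))
    center_loss = F.cross_entropy(logits, labels)

    # (2) draw codes of the same class closer to each other
    sim = F.normalize(relaxed) @ F.normalize(relaxed).t()
    same = (labels[:, None] == labels[None, :]).float()
    intra_loss = ((1.0 - sim) * same).sum() / same.sum().clamp(min=1)

    # (3) quantization error between relaxed and binary codes
    quant_loss = (relaxed - relaxed.sign()).pow(2).mean()

    return center_loss + w_intra * intra_loss + w_quant * quant_loss
```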
The proposed method is assessed on three datasets for
image retrieval. The results indicate that the obtained hash
centers are always separated by the minimal distance we
derived, and the proposed method outperforms the state-of-
the-art deep hashing methods.
|
Wang_MeMaHand_Exploiting_Mesh-Mano_Interaction_for_Single_Image_Two-Hand_Reconstruction_CVPR_2023 | Abstract
Existing methods proposed for hand reconstruction tasks
usually parameterize a generic 3D hand model or predict
hand mesh positions directly. The parametric representa-
tions consisting of hand shapes and rotational poses are
more stable, while the non-parametric methods can predict
more accurate mesh positions. In this paper, we propose to
reconstruct meshes and estimate MANO parameters of two
hands from a single RGB image simultaneously to utilize
the merits of two kinds of hand representations. To fulfill
this target, we propose novel Mesh-Mano interaction blocks
(MMIBs), which take mesh vertices positions and MANO
parameters as two kinds of query tokens. MMIB consists
of one graph residual block to aggregate local information
and two transformer encoders to model long-range depen-
dencies. The transformer encoders are equipped with differ-
ent asymmetric attention masks to model the intra-hand and
inter-hand attention, respectively. Moreover, we introduce
the mesh alignment refinement module to further enhance
the mesh-image alignment. Extensive experiments on the
InterHand2.6M benchmark demonstrate promising results
over the state-of-the-art hand reconstruction methods.
| 1. Introduction
Vision-based 3D hand analysis plays an important role
in many applications such as virtual reality (VR) and aug-
mented reality (AR). Two-hand reconstruction from a sin-
gle RGB image is more challenging due to complex mutual
interactions and occlusions. Besides, the skin appearance
similarity makes it difficult for the network to align image
features to the corresponding hand.
Previous hand reconstruction works can be divided into
two categories, parametric methods [3, 7, 32, 33] and non-
parametric methods [4–6, 10, 18–20]. Parametric meth-
ods typically learn to regress pose and shape parameters
of the MANO model [23], where pose represents joint rota-
*Equal contribution. †Corresponding author.
[Figure 1 panels: Input Image, GT, IntagHand, Ours]
Figure 1. Comparison with the state-of-the-art method IntagHand
[16] for single-image two-hand reconstruction. The integration of
parametric and non-parametric hand representations allows us to
achieve better performance in hard cases such as severe occlusions
and challenging viewpoints.
tions in axis-angle representation and shape represents the
coefficients of shape PCA bases. The MANO prior can
yield plausible hand shapes from a single monocular image.
However, they cannot produce fine-grained hand meshes
due to their limited capacity.
With the rapid progress of graph convolutional network
(GCN) and transformer techniques [10, 16, 18, 19], it is ob-
served that direct mesh reconstruction can achieve state-
of-the-art performance towards the Euclidean distances be-
tween the ground truth vertices and the predicted vertices.
Nonetheless, the non-parametric methods are less robust in
handling challenging viewpoints or severe occlusions.
In this paper, we introduce a novel single-image two-
hand reconstruction method designed to predict mesh ver-
tices positions and estimate MANO parameters simulta-
neously to utilize the merits of two kinds of hand repre-
sentations. The proposed Mesh-Mano interaction Hand
reconstruction architecture (MeMaHand) consists of three
modules: 1) the image encoder-decoder module, 2) the
mesh-mano interaction module, and 3) the mesh align-
ment refinement module. To extract contextually meaning-
ful image features, we pre-train a classical image encoder-
decoder network on auxiliary tasks including hand seg-
mentation, hand 2D joints and dense mapping encodings.
The low-resolution features encode more global knowledge,
while the high-resolution features contain more local de-
tails. Secondly, the mesh-mano interaction module stacks
three mesh-mano interaction blocks (MMIBs) to transform
the mesh vertices and MANO parameters queries initialized
by the global image feature vector. We observe that the
hand prior embedded in the MANO parameters is valuable
for predicting stable hand meshes in challenging situations
such as severe occlusions. MMIB consists of one graph
residual block to aggregate local information and two trans-
former encoders to model long-range dependencies. The
transformer encoders are equipped with different asymmet-
ric attention masks to model the intra-hand and inter-hand
attention, respectively. Each MMIB is followed by an up-
sampling operation to upsample the mesh vertices tokens
in a coarse-to-fine manner. Finally, the mesh alignment re-
finement module utilizes one MMIB to predict offsets for
mesh vertices and MANO parameters to enhance mesh-
image alignment. To improve the reliability of image ev-
idence, we project mesh vertices predicted by the Mesh-
Mano interaction module onto the 2D image plane. The ex-
plicit mesh-aligned image features are concatenated to the
transformer input tokens.
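The sketch below shows one way the intra-hand and inter-hand attention masks could be laid out over the concatenated token sequence (left-hand mesh/MANO tokens followed by right-hand mesh/MANO tokens). The token ordering and the specific cross-hand links opened here are our own assumptions; the paper describes its masks as asymmetric, whereas this simplified variant is symmetric and only illustrates the intra-/inter-hand split.

```python
import torch

def build_attention_masks(n_mesh: int, n_mano: int):
    """Boolean masks (True = attention allowed) over the token sequence
    [left mesh | left MANO | right mesh | right MANO]."""
    per_hand = n_mesh + n_mano
    total = 2 * per_hand
    left = torch.zeros(total, dtype=torch.bool)
    left[:per_hand] = True
    right = ~left

    # intra-hand: tokens only attend to tokens of the same hand
    intra = (left[:, None] & left[None, :]) | (right[:, None] & right[None, :])

    # inter-hand: tokens only attend to tokens of the other hand
    inter = ~intra
    return intra, inter

# 778 MANO mesh vertices per hand and 2 parameter tokens are assumed values
intra, inter = build_attention_masks(n_mesh=778, n_mano=2)
# e.g. passed as `attn_mask=~intra` (True = masked) to torch.nn.MultiheadAttention
```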
The whole network, including the pre-trained image fea-
ture encoder-decoder, is jointly optimized such that the im-
age features better adapt to our hand mesh reconstruction
task. Benefiting from the mesh-mano interaction mecha-
nism and mesh alignment refinement stage, extensive ex-
periments demonstrate that our method outperforms exist-
ing both parametric and non-parametric methods on Inter-
Hand2.6M [22] dataset. In summary, the contributions of
our approach are as follows:
• We propose MeMaHand to integrate the merits of para-
metric and non-parametric hand representation. The
mesh vertices and MANO parameters are mutually re-
inforced to achieve better performance in mesh recov-
ery and parameter regression.
• A mesh-image alignment feedback loop is utilized to
improve the reliability of image evidence. Therefore,
more accurate predictions are obtained by rectifying
the mesh-image misalignment.
• Our method achieves superior performance on the
InterHand2.6M dataset, compared with both non-
parametric and parametric methods.
|
Wang_Learning_Conditional_Attributes_for_Compositional_Zero-Shot_Learning_CVPR_2023 | Abstract
Compositional Zero-Shot Learning (CZSL) aims to train
models to recognize novel compositional concepts based
on learned concepts such as attribute-object combinations.
One of the challenges is to model attributes interacted with
different objects, e.g., the attribute “wet” in “wet apple”
and “wet cat” is different. As a solution, we provide anal-
ysis and argue that attributes are conditioned on the recog-
nized object and input image and explore learning condi-
tional attribute embeddings by a proposed attribute learn-
ing framework containing an attribute hyper learner and
an attribute base learner. By encoding conditional at-
tributes, our model enables to generate flexible attribute
embeddings for generalization from seen to unseen compo-
sitions. Experiments on CZSL benchmarks, including the
more challenging C-GQA dataset, demonstrate better per-
formances compared with other state-of-the-art approaches
and validate the importance of learning conditional at-
tributes. Code‡ is available at https://github.com/
wqshmzh/CANet-CZSL.
| 1. Introduction
Deep machine learning algorithms today can learn
knowledge of concepts to recognize patterns. Can a ma-
chine compose different learned concepts to generalize to
new compositions? Compositional generalization is one of
the hallmarks of human intelligence [3, 18]. To make the
models equipped with this ability, Compositional Zero-Shot
Learning (CZSL) [25] is proposed, where the models are
trained to recognize images of unseen compositions com-
posed of seen concepts. In this work, we concentrate on the
situation where each composition is composed by attribute
(e.g., wet) and object (e.g., apple ). For example, given im-
*E-mail: wqshmzh@mail.nwpu.edu.cn
†Corresponding author. E-mail: peng.wang@nwpu.edu.cn
‡Gitee: https://gitee.com/wqshmzh/canet-czsl
Figure 1. The diagram of our work. We aim to learn conditional
attributes conditioned on the recognized object and input image
through an attribute learning framework containing an attribute hy-
per learner and an attribute base learner. We first recognize the ob-
ject in the input image. Then, we feed prior knowledge extracted
from the conditions, which are recognized object word embedding
and input image visual embedding, to the attribute hyper learner.
Finally, conditional attribute embeddings are produced by the at-
tribute base learner parameterized by the attribute hyper learner.
ages of wet apple and dry cat, a well-trained model can rec-
ognize images of new compositions dry apple and wet cat.
Compositional Zero-Shot Learning of attribute-object
compositions requires modeling attributes, objects, and the
contextuality between them. Learning to model objects in
CZSL is similar to conventional supervised object classi-
fication task since the model has access to all objects in
CZSL task [33]. Learning to model contextuality between
attribute and object is mostly addressed in the literature
[23,25,26,31,39–41]. One of the main challenges of CZSL
is the appearance diversity of an attribute when composed
with different objects, e.g., the attribute wet in wet apple and
wet cat is different. This reveals that the information of each
attribute is dependent on different objects. However, most
recent works in CZSL [4, 27, 32, 33, 42, 45] extract attribute
representations irrelevant to the object from seen composi-
tions to infer the unseen compositions. These approaches
neglect the nature of attribute diversity and learn concrete
attribute representation agnostic to different objects.
In this paper, we learn conditional attributes rather than
learning concrete ones in a proposed Conditional Attribute
Network (CANet). We first conduct analysis to determine
the exact conditions by considering the recognition of at-
tribute and object as computing a classification probability
of attribute and object conditioned on the input image. By
decomposing this probability, we demonstrate that the prob-
ability of the input image belonging to an attribute is condi-
tioned on the recognized object and the input image.
We present an attribute learning framework to learn con-
ditional attribute embeddings conditioned on the above two
conditions. The framework contains an attribute hyper
learner and an attribute base learner, which are sketched
in Fig. 1. The attribute hyper learner learns from prior
knowledge extracted from the conditions. The attribute base
learner is parameterized by the attribute hyper learner and
is designed to encode all attribute word embeddings into
conditional attribute embeddings. With the attribute learn-
ing framework, the attribute embeddings are changed along
with the recognized object and input image. Finally, the at-
tribute matching is processed in an attribute space where the
input image embedding is projected. The attribute classifi-
cation logits are computed by cosine similarities between
the projected input image embedding and all conditional
attribute embeddings. Additionally, we model the contex-
tuality between attribute and object as composing attribute
and object word embeddings. We use cosine similarities
between the projected input image embedding and all com-
posed attribute-object embeddings to get the classification
logits.
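As an illustration of the hyper-learner/base-learner split described above, the sketch below lets a small hypernetwork produce the weights of a one-layer attribute base learner from the two conditions (recognized-object word embedding and image embedding), then classifies attributes by cosine similarity in the attribute space. The dimensions, the single-layer base learner, and the module names are assumptions for the sketch, not the exact CANet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalAttributeHead(nn.Module):
    def __init__(self, word_dim=300, img_dim=512, attr_dim=300):
        super().__init__()
        self.word_dim, self.attr_dim = word_dim, attr_dim
        # attribute hyper learner: (object embedding, image embedding) -> base-learner weights
        self.hyper = nn.Sequential(
            nn.Linear(word_dim + img_dim, 512), nn.ReLU(),
            nn.Linear(512, word_dim * attr_dim + attr_dim))
        # projects the image into the attribute space for cosine matching
        self.img_proj = nn.Linear(img_dim, attr_dim)

    def forward(self, img_emb, obj_emb, attr_words):
        """img_emb: (B, img_dim), obj_emb: (B, word_dim) of the recognized object,
        attr_words: (A, word_dim) word embeddings of all attributes."""
        B = img_emb.size(0)
        params = self.hyper(torch.cat([obj_emb, img_emb], dim=-1))
        W = params[:, : self.word_dim * self.attr_dim].view(B, self.word_dim, self.attr_dim)
        b = params[:, self.word_dim * self.attr_dim:]
        # attribute base learner, parameterized per sample: conditional attribute embeddings (B, A, attr_dim)
        cond_attr = torch.einsum("aw,bwd->bad", attr_words, W) + b[:, None, :]
        logits = F.cosine_similarity(self.img_proj(img_emb)[:, None, :], cond_attr, dim=-1)
        return logits  # (B, A) attribute classification logits
```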
Our main contributions are as follows:
• We propose to learn attributes conditioned on the rec-
ognized object and input image.
• We propose an attribute learning framework contain-
ing an attribute hyper learner and an attribute base
learner for learning conditional attribute embeddings.
• Experiments and ablation studies indicate the effec-
tiveness of our proposed conditional attribute network,
which further validates the importance of learning con-
ditional attributes in the CZSL task.
|
Wei_MMANet_Margin-Aware_Distillation_and_Modality-Aware_Regularization_for_Incomplete_Multimodal_Learning_CVPR_2023 | Abstract
Multimodal learning has shown great potentials in nu-
merous scenes and attracts increasing interest recently.
However, it often encounters the problem of missing modal-
ity data and thus suffers severe performance degradation
in practice. To this end, we propose a general framework
called MMANet to assist incomplete multimodal learn-
ing. It consists of three components: the deployment net-
work used for inference, the teacher network transferring
comprehensive multimodal information to the deployment
network, and the regularization network guiding the de-
ployment network to balance weak modality combinations.
Specifically, we propose a novel margin-aware distilla-
tion (MAD) to assist the information transfer by weigh-
ing the sample contribution with the classification uncer-
tainty. This encourages the deployment network to focus
on the samples near decision boundaries and acquire the
refined inter-class margin. Besides, we design a modality-
aware regularization (MAR) algorithm to mine the weak
modality combinations and guide the regularization net-
work to calculate prediction loss for them. This forces
the deployment network to improve its representation abil-
ity for the weak modality combinations adaptively. Fi-
nally, extensive experiments on multimodal classification
and segmentation tasks demonstrate that our MMANet out-
performs the state-of-the-art significantly. Code is available
at: https://github.com/shicaiwei123/MMANet
| 1. Introduction
Multimodal learning has achieved great success on many
vision tasks such as classification [21, 33, 46], object detec-
tion [26, 45, 53], and segmentation [5, 23, 41]. However,
most successful methods assume that the models are trained
and tested with the same modality data. In fact, limited
by device [32, 39], user privacy [13, 25], and working con-
dition [3, 29], it is often very costly or even infeasible to

Modality        Customized   Unified   Drop
RGB             10.01        11.75     -1.65
Depth           4.45         5.87      -1.42
IR              11.65        16.62     -4.97
RGB+Depth       3.41         4.61      -1.2
RGB+IR          6.32         6.68      -0.36
Depth+IR        3.54         4.95      -1.41
RGB+Depth+IR    1.23         2.21      -0.98
Table 1. The performance of customized models and the uni-
fied model for different modality combinations on the CASIA-
SURF dataset using the average classification error rate. The
‘customized‘ means to train a model for each combination inde-
pendently while the ‘unified’ means to train only one model for
all the combinations. The architectures of all the models are the
same and the feature map of missing modality (such as the IR for
RGB+Depth) is set as zero.
collect complete modality data during the inference stage.
There is thus substantial interest in assisting the incomplete
or even single modality inference via the complete modality
data during training.
A typical solution is to reconstruct the sample or feature
of the missing modalities from the available ones [10, 14,
15, 20, 29, 32]. Nevertheless, this needs to build a specific
model for each modality from all possible modality combi-
nations and thus has high complexity. Recent studies focus
on learning a unified model, instead of a bunch of networks,
for different modality combinations. Generally, many such
approaches [6, 11, 12, 17, 51, 52] attempt to leverage feature
fusion strategies to capture modality-invariant representa-
tion so that the model can adapt to all possible modality
combinations. For example, RFNet [11] designs the region-
aware fusion module to fuse the features of available image
modalities.
Although the existing unified models are indeed able to
increase the efficiency of training and deployment of the
multimodal models, their performance is likely to be sub-
optimal. As shown in Table 1, the customized models con-
sistently outperform the unified model for different modal-
ity combinations. This is because existing unified mod-
els usually focus on the modality-invariant features while
ignoring the modality-specific information. Note that the
complementary modality-specific information of multiple
modalities can help refine the inter-class discrimination and
improve inference performance [2, 18, 36]. This motivates
us to propose the first research question of this paper: Can a
unified model consider modality-invariant and modality-specific
information simultaneously while maintaining ro-
bustness for incomplete modality input?
To this end, we propose to guide the unified model to
learn the comprehensive multimodal information from the
teacher model trained with complete modality. This regular-
izes the target task loss to encourage the unified model to ac-
quire complementary multimodal information among differ-
ent modality combinations while preserving the generaliza-
tion to them. Specifically, we propose a
novel margin-aware distillation (MAD) that trains the uni-
fied model by guiding it to mimic the inter-sample relation
of the teacher model. MAD introduces the classification
uncertainty of samples to re-weigh their contribution to the
final loss. Since the samples near the class boundary are
more likely to be misclassified and have higher classifica-
tion uncertainty [8], this encourages the unified model to
preserve the inter-class margin refined by the complemen-
tary cues and learn the modality-specific information.
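A minimal sketch of a margin-aware distillation loss in this spirit: the teacher's prediction entropy serves as a per-sample weight on an inter-sample relation (similarity-matrix) distillation term, so that uncertain boundary samples contribute more. The exact uncertainty measure and relation matrix used by MAD may differ; this only mirrors the description above.

```python
import torch
import torch.nn.functional as F

def margin_aware_distillation(student_feat, teacher_feat, teacher_logits):
    """student_feat/teacher_feat: (B, D) embeddings, teacher_logits: (B, C)."""
    # classification uncertainty per sample: normalized entropy of the teacher prediction
    p = F.softmax(teacher_logits, dim=-1)
    uncertainty = -(p * p.clamp_min(1e-8).log()).sum(-1) / torch.log(torch.tensor(float(p.size(-1))))

    # inter-sample relation matrices of student and teacher
    rs = F.normalize(student_feat) @ F.normalize(student_feat).t()
    rt = F.normalize(teacher_feat) @ F.normalize(teacher_feat).t()

    # weigh each sample's row of the relation error by its uncertainty
    per_sample = (rs - rt).pow(2).mean(dim=-1)
    return (uncertainty * per_sample).mean()
```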
Another limitation of existing unified approaches is that
they struggle to obtain optimal performance for the unbal-
anced training problem. To be specific, conventional multi-
modal learning models tend to fit the discriminative modal-
ity combination and their performance will degrade signif-
icantly when facing weak modality combinations. To solve
this issue, existing unified approaches introduce the auxil-
iary discriminator to enhance the discrimination ability of
the unimodal combinations [6, 11, 51]. This utilizes a hy-
pothesis that a single modality is weaker than multiple ones.
However, as shown in Table 1, no matter for the customized
model or the unified model, the single Depth modality out-
performs the RGB, IR, and their combinations. This indi-
cates the combination with multiple weak modalities may
be harder to be optimized than a single strong modality.
Moreover, as shown in Table 3, RGB becomes the strong
modality while Depth and IR become the weak modalities.
This indicates that the modality importance is not fixed but
varies with scenarios. These findings motivate us to propose
the second research question: How to effectively optimize
the weak modality combination in varying scenarios?
To this end, we design a regularization network and
MAR algorithm to assist the training of the unified network.
Specifically, the regularization network generates additional
predictions for all inputs. Then MAR mines and calculates the prediction loss for the samples from the weak combinations.
This forces the unified model to improve its representation
ability for the weak combination. In detail, MAR mines the
weak combination via the memorization effect [1, 16, 49]
that DNNs tend to first memorize simple examples before
overfitting hard examples. As shown in Fig. 5(a), the uni-
fied model tends to fit the samples containing Depth modal-
ity firstly at the early stage. Therefore, MAR first mines the
strong modality via the memorization effect. Then it deter-
mines the combinations of the remaining modalities as the weak ones.
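The following sketch mirrors this mining logic under our own assumptions: the modality whose unimodal branch fits fastest in the early epochs (lowest running training loss, following the memorization effect) is flagged as strong, and the regularization-network loss is applied only to combinations that exclude it. The momentum tracking and the way combinations are keyed are illustrative, not MMANet's exact procedure.

```python
import torch
import torch.nn.functional as F

class WeakCombinationMiner:
    """Track per-modality training loss in early epochs and mark the
    fastest-fitting modality as 'strong'; combinations of the remaining
    modalities are treated as weak and receive extra supervision."""

    def __init__(self, modalities, momentum=0.9):
        self.running = {m: 0.0 for m in modalities}
        self.momentum = momentum

    def update(self, unimodal_losses):              # dict: modality -> scalar loss
        for m, l in unimodal_losses.items():
            self.running[m] = self.momentum * self.running[m] + (1 - self.momentum) * float(l)

    def weak_loss(self, reg_logits, labels):
        """reg_logits: dict mapping a modality combination (tuple of names) to
        regularization-network logits; only weak combinations contribute."""
        strong = min(self.running, key=self.running.get)   # fastest-fitting modality
        losses = [F.cross_entropy(logit, labels)
                  for combo, logit in reg_logits.items() if strong not in combo]
        return torch.stack(losses).mean() if losses else torch.zeros((), requires_grad=True)
```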
Finally, we develop a model and task agnostic frame-
work called MMANet to assist incomplete multimodal
learning by combining the proposed MAD and MAR strate-
gies. MMANet can guide the unified model to acquire com-
prehensive multimodal information and balance the perfor-
mance of the strong and weak modality combination si-
multaneously. Extensive comparison and ablation experi-
ments on multimodal classification and segmentation tasks
demonstrate the effectiveness of the MMANet.
|
Wei_Autoregressive_Visual_Tracking_CVPR_2023 | Abstract
We present ARTrack , an autoregressive framework for
visual object tracking. ARTrack tackles tracking as a co-
ordinate sequence interpretation task that estimates object
trajectories progressively, where the current estimate is in-
duced by previous states and in turn affects subsequences.
This time-autoregressive approach models the sequential
evolution of trajectories to keep tracing the object across
frames, making it superior to existing template matching
based trackers that only consider the per-frame localiza-
tion accuracy. ARTrack is simple and direct, eliminating
customized localization heads and post-processings. Despite
its simplicity, ARTrack achieves state-of-the-art performance
on prevailing benchmark datasets. Source code is available
at https://github.com/MIV-XJTU/ARTrack.
| 1. Introduction
Visual object tracking [5, 20, 34, 38, 48, 52] is a founda-
tional objective in the realm of computer vision, whereby
the tracker endeavors to estimate the location of an arbitrary
target in each video frame, based on its initial state. Despite
its ostensibly straightforward definition, the tracking task
poses a significant challenge in real-world settings due to
a variety of issues including but not limited to object de-
formation, scale variation, occlusion, and distraction from
similar objects. Fortunately, visual tracking capitalizes on
abundant temporal data as its input comprises a sequence
of video frames. Observationally, humans leverage temporal
information to gain a perception of the target’s deformation,
velocity, and acceleration trends, enabling them to maintain
consistent tracking results in the face of indiscriminative or
temporarily unavailable visual information.
The present mainstream approaches [10,13,45,61,64] for
visual object tracking typically view it as a per-frame tem-
plate matching problem, neglecting the potential temporal
dependencies among the video frames. These methods gen-
erally follow three primary stages: (i) deep neural network-
based feature extraction from the search and template images,
[Figure 1 graphics: search and template images, spatio-temporal prompts formed by the coordinate tokens (x_min, y_min, x_max, y_max) of frames t-2 and t-1, a command token, an encoder, and an autoregressive decoder emitting the coordinates for frame t.]
Figure 1. Our ARTrack framework. First, we embed visual fea-
tures of template and search by an encoder. Then the coordinate
tokens at current time step are interpreted by the decoder, condi-
tioning on the previous estimates (as spatio-temporal prompts), and
the command and visual tokens.
(ii) an integration module using either convolution [2,4] or at-
tention mechanisms [10,61] for feature matching/fusion, and
(iii) bounding-box localization through customized heads
for corner [13, 61], center/scale [64] estimation, and target
classification [4, 61]. In some cases, the first two stages
can be combined using a unified architecture [13, 64]. Post-
processing techniques are usually employed during the local-
ization step, such as Hanning window penalty [10,56,64,68]
and box optimization [4, 56]. Some methods incorporate a
template update mechanism to improve the target feature
representation. Representative techniques in this category in-
clude template image selection [61], feature integration [56],
and time evolution [62, 66]. However, customized heads and
post-processing techniques are complex and may require
individual training and inference, which compromises the
simple end-to-end framework. Moreover, tracking empha-
sizes preserving localization accuracy across the sequence,
while conventional per-frame training methods prioritize
immediate localization accuracy, resulting in an objective
mismatch between training and inference [35].
This study proposes a novel framework to visual object
tracking that differs from the mainstream methods, which
typically employ a per-frame template matching task. In-
stead, the authors propose to consider tracking as coordi-
nate sequence interpretation , with the objective of learning
a simple end-to-end model for direct trajectory estimation.
The proposed approach is based on the idea that given a se-
quence of frames and an initial object box, the tracker should
“interpret” a sequence of coordinates that trace the object,
in a manner similar to a language modeling task. The pro-
posed framework models the sequential evolution of object
trajectories across frames by decoding the entire trajectory
sequence step by step. The current estimate is influenced
by previous states and in turn influences the subsequences,
thus unifying the task objectives of training and inference.
Furthermore, the proposed approach simplifies the tracking
pipeline by avoiding customized heads and post-processings,
relying instead on direct coordinate regression.
The proposed autoregressive visual tracking framework,
called ARTrack, is depicted in Figure 1. The first step in this
framework is to construct discrete token sequences from
object trajectories using a quantization and serialization
scheme [8]. The framework then adopts an encoder-decoder
architecture to perceive visual information and generate tar-
get sequences gradually. In this autoregressive framework,
the previous outcomes serve as spatio-temporal prompts ,
propagating preceding motion dynamics into succeeding
frames for more coherent tracking results. Notably, the
model is trained using a structured loss function that maxi-
mizes the likelihood of the target sequence, consistent with
the task objective at test time. The authors demonstrate the ef-
ficacy of this approach through extensive experiments, show-
ing that the simple and neat ARTrack framework achieves
state-of-the-art results on prevailing tracking benchmarks,
outperforming other highly customized trackers.
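To make the sequence construction concrete, the sketch below quantizes box coordinates into discrete tokens and assembles the decoder input from the previous boxes (the spatio-temporal prompts) plus a command token, loosely following the quantization-and-serialization scheme the paper cites as [8]. The bin count, prompt length, and token layout are assumptions, not ARTrack's exact settings.

```python
import torch

N_BINS = 1000        # size of the shared coordinate vocabulary (assumed)
CMD_TOKEN = N_BINS   # extra "command" token id

def box_to_tokens(box, img_w, img_h):
    """Quantize a box (xmin, ymin, xmax, ymax) in pixels into 4 discrete tokens."""
    xmin, ymin, xmax, ymax = box
    norm = torch.tensor([xmin / img_w, ymin / img_h, xmax / img_w, ymax / img_h])
    return (norm.clamp(0, 1) * (N_BINS - 1)).round().long()

def build_decoder_input(prev_boxes, img_w, img_h):
    """Previous boxes become spatio-temporal prompt tokens; the command token
    asks the decoder to autoregressively emit the 4 tokens of the current box."""
    prompts = [box_to_tokens(b, img_w, img_h) for b in prev_boxes]
    return torch.cat(prompts + [torch.tensor([CMD_TOKEN])])

# usage: two previous frames serve as prompts
seq = build_decoder_input([(10, 20, 110, 220), (12, 22, 115, 224)], img_w=640, img_h=480)
# decoding then proceeds token by token: xmin_t, ymin_t, xmax_t, ymax_t
```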
|
Wen_DIP_Dual_Incongruity_Perceiving_Network_for_Sarcasm_Detection_CVPR_2023 | Abstract
Sarcasm indicates the literal meaning is contrary to the
real attitude. Considering the popularity and complementarity of image-text data, we investigate the task of multi-modal sarcasm detection. Different from other multi-modal
tasks, for the sarcastic data, there exists intrinsic incongruity between a pair of image and text as demonstrated in psychological theories. To tackle this issue, we pro-
pose a Dual Incongruity Perceiving (DIP) network con-
sisting of two branches to mine the sarcastic information
from factual and affective levels. For the factual aspect,
we introduce a channel-wise reweighting strategy to ob-
tain semantically discriminative embeddings, and leverage
gaussian distribution to model the uncertain correlation caused by the incongruity. The distribution is generated
from the latest data stored in the memory bank, which can
adaptively model the difference of semantic similarity be-
tween sarcastic and non-sarcastic data. For the affective
aspect, we utilize siamese layers with shared parameters to learn cross-modal sentiment information. Furthermore, we use the polarity value to construct a relation graph for the mini-batch, which forms the continuous contrastive loss to acquire affective embeddings. Extensive experiments
demonstrate that our proposed method performs favorably
against state-of-the-art approaches. Our code is released
on https://github.com/downdric/MSD.
| 1. Introduction
Sarcasm is an interesting and prevailing manner to ex-
press users’ opinions [ 18], which means the real attitude
is converse to the literal meaning [ 19]. With the develop-
ment of social platforms, sarcasm detection (SD) attracts increasing attention [11, 40, 65] due to its wide application,
e.g. product review analysis, political opinion mining [ 32],
etc. Automatically distinguishing sarcastic instances from
∗Equal contribution.
† Corresponding author.
[Figure 1 graphics: example image-text pairs and probability histograms of (a) factual incongruity and (b) affective incongruity for sarcastic and non-sarcastic data.]
Figure 1. Examples from the sarcasm dataset [ 6]. (a) shows the
samples (left) and statistics (right) for factual incongruity. Ac-
quiring inter-modal semantic similarity S_inter from CLIP [50],
the factual incongruity is depicted by 1 − S_inter. (b) displays the
cases for affective incongruity. Obtaining the models trained on FI
(image) [ 70] and IMDB (text) [ 41] datasets, the incongruity is rep-
resented by the difference in sentiment polarity. For both groups
of samples, the top two samples are sarcastic data, and the bottom two samples are non-sarcastic ones.
the mass of non-sarcastic content is important for any on-
line service.
The challenge of multi-modal sarcasm detection (MSD)
mainly comes from two aspects. First, the task aims to detect implicit intention from data, which increases the diffi-
culty of learning. Specifically, compared with visual recog-nition, the expressed attitude of the sarcastic data com-
monly hides in a normal stimulus and is hard to be iden-
tified. Fortunately, the linguistic theory demonstrates that
incongruity is an important and e ffective factor for sarcasm
detection [ 29], which inspires researchers automatically ex-
tract the positive and negative seeds [ 51]. Another challenge
lies in that, while both image and text express similar infor-
mation is expected in multi-modal tasks [ 3,50], this rule is
not applicable to SD, which requires discovering dissimilar informa-
tion. There exists an intrinsic conflict between off-the-shelf
techniques for multi-modal learning and the new task in this work.
In order to address the issue, we focus on the inter-modal
incongruity for MSD. Sarcasm is a long-standing topic in various areas like psychology [43], sociology [54], and neu-
robiology [ 31]. Researchers observe that sarcasm occurs
when the literal meaning unexpectedly contrasts with the observed facts [22, 43]. The process is defined as counter-
factual inference [ 43]. Besides, the studies from empiri-
cal theory [ 55] find that attitude is another important factor,
which is especially effective for obscure cases. In light of
these theoretical works, we utilize semantic association and
sentiment polarity to verify the incongruity in the sarcasm dataset [6]. As shown in Figure 1, the incongruity of sar-
castic data is obviously larger than the non-sarcastic one in the factual level, especially in terms of the mean value. Mean-
while, the phenomenon also exists in the affective level. In-
spired by the study and verification above, we design our method to detect the incongruity for multi-modal sarcastic
data in both the factual and affective levels.
We propose a Dual Incongruity Perceiving (DIP) net-
work, which consists of Semantic Intensified Distribution (SID) Modeling and Siamese Sentiment Contrastive
(SSC) Learning modules. In SID, based on the semantic as-
sociation [9, 44], the samples are differentiated by an adap-
tive strategy. Specifically, we maintain gaussian distribu-
tions for sarcastic and non-sarcastic samples respectively,
and utilize the probability generated by them to model the incongruity. Since the distributions depend on the extracted embeddings, we introduce a channel-wise reweighting strat-
egy to learn representations related to sarcasm. In SSC,
the affective incongruity is perceived by the polarity dif-
ference between the image-text pair. To efficiently intro-
duce sentiment information into the network, we employ
two siamese layers to transmit knowledge of the affective dic-
tionary, i.e. SenticNet. Furthermore, with the help of the
polarity intensity, the continuous contrastive learning is proposed to enhance the affective representations. Overall, the
factual and affective information are intensified in SID and
SSC, and leveraged to explicitly calculate the incongruity
for MSD.
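A rough sketch, under our own assumptions, of the adaptive distribution modeling in SID: cosine similarities of recent sarcastic and non-sarcastic image-text pairs are kept in two memory banks, each bank is fit with a Gaussian, and the two likelihoods are turned into a probability that a new pair is incongruent. The bank size and the likelihood-ratio form are illustrative choices, not the paper's exact formulation.

```python
from collections import deque
import torch
import torch.nn.functional as F

class GaussianIncongruity:
    def __init__(self, bank_size=4096):
        self.banks = {0: deque(maxlen=bank_size),   # non-sarcastic similarities
                      1: deque(maxlen=bank_size)}   # sarcastic similarities

    def update(self, img_emb, txt_emb, labels):
        sim = F.cosine_similarity(img_emb, txt_emb, dim=-1)
        for s, y in zip(sim.tolist(), labels.tolist()):
            self.banks[int(y)].append(s)

    def prob_sarcastic(self, img_emb, txt_emb):
        """Likelihood-ratio style probability that a pair is incongruent."""
        sim = F.cosine_similarity(img_emb, txt_emb, dim=-1)
        dists = {}
        for y, bank in self.banks.items():
            if len(bank) > 1:
                vals = torch.tensor(list(bank))
                dists[y] = torch.distributions.Normal(vals.mean(), vals.std().clamp_min(1e-3))
        if len(dists) < 2:
            return torch.full_like(sim, 0.5)   # not enough data in the memory banks yet
        p1 = dists[1].log_prob(sim).exp()
        p0 = dists[0].log_prob(sim).exp()
        return p1 / (p0 + p1 + 1e-8)
```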
Our contributions are three-fold: (1) To our knowledge,
DIP is the first work explicitly investigating and modeling incongruity in multi-modal sarcasm detection. (2) It is
a dual perceiving network to learn sarcastic information from factual and affective levels, which utilizes channel-
wise reweighting and continuous contrastive strategies to
acquire discriminative representations. (3) Extensive com-
parisons and ablations demonstrate the effectiveness and su-
periority of the proposed method. |
Wang_LP-DIF_Learning_Local_Pattern-Specific_Deep_Implicit_Function_for_3D_Objects_CVPR_2023 | Abstract
Deep Implicit Function (DIF) has gained much popu-
larity as an efficient 3D shape representation. To capture geometry details, current mainstream methods divide 3D shapes into local regions and then learn each one with a local latent code via a decoder. Such local methods can capture more local details due to less diversity among local
regions than global shapes. Although the diversity of local regions has been decreased compared to global approaches, the diversity in different local regions still poses a challenge
in learning an implicit function when treating all regions equally using only a single decoder. What is worse, these local regions often exhibit imbalanced distributions, where
certain regions have significantly fewer observations. This
leads to fine geometry details not being preserved well. To solve this problem, we propose a novel Local Pattern-specific Implicit Function, named LP-DIF, to represent a shape with clusters of local regions and multiple decoders,
where each decoder only focuses on one cluster of local regions which share a certain pattern. Specifically, we first extract local codes for all regions, and then cluster them into multiple groups in the latent space, where similar regions sharing a common pattern fall into one group. After
that, we train multiple decoders for mining local patterns of
different groups, which simplifies the learning of fine geometric details by reducing the diversity of local regions seen
by each decoder. To further alleviate the data-imbalance problem, we introduce a region re-weighting module to
each pattern-specific decoder using a kernel density estimator, which dynamically re-weights the regions during learn-
*The corresponding author is Yu-Shen Liu. This work was supported
by National Key R&D Program of China (2022YFC3800600), the National Natural Science Foundation of China (62272263, 62072268), and in part by Tsinghua-Kuaishou Institute of Future Media Data.
ing. Our LP-DIF can restore more geometry details, and
thus improve the quality of 3D reconstruction. Experiments
demonstrate that our method can achieve the state-of-the-
art performance over previous methods. Code is available
at https://github.com/gtyxyz/lpdif.
| 1. Introduction
Representing 3D shapes is a fundamental problem for
many applications in 3D computer vision. Recently, DeepImplicit Function (DIF) [ 4,22,23,25,28,41] has gained pop-
ularity for efficiently learning the representation of 3D ob-
jects and scenes. In contrast to directly learning explicit 3D
representations [ 15,30,31] (voxels, point clouds or meshes),
DIF aims to train a neural network to learn the binary occu-
pancy function [ 25] or signed distance function (SDF) [ 28],
as given a query location and an input latent code. Such
kind of representation is continuous with arbitrary precision
and can handle various topology, which has achieved the
state-of-the-art results in several shape reconstruction tasks.
Existing DIF methods can be roughly classified into two
categories: global and local approaches. Most of the early methods [4, 9, 20, 21, 25, 27–29, 35, 37, 40] fall into global
approaches. These methods take advantage of one latent code and a single decoder to represent the whole shape. Global approaches often suffer from long training time and low reconstruction accuracy due to the limited capacity of capturing local geometry details. More recently, local approaches [1, 3, 5, 6, 10–12, 17, 19, 34, 36, 38] divide 3D shapes
(often divide by 3D grids) into local regions and then learn
each one with a local latent code via a decoder, where the
decoder shares the geometric similarities among different local regions. Although such local approaches can capture some local details, a large diversity of different local regions still increases the difficulty of learning an implicit function
[Figure 1 panels: (a) Global DIF, (b) Local DIF, (c) Ours, (d) Reference]
Figure 1. Visual comparison of 3D shape surface reconstruction. Compared with global DIF (e.g. [ 28]) and local DIF (e.g. [ 1]), our method
can reconstruct the shape with fine-grained geometric details. Compared with previous methods that treat all local regions equally using a single decoder, we regard a shape as clusters of local regions and mine local patterns with different decoders. This alleviates the difficulty
of learning caused by diverse local regions.
when treating all regions equally using only a single de-
coder. In addition, these local regions often exhibit imbalanced distributions, where certain regions have significantly
fewer observations, especially in scenes. As a result, fine geometry details of shapes could not be captured well.
To address the above-mentioned problems, we propose a
novel Local Pattern-specific Implicit Function, named LP-
DIF, for learning 3D shape representation using clusters of local regions with multiple decoders, where each decoder only represents one cluster of local regions which share a certain pattern (geometric features such as facing direction, number of faces, relative positions to the region center).
Specifically, we first extract the local latent codes for all local regions divided by 3D grids, and then cluster them
into multiple groups in the latent space, where similar regions sharing a common pattern fall into one group. After
that, we train a separate pattern-specific decoder for each
group of regions, which reduces data-imbalance among different patterns of regions and simplifies the learning of fine geometric details of 3D structures by limiting the diver-
ferent patterns of regions and simplifies the learning of finegeometric details of 3D structures by limiting the diver-
sity of regions seen to each decoder. To further alleviatethe region-imbalance problem, we introduce a region re-weighting module to each pattern-specific decoder by ker-nel density estimator, which dynamically re-weights the re-gions during learning. Our main contributions can be sum-marized as follows.
• We propose a novel LP-DIF to learn local pattern-
specific deep implicit function of 3D shapes for re-
constructing highly detailed geometry. Compared withprevious methods that treat all local regions equally us-ing a single decoder, we regard a shape as clusters oflocal regions and mine local patterns with different de-coders. This alleviates the difficulty of learning caused
by diverse local regions.
• We introduce a dynamic region re-weighting module,
which could provide more focus on less common re-gions to tackle the data-imbalance problem in each pat-
tern decoder. As a result, the regions with less appear-
ances can be captured more accurately.
• Our method could be applied in multiple objects, sin-
gle complex objects and large scale of scenes. Weimprove the state-of-the-art accuracy in surface recon-
struction under various benchmarks.
Figure 2illustrates the main differences between DIF,
local DIF and our method. For DIF approaches, one globalcode and a decoder are used for the whole shape. For Lo-cal DIF methods, multiple local codes and a shared decoder
are used. For our method, multiple clusters of regions are
learned with different decoders.
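To illustrate the grouping step, the sketch below clusters local latent codes with k-means and routes each region's SDF query to the decoder of its cluster. The number of clusters, the decoder architecture, and the use of scikit-learn are our own assumptions rather than LP-DIF's actual implementation.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class LocalPatternDecoders(nn.Module):
    def __init__(self, latent_dim=128, n_patterns=8):
        super().__init__()
        self.kmeans = KMeans(n_clusters=n_patterns, n_init=10)
        # one small SDF decoder per local pattern
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim + 3, 256), nn.ReLU(),
                          nn.Linear(256, 256), nn.ReLU(),
                          nn.Linear(256, 1))
            for _ in range(n_patterns))

    def fit_patterns(self, region_codes):            # (R, latent_dim) latent codes of all regions
        """Cluster regions in the latent space; must be called before forward()."""
        self.assign = torch.as_tensor(self.kmeans.fit_predict(region_codes.detach().cpu().numpy()))

    def forward(self, region_codes, region_ids, query_xyz):
        """region_codes: (R, latent_dim), region_ids: (Q,) region of each query,
        query_xyz: (Q, 3) local coordinates; returns (Q,) predicted SDF values (CPU sketch)."""
        x = torch.cat([region_codes[region_ids], query_xyz], dim=-1)
        out = torch.zeros(query_xyz.size(0))
        for k, dec in enumerate(self.decoders):
            mask = self.assign[region_ids] == k      # queries whose region follows pattern k
            if mask.any():
                out[mask] = dec(x[mask]).squeeze(-1)
        return out
```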
|
Wang_3Mformer_Multi-Order_Multi-Mode_Transformer_for_Skeletal_Action_Recognition_CVPR_2023 | Abstract
Many skeletal action recognition models use GCNs to
represent the human body by 3D body joints connected body
parts. GCNs aggregate one- or few-hop graph neighbour-
hoods, and ignore the dependency between body joints that
are not linked. We propose to form a hypergraph to model hyper-
edges between graph nodes ( e.g., third- and fourth-order
hyper-edges capture three and four nodes) which help cap-
ture higher-order motion patterns of groups of body joints.
We split action sequences into temporal blocks, and Higher-
order Transformer (HoT) produces embeddings of each
temporal block based on (i) the body joints, (ii) pairwise
links of body joints and (iii) higher-order hyper-edges of
skeleton body joints. We combine such HoT embeddings of
hyper-edges of orders 1, ..., r by a novel Multi-order Multi-
mode Transformer (3Mformer) with two modules whose or-
der can be exchanged to achieve coupled-mode attention on
coupled-mode tokens based on ‘channel-temporal block’,
‘order- channel-body joint’, ‘channel-hyper-edge (any or-
der)’ and ‘channel-only’ pairs. The first module, called
Multi-order Pooling (MP), additionally learns weighted ag-
gregation along the hyper-edge mode, whereas the second
module, Temporal block Pooling (TP) ,aggregates along
the temporal block1mode. Our end-to-end trainable net-
work yields state-of-the-art results compared to GCN-,
transformer- and hypergraph-based counterparts.
| 1. Introduction
Action Recognition has applications in video surveil-
lance, human-computer interaction, sports analysis, and
virtual reality [24, 25, 40, 52–59]. Different from video-
based methods, which mainly focus on modeling the spatio-
temporal representations from RGB frames and/or opti-
cal flow [25, 52–55, 58], skeleton sequences, representing
a spatio-temporal evolution of 3D body joints, have been
proven robust against sensor noise and effective in action
recognition while being computationally and storage effi-
cient [24, 40, 52, 53, 56, 57, 59]. The skeleton data is usually
obtained by either localization of 2D /3D coordinates of hu-
man body joints with the depth sensors or pose estimation
algorithms applied to videos [ 2]. Skeleton sequences en-
joy (i) simple structural connectivity of skeletal graph and
(ii) temporal continuity of 3D body joints evolving in time.
While temporal evolution of each body joint is highly infor-
mative, embeddings of separate body joints are insensitive
to relations between body parts. Moreover, while the links
between adjacent 3D body joints (following the structural
connectivity) are very informative as they model relations,
these links represent highly correlated nodes in the sense of
their temporal evolution. Thus, modeling larger groups of
3D body joints as hyper-edges can capture more complex
spatio-temporal motion dynamics.
The existing graph-based models mainly differ by how
they handle temporal information. Graph Neural Net-
work (GNN) may encode spatial neighborhood of the node
followed by aggregation by LSTM [ 46,65]. Alterna-
tively, Graph Convolutional Network (GCN) may perform
spatio-temporal convolution in the neighborhood of each
node [64]. Spatial GCNs perform convolution within one- or
two-hop distance of each node, e.g., the spatio-temporal GCN
model called ST-GCN [ 64] models spatio-temporal vicin-
ity of each 3D body joint. As ST-GCN applies convolution
along structural connections (links between body joints),
structurally distant joints, which may cover key patterns of
actions, are largely ignored. ST-GCN captures ever larger
neighborhoods as layers are added but suffers from over-
smoothing, which can be mitigated by linear GCNs [76–78].
Human actions are associated with interaction groups
of skeletal joints, e.g., wrist alone, head-wrist, head-wrist-
ankles, etc. The impact of these groups of joints on each
action differs, and the degree of influence of each joint
should be learned. Accordingly, designing a better model
for skeleton data is vital given that the topology of the skeleton graph is suboptimal. While a GCN can be applied to a fully-
connected graph (i.e., 3D body joints as densely connected
graph nodes), the Higher-order Transformer (HoT) [21] has been proven more efficient.
Thus, we propose to use hypergraphs with hyper-edges
of order 1 to r to effectively represent skeleton data for ac-
tion recognition. Compared to GCNs, our encoder contains
an MLP followed by three HoT branches that encode first-,
second- and higher-order hyper-edges, i.e., the set of body
joints, edges between pairs of nodes, hyper-edges between
triplets of nodes, etc. Each branch has its own learnable
parameters, and processes temporal blocks one-by-one (each
temporal block has its temporal mode locally factored out,
which makes each block representation compact). We notice
that (i) the number of hyper-edges of J joints grows rapidly
with order r, i.e., $\binom{J}{i}$ for $i = 1, \ldots, r$, so
embeddings of the highest order dominate lower orders in
terms of volume if such embeddings are merely concatenated,
and (ii) long-range temporal dependencies of feature maps
are insufficiently explored, as sequences are split into τ
temporal blocks for computational tractability.
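How quickly the highest order comes to dominate can be checked in a couple of lines; the sketch below assumes J = 25 joints (as in NTU skeletons) and r = 4 purely for illustration.

```python
# Quick check of how hyper-edge counts grow with order for J body joints.
from math import comb

J, r = 25, 4
counts = {i: comb(J, i) for i in range(1, r + 1)}
print(counts)            # {1: 25, 2: 300, 3: 2300, 4: 12650}
total = sum(counts.values())
print({i: f"{100 * c / total:.1f}%" for i, c in counts.items()})
# The order-4 hyper-edges alone account for roughly 83% of all hyper-edges, so
# naively concatenating embeddings of all orders lets the highest order dominate.
```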
Merely concatenating outputs of HoT branches of orders
1 to r, and across τ blocks, is sub-optimal. Thus, our Multi-
order Multi-mode Transformer (3Mformer) with two mod-
ules whose order can be exchanged, realizes a variation of
coupled-mode tokens based on ‘channel-temporal block’,
‘order- channel-body joint’, ‘channel-hyper-edge (any or-
der)’ and ‘channel-only’ pairs. As HoT operates block-
by-block, ‘channel-temporal block’ tokens and weighted
hyper-edge aggregation in Multi-order Pooling (MP) help
combine information flow block-wise. Various coupled-
mode tokens help improve results further due to the different
focus of each attention mechanism. As the block-temporal
mode needs to be aggregated (number of blocks varies
across sequences), Temporal block Pooling (TP) can use
rank pooling [ 13], second-order [ 14,26,33,41,60,68,80]
or higher-order pooling [ 8,24,25,69,70].
In summary, our main contributions are listed as follows:
i. We model the skeleton data as a hypergraph of orders 1
to r (set, graph and/or hypergraph), where human body
joints serve as nodes. Higher-order Transformer em-
beddings of such formed hyper-edges represent various
groups of 3D body joints and capture various higher-
order dynamics important for action recognition.
ii. As HoT embeddings represent individual hyper-edge
order and block, we introduce a novel Multi-order
Multi-mode Transformer (3Mformer) with two mod-
ules, Multi-order Pooling and Temporal block Pool-
ing. Their goal is to form coupled-mode tokens such as
‘channel-temporal block’, ‘order-channel-body joint’,
‘channel-hyper-edge (any order)’ and ‘channel-only’,
and perform weighted hyper-edge aggregation and tem-
poral block aggregation.
Our 3Mformer outperforms other GCN- and hypergraph-
based models on NTU-60, NTU-120, Kinetics-Skeleton and
Northwestern-UCLA by a large margin.
|
Xu_JacobiNeRF_NeRF_Shaping_With_Mutual_Information_Gradients_CVPR_2023 | Abstract
We propose a method that trains a neural radiance field
(NeRF) to encode not only the appearance of the scene but
also semantic correlations between scene points, regions, or
entities – aiming to capture their mutual co-variation pat-
terns. In contrast to the traditional first-order photomet-
ric reconstruction objective, our method explicitly regular-
izes the learning dynamics to align the Jacobians of highly-
correlated entities, which proves to maximize the mutual
information between them under random scene perturba-
tions. By paying attention to this second-order information,
we can shape a NeRF to express semantically meaningful
synergies when the network weights are changed by a delta
along the gradient of a single entity, region, or even a point.
To demonstrate the merit of this mutual information model-
ing, we leverage the coordinated behavior of scene entities
that emerges from our shaping to perform label propagation
for semantic and instance segmentation. Our experiments
show that a JacobiNeRF is more efficient in propagating
annotations among 2D pixels and 3D points compared to
NeRFs without mutual information shaping, especially in
extremely sparse label regimes – thus reducing annotation
burden. The same machinery can further be used for entity
selection or scene modifications. Our code is available at
https://github.com/xxm19/jacobinerf.
| 1. Introduction
When a real-world scene is perturbed, the response is
generally local and semantically meaningful, e.g., a slight
knock on a chair will result in a small displacement of just
that chair. Such coherence in the perturbation of a scene
evidences high mutual information between certain scene
points or entities that can be leveraged to discover instances
or semantic groups [37, 38]. A NeRF scene representation,
however, solely supervised with 2D photometric loss may
not converge to a configuration that reflects the actual scene
structure [41]; even if the density is correctly estimated, the
network in general will not be aware of the underlying se-
mantic structure. As shown in Fig. 1, a perturbation on a
specific entity of the scene through the network weights ac-
tivates almost all other entities.
This lack of semantic awareness may not be a problem
for view synthesis and browsing, but it clearly is of con-
cern when such neural scene representations are employed
for interactive tasks that require understanding the underly-
ing scene structure, e.g., entity selection, annotation prop-
agation, scene editing, and so on. All these tasks can be
greatly aided by a representation that better reflects the cor-
relations present in the underlying reality. We take a step
towards endowing neural representations with such aware-
ness of the mutual scene inter-dependencies by asking how
it is possible to train a NeRF, so that it not only reproduces
the appearance and geometry of the scene, but also gener-
ates coordinated responses between correlated entities when
perturbed in the network parameter space.
Current approaches that encode semantics largely treat
semantic labels (e.g., instance segmentation) [17, 42] as a
separate channel, in addition to density or RGB radiance.
However, in the semantics case, the value of the channel
(e.g., instance ID) is typically an artifact of the implemen-
tation. What really matters is the decomposition of the
2D pixels (or of the scene 3D points) the NeRF encodes
into groups – this is because semantics is more about rela-
tionships than values. Thus, we introduce an information-
theoretic technique whose goal is to “shape” an implicit
NeRF representation of a scene to better reflect the underly-
ing regularities (“semantics”) of the world; so as to enforce
consistent variation among correlated scene pixels, points,
regions, or entities, enabling efficient information propaga-
tion within and across views.
The key to the proposed “shaping” technique is an equiv-
alence between mutual information and the normalized in-
ner product (cosine similarity) of the Jacobians at two pixels
or 3D points. More explicitly, if we apply random delta per-
turbations to the NeRF weights, the induced random values
of two pixels share mutual information up to the absolute
cosine similarity of their gradients or Jacobians with respect
to the weights computed at the unperturbed NeRF. This the-
oretical finding ensures a large correlation between scene
entities with high mutual information – and thus coherent
perturbation-induced behaviors – if their tangent spaces are
aligned. Based on this insight, we apply contrastive learn-
ing to align the NeRF gradients with general-purpose self-
supervised features (e.g., DINO), which is why we term our
NeRF “JacobiNeRF” . While several prior works [16, 30]
distill 1st-order semantic information from 2D views to get
a consensus 1st-order feature in 3D, we instead regularize
the NeRF using 2nd-order, mutual information based con-
trastive shaping on the NeRF gradients to achieve semantic consensus – now encoded in the NeRF tangent space.
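As a rough illustration of this equivalence, the mutual-information proxy reduces to an absolute cosine similarity between two per-pixel gradients. The rendering function, parameter subset, and pixel descriptors in the sketch below are placeholders rather than the paper's actual interface.

```python
# Hedged sketch: compare the Jacobians of two rendered pixel values with respect
# to a chosen (flattened) subset of NeRF weights; a high absolute cosine
# similarity indicates high mutual information under random delta perturbations.
import torch

def jacobian_cosine(render_pixel, params, x1, x2):
    """render_pixel(x, params) -> scalar pixel value; params: flat weight tensor
    with requires_grad=True; x1, x2: the two pixel/ray descriptors to compare."""
    g1 = torch.autograd.grad(render_pixel(x1, params), params, retain_graph=True)[0]
    g2 = torch.autograd.grad(render_pixel(x2, params), params)[0]
    cos = torch.nn.functional.cosine_similarity(g1.flatten(), g2.flatten(), dim=0)
    return cos.abs()   # mutual-information proxy used for contrastive shaping
```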
The proposed NeRF shaping sets up resonances between
correlated pixels or points and makes the propagation of all
kinds of semantic information possible from sparse annota-
tions – because pixels that co-vary with the annotated one
are probably of the same semantics indicated by the mutual
information equivalence. For example, we can use such res-
onances to propagate semantic or instance information as
shown in Sec. 3.4, where we also show that our contrastive
shaping can be applied to gradients of 2D pixels, or of 3D
points. The same machinery also enables many other func-
tions, including the ability to select an entity by clicking at
one of its points or the propagation of appearance edits, as
illustrated in Fig. 9. Additionally, our approach suggests
the possibility that a NeRF shaped with rich 2nd-order re-
lational information in the way described may be capable
of propagating many additional kinds of semantics without
further re-shaping – because the NeRF coefficients have al-
ready captured the essential “DNA” of points in the scene,
of which different semantic aspects are just different expres-
sions. In summary, our key contributions are:
• We propose the novel problem of shaping NeRFs to re-
flect mutual information correlations between scene enti-
ties under random scene perturbations.
• We show that the mutual information between any two
scene entities is equivalent to the cosine similarity of their
gradients with respect to the perturbed weights.
• We develop JacobiNeRF , a shaping technique that ef-
fectively encodes 2nd-order relational information into a
NeRF tangent space via contrastive learning.
• We demonstrate the effectiveness of JacobiNeRF with
state-of-the-art performance on sparse label propagation
for both semantic and instance segmentation tasks.
|
Woerl_Initialization_Noise_in_Image_Gradients_and_Saliency_Maps_CVPR_2023 | Abstract
In this paper, we examine gradients of logits of image
classification CNNs by input pixel values. We observe that
these fluctuate considerably with training randomness, such
as the random initialization of the networks. We extend
our study to gradients of intermediate layers, obtained via
GradCAM, as well as popular network saliency estimators
such as DeepLIFT, SHAP , LIME, Integrated Gradients, and
SmoothGrad. While empirical noise levels vary, qualita-
tively different attributions to image features are still pos-
sible with all of these, which comes with implications for
interpreting such attributions, in particular when seeking
data-driven explanations of the phenomenon generating the
data. Finally, we demonstrate that the observed artefacts
can be removed by marginalization over the initialization
distribution by simple stochastic integration.
| 1. Introduction
Deep neural networks have revolutionized pattern recogni-
tion, detecting complex structures at accuracies unheard of
just a few years back. Unsurprisingly, the newly gained
ability to model complex phenomena comes at costs in
terms of interpretability — it is usually not obvious how
nonlinear, multi-layer networks reach their conclusions.
Correspondingly, a lot of research has focused on devel-
oping interpretation methods for explaining how deep net-
works make decisions [11], and this often takes the form
of attributing decisions to subsets of the data. In the case
of image classification, this usually leads to saliency maps
highlighting the image area containing decisive informa-
tion [25, 26, 29, 30, 32, 35].
Strong classifiers trained from example data combined
with suitable attribution methods have opened up a new ap-
proach to empirical research: understanding phenomena by
interpreting learned models [9]. We often know of poste-
rior outcomes (for example, tumor growth rates or treata-
bility with certain medication) but do not understand how
these are related to prior data (say, findings from histological tissue samples).
Figure 1. Logit-by-image gradients (ResNet18 on "ImageNette" [12]). First column: reference image; second column: mean over 50 models; columns 3–4: single models with random initialization.
If we are able to train a strong classi-
fier that can predict posterior outcomes from prior data, an
attribution method could potentially explain which aspects
of the data predict this outcome (for example, which visual
features in the histology indicate a negative or positive ther-
apeutic prognosis [38]), thereby providing new insight into
the phenomenon at hand.
For these kinds of research approaches, the classifier
might only be an auxiliary tool: In terms of attribution, we
are not primarily interested in explaining how the classifier
reaches its decision (which, of course, would be highly rel-
evant when studying potential data leakage or the fairness
of decisions [4, 27]), but our actual goal is to accurately
characterize which features in the data are related to the
phenomenon to be explained. Ultimately, it is of course im-
possible to be sure whether an ad-hoc classifier (even with
great statistical performance and hypothetical perfect attri-
bution) actually does exploit all relevant information (and
only this), but we would of course in such cases make an
effort to avoid wrong or incomplete information or misattri-
butions that we are already aware of.
The main insight and contribution of this paper is to point
out one such source of fluctuations in attributions, the im-
pact of which, to the best of our knowledge, has not yet been
documented in literature so far: In nonlinear CNNs, image
gradients of network outputs can contain significant train-
ing noise, i.e., noise from (in particular) random weight ini-
tialization and stochastic batch selection (Fig. 1). The level
of such noise, i.e., information unrelated to the data itself,
often exceeds the level of the attribution signal , and vari-
ability includes coarse-scale variations that could suggest
qualitatively varying attributions. Surprisingly, this still
holds (and can even be worse) for more sophisticated at-
tribution techniques, including popular approaches such as
SHAP [20], LIME [25], DeepLIFT [29], or Integrated Gra-
dients [35]. Even class activation maps (including top- and
intermediate-level GradCAMs [26], the former to a lesser
degree) can be affected by noise to an extent that could plau-
sibly alter coarse-scale attribution.
Exploring the phenomenon further, we observe that gra-
dient noise grows with depth, is rather weak in simple con-
vex architectures (linear softmax regression), and is damp-
ened stochastically for wide networks (as suggested by the
known convexity in the infinite-width limit [7, 19]). This
indicates that nonlinearity and nonconvexity might play an
important role in causing the problem by either amplifying
numerical noise or convergence to different local minima.
We further show that training noise artifacts can be re-
moved by marginalization [37], which in practice can be im-
plemented with simple stochastic integration: By averaging
the results of tens of independently initialized and trained
networks, signal-to-noise-levels can be brought to accept-
able levels. We also demonstrate that the same stochastic ensembling technique improves the visual quality of feature visualization by optimization [23]. While marginal-
ization incurs non-trivial additional computational efforts,
it can remove a significant source of uncertainty when ex-
plaining how data features are related to outcomes in previ-
ously unknown ways.
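In practice, the stochastic integration amounts to averaging saliency maps over independently trained models. The sketch below uses plain logit-by-input gradients as the saliency; the model list and class index are placeholders, and the same averaging would wrap any of the attribution methods discussed above.

```python
# Hedged sketch of the marginalization step: average input gradients over an
# ensemble of independently initialized and trained classifiers.
import torch

def ensemble_saliency(models, image, class_idx):
    """models: list of trained classifiers; image: (1, C, H, W) tensor."""
    image = image.clone().requires_grad_(True)
    grads = []
    for m in models:
        m.eval()
        if image.grad is not None:
            image.grad = None
        logit = m(image)[0, class_idx]
        logit.backward()
        grads.append(image.grad.detach().clone())
    # Averaging suppresses initialization noise that is uncorrelated across runs.
    return torch.stack(grads).mean(dim=0)
```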
|
Wang_Flow_Supervision_for_Deformable_NeRF_CVPR_2023 | Abstract
In this paper we present a new method for deformable
NeRF that can directly use optical flow as supervision. We
overcome the major challenge with respect to the computational
inefficiency of enforcing the flow constraints to
the backward deformation field, used by deformable NeRFs.
Specifically, we show that inverting the backward deforma-
tion function is actually not needed for computing scene
flows between frames. This insight dramatically simplifies
the problem, as one is no longer constrained to deformation
functions that can be analytically inverted. Instead, thanks
to the weak assumptions required by our derivation based
on the inverse function theorem, our approach can be ex-
tended to a broad class of commonly used backward defor-
mation field. We present results on monocular novel view
synthesis with rapid object motion, and demonstrate signifi-
cant improvements over baselines without flow supervision.
| 1. Introduction
Reconstructing dynamic scenes from monocular videos
is a significantly more challenging task compared to its
static-scene counterparts, due to lack of epipolar constraints
for finding correspondences and ambiguities between mo-
tion and structure. Recent advances in differentiable ren-
dering have lead to various solutions using an analysis-by-
synthesis strategy – solving the non-rigid deformation and
structure by minimizing the difference between synthesized
images and input video frames. Among those, deformable
neural radiance fields [14, 21, 25, 31] has been a notable
technique to represent dynamic scenes and shows plausi-
ble space-time view synthesis results. However, the current
implementations only warrant success on teleporting-like
videos whose camera motions are significantly more rapid
than object motions. Quality of their results significantly
decrease on videos with more rapid object motions [6].
In this work, we conjecture the deficiency of these de-
formable NeRF-based methods is mainly due to lack of
Figure 1. (Panels: inputs; with flow supervision; w/o flow supervision.) We propose a method to use optical flow supervision
for deformable NeRF. It noticeably improves novel view synthesis
for monocular videos with rapid object motions. In the figure, we
visualize rendered novel view images and depth maps for the first
and last frame of the input video.
temporal regularization. As they represent deformation as
backward warping from the sampled frames to some canon-
ical space, the motions or scene flows between temporally
adjacent frames is not directly modeled nor supervised. An-
other deficiency is that these methods minimize photomet-
ric error alone, which is insufficient for gradient descent to
overcome poor local minima when the canonical and other
frames has little spatial overlap. This deficiency is severe
for non-object-centric videos with large translations.
The community has explored optical flow as an addi-
tional cue to help supervise the temporal transitions of other
motion representations, such as scene flow fields [4, 5, 13,
43] and blend skinning fields [40]. However, enforcing
flow constraints with respect to a generic backward warp-
ing field as in Nerfies [21] is non-trivial. Intuitively, to com-
pute scene flows, it requires inverting the backward warp by
having a forward warp which maps points from canonical
space to other frames. Then scene flows can be computed
either by taking time derivative of the forward warp or pre-
dicting the next position of a point through evaluating the
forward warp. But this can be problematic since analytical
inverse of a complicated non-bijective function ( e.g. neu-
ral networks) is impossible, and an approximate solution by
having an auxiliary network to represent the forward warp
will introduce computational overhead and is theoretically
ill-posed. Counter to this intuition, we will show that invert-
ing the backward warping function is actually not needed
for computing scene flows between frames.
The main contribution of this paper is: we derive an
analytical solution to compute velocities of objects directly
from the backward warping field of the deformable NeRF.
The velocities are then used to compute scene flows through
temporal integration, which allows us to supervise the de-
formable NeRF through optical flow. This leads to signif-
icant improvement on videos with more rapid motions, as
shown in Fig. 1.
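One way to realize this idea, stated here only as a hedged sketch, follows from assuming that the backward warp w(x, t) sends a moving point to a canonical coordinate that stays constant along its trajectory, so that d/dt w(x(t), t) = 0; the warp network itself is a placeholder.

```python
# Hedged sketch: velocity from a backward deformation field via the chain rule,
# v = -(dw/dx)^{-1} (dw/dt), under the constant-canonical-coordinate assumption.
import torch
from torch.autograd.functional import jacobian

def velocity_from_backward_warp(warp, x, t):
    """warp: callable (x, t) -> canonical 3D point; x: (3,) tensor; t: scalar tensor."""
    J_x = jacobian(lambda p: warp(p, t), x)      # (3, 3) spatial Jacobian
    J_t = jacobian(lambda s: warp(x, s), t)      # (3,) time derivative
    v = -torch.linalg.solve(J_x, J_t.reshape(3, 1)).squeeze(1)
    return v   # scene flow follows by integrating v over time
```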
The advantage of our approach is twofold: (i) Our
method applies to all kinds of backward warping function,
thanks to the weak assumptions required by the inverse
function theorem which our derivation is based on. Thus
our method is more general compared to other works using
invertible normalizing flows [11] or blend skinning [3, 40].
(ii) Our method is also computationally more tractable com-
pared to neural scene flow fields [4, 12, 13], which would
require integrating flows over a long period of time to reach
some canonical frame.
|
Wu_Sparsely_Annotated_Semantic_Segmentation_With_Adaptive_Gaussian_Mixtures_CVPR_2023 | Abstract
Sparsely annotated semantic segmentation (SASS) aims
to learn a segmentation model by images with sparse labels
(i.e., points or scribbles). Existing methods mainly focus
on introducing low-level affinity or generating pseudo la-
bels to strengthen supervision, while largely ignoring the
inherent relation between labeled and unlabeled pixels. In
this paper, we observe that pixels that are close to each
other in the feature space are more likely to share the same
class. Inspired by this, we propose a novel SASS frame-
work, which is equipped with an Adaptive Gaussian Mix-
ture Model (AGMM). Our AGMM can effectively endow re-
liable supervision for unlabeled pixels based on the distri-
butions of labeled and unlabeled pixels. Specifically, we
first build Gaussian mixtures using labeled pixels and their
relatively similar unlabeled pixels, where the labeled pix-
els act as centroids, for modeling the feature distribution
of each class. Then, we leverage the reliable information
from labeled pixels and adaptively generated GMM predic-
tions to supervise the training of unlabeled pixels, achieving
online, dynamic, and robust self-supervision. In addition,
by capturing category-wise Gaussian mixtures, AGMM en-
courages the model to learn discriminative class decision
boundaries in an end-to-end contrastive learning manner.
Experimental results conducted on the PASCAL VOC 2012
and Cityscapes datasets demonstrate that our AGMM can
establish new state-of-the-art SASS performance. Code is
available at https://github.com/Luffy03/AGMM-SASS.
| 1. Introduction
Semantic segmentation [2, 8, 42] aims to assign the
corresponding pixel-wise semantic labels for a given im-
age, which is a fundamental computer vision task.
Figure 1. (a) Illustration of the SASS task. (b) Different from existing SASS frameworks, our AGMM leverages the reliable information of labeled pixels and generates GMM predictions for dynamic online supervision. f denotes the model; P and G represent segmentation and GMM predictions, respectively. Solid and dashed lines represent model propagation and supervision, respectively.
Previous deep learning based semantic segmentation meth-
ods [3, 9, 43] trained on large amounts of data with accu-
rate pixel-wise annotations have demonstrated outstanding
achievements. However, collecting such dense annotations
always requires cumbersome manual efforts, which heav-
ily limits the development of semantic segmentation meth-
ods.
Figure 2. (a) Observation of the inherent relation between the labeled and unlabeled pixels. (b) Category-wise performance on the PASCAL VOC 2012 dataset. The black line, blue bar, and orange bar represent the IoU of all unlabeled pixels, unlabeled pixels that are similar to labeled pixels, and unlabeled pixels that are dissimilar to labeled pixels, respectively. σ is the variance of a class (Eq. 5).
To reduce the cost of manual annotations, many recent works [4,17,27,29,47,48,59] have been devoted to
sparsely annotated semantic segmentation (SASS), which
learns segmentation models via sparse labels, i.e., points or
scribbles, as shown in Fig. 1(a). The sparse annotations are
cheap to obtain and also contain the least necessary category
and location information. Thus, SASS has high research
potential in terms of the trade-off between information and
costs.
The main challenge of SASS is the lack of information
for supervision. Existing SASS methods can be roughly
divided into three categories, i.e., low-level regulariza-
tion [26,30,34,47,48], pseudo supervision [7,27,35,59,60],
and consistency learning [19, 38], as shown in Fig. 1(b).
Specifically, the low-level regularization methods [26, 30,
34, 47, 48] focus on introducing the low-level affinity of the
raw images for supervision. However, the low-level infor-
mation is not reliable enough to be associated with the high-
level semantics. Pseudo supervision [27, 35, 59, 60] aims
to generate pseudo labels via training with sparse labels,
and then uses these pseudo labels to learn a more robust
segmentation model. However, it commonly requires time-
consuming multi-stage training and the generated pseudo
labels are always coarse and ambiguous, which significantly
hinders the learning of unlabeled pixels. Consistency learn-
ing [19, 38, 55] further proposes to learn consistent repre-
sentations in the high-dimension feature space, but it cannot
directly supervise the final predictions at the category level.
To solve these problems, we aim to address the SASS
task with more reliable supervision. To this end, we argue
that the reliable information of labeled pixels should be fur-
ther exploited. Previous methods only employ the labeled
pixels for partial cross-entropy supervision, while largely
ignoring the inherent relation between labeled and unlabeled pixels. As illustrated in Fig. 2, we observe that the
similarity between labeled and unlabeled pixels is highly as-
sociated with the predictions of unlabeled pixels. As shown
in Fig. 2(a), if an unlabeled pixel is similar to the labeled
pixel in the feature space, its corresponding prediction is
more likely to be consistent with the category of the labeled
pixel. In Fig. 2(b), we calculate the distance d (see Eq. 6)
between labeled and unlabeled pixels to measure the simi-
larity, i.e., d < σ as similar and d > σ as not similar. It
can be seen that the similarity between labeled and unla-
beled pixels is highly associated with the accuracy of the
predictions. To this end, we propose to explicitly leverage
the similarity between the labeled and unlabeled pixels to
generate supervision information. The key challenge is how
to effectively model the similarity between the labeled and
unlabeled pixels.
In this paper, we propose a novel Adaptive Gaussian
Mixture Model (AGMM) framework, which is realized by
incorporating a GMM branch into the traditional segmenta-
tion branch. Specifically, we assign the labeled pixels as the
centroids of Gaussian mixtures, enabling us to model the
data distribution of each class in the high-dimension fea-
ture space. Each Gaussian mixture represents the distribu-
tion of a class, which consists of the centered labeled pix-
els and the relatively similar unlabeled pixels. In this way,
we build a GMM to measure the feature similarity between
labeled and unlabeled pixels, producing soft GMM predic-
tions to supervise the unlabeled regions from a probabilis-
tic perspective. The process of GMM formulation works
in an adaptive manner, where the parameters of GMM are
dynamically adapted to the input features, achieving end-to-
end online self-supervision. The GMM branch is progres-
sively optimized during training, enabling us to learn more
discriminative Gaussian mixtures adaptively.
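A simplified version of this soft supervision can be written with isotropic Gaussians; the actual AGMM formulation (Eqs. 5–6 of the paper) is richer, so the per-class standard deviations and the single-centroid-per-class assumption below are illustrative only.

```python
# Illustrative sketch: labeled-pixel features act as class centroids, and
# feature distances of unlabeled pixels are turned into soft class probabilities.
import torch

def gmm_soft_predictions(feats, centroids, sigma):
    """feats: (N, D) unlabeled-pixel features; centroids: (K, D) one per class;
    sigma: (K,) per-class standard deviations."""
    d2 = torch.cdist(feats, centroids) ** 2          # (N, K) squared distances
    log_resp = -d2 / (2.0 * sigma[None, :] ** 2)     # Gaussian log-likelihoods
    return torch.softmax(log_resp, dim=1)            # (N, K) soft GMM predictions
```

These soft predictions can then supervise the segmentation branch on unlabeled pixels alongside the partial cross-entropy on labeled pixels.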
There are three appealing advantages in our proposed
AGMM. First, by capturing category-wise Gaussian mix-
tures for feature representations, we can learn discrimina-
tive decision boundaries between different classes via very
limited supervision. Second, AGMM pushes each unla-
beled pixel into or away from specific category-wise Gaus-
sian mixtures, which further enables end-to-end con-
trastive representation learning. Finally, we leverage the
reliable information from labeled pixels to generate GMM
predictions for the unlabeled pixels, achieving more reliable
supervision.
We conduct experiments under the point- and scribble-
supervised settings on two widely used datasets, i.e., PAS-
CAL VOC 2012 [14] and Cityscapes [12]. It is worth
noting that compared with existing SASS methods, our
AGMM does not require extra information for supervision
[19,26,30,34,50], multi-stage training [7,35,37,59,60], or
time-consuming post-processing [27, 31, 50, 60]. Extensive
experiments demonstrate that our AGMM outperforms the
existing state-of-the-art SASS methods.
|
Wu_EDA_Explicit_Text-Decoupling_and_Dense_Alignment_for_3D_Visual_Grounding_CVPR_2023 | Abstract
3D visual grounding aims to find the object within point
clouds mentioned by free-form natural language descrip-
tions with rich semantic cues. However, existing methods
either extract the sentence-level features coupling all words
or focus more on object names, which would lose the word-
level information or neglect other attributes. To alleviate
these issues, we present EDA that Explicitly Decouples
the textual attributes in a sentence and conducts Dense
Alignment between such fine-grained language and point
cloud objects. Specifically, we first propose a text decou-
pling module to produce textual features for every seman-
tic component. Then, we design two losses to supervise
the dense matching between two modalities: position align-
ment loss and semantic alignment loss. On top of that,
we further introduce a new visual grounding task, locat-
ing objects without object names, which can thoroughly
evaluate the model’s dense alignment capacity. Through
experiments, we achieve state-of-the-art performance on
two widely-adopted 3D visual grounding datasets, Scan-
Refer and SR3D/NR3D, and obtain absolute leadership on
our newly-proposed task. The source code is available at
https://github.com/yanmin-wu/EDA .
| 1. Introduction
Multi-modal cues can highly benefit the 3D environ-
ment perception of an agent, including 2D images, 3D point
clouds, and language. Recently, 3D visual grounding (3D
VG) [8,50], also known as 3D object referencing [3], has at-
tached much attention as an important 3D cross-modal task.
Its objective is to find the target object in point cloud scenes
by analyzing the descriptive query language, which requires
understanding both 3D visual and linguistic context.
Language utterances typically involve words describ-
ing appearance attributes, object categories, spatial rela-
tionships and other characteristics, as shown by different
colours in Fig. 1(a), requiring that the model integrate multi-
ple cues to locate the mentioned object. Compared with 2D
Visual Grounding [18,19,67,71], the sparseness and incom-
pleteness of point clouds, and the diversity of language de-
scriptions produced by 3D multi-view, make 3D VG more
challenging. Existing works made significant progress from
the following perspectives: improving point cloud features
extraction by sparse convolution [70] or 2D images assis-
tance [68]; generating more discriminative object candi-
dates through instance segmentation [33] or language mod-
ulation [46]; identifying complex spatial relationships be-
tween entities via graph convolution [23] or attention [6].
However, we observe two issues that remain unex-
plored. 1) Imbalance : The object name can exclude most
candidates, and even in some cases, there is only one
name-matched object, as the “door” and“refrigerator” in
Fig. 1(b1, b2). This shortcut may lead to an inductive bias
in the model that pays more attention to object names while
weakening other properties such as appearance and relation-
ships, resulting in imbalanced learning. 2) Ambiguity : Ut-
terances frequently refer to multiple objects and attributes
(such as “black object, tall shelf, fan” in Fig. 1(b4)), while
the model’s objective is to identify only the main object,
leading to an ambiguous understanding of language de-
scriptions. These insufficiencies of existing works stem
from their characteristic of feature coupling and fusing im-
plicitly. They input a sentence with different attribute words
but output only one globally coupled sentence-level fea-
ture that subsequently matches the visual features of can-
didate objects. The coupled feature is ambiguous because
some words may not describe the main object (green text
in Fig. 1) but other auxiliary objects (red text in Fig. 1).
Alternatively, using the cross-modal attention of the Trans-
former [21, 63] automatically and implicitly to fuse visual
and text features. However, this may encourage the model
to take shortcuts, such as focusing on object categories and
ignoring other attributes, as previously discussed.
Instead, we propose a more intuitive decoupled and ex-
plicit strategy. First, we parse the input text to decouple
different semantic components, including the main object
word, pronoun, attributes, relations, and auxiliary object
words. Then, performing dense alignment between point
cloud objects and multiple related decoupled components
achieves fine-grained feature matching, which avoids the
inductive bias resulting from imbalanced learning of differ-
ent textual components. As the final grounding result, weexplicitly select the object with the highest similarity to the
decoupled text components (instead of the entire sentence),
avoiding ambiguity caused by irrelevant components. Ad-
ditionally, to explore the limits of VG and examine the com-
prehensiveness and fine-graininess of visual-language per-
ception of the model, we suggest a challenging new task:
Grounding without object name (VG-w/o-ON) , where
the name is replaced by “object” (see Fig. 1(b)), forcing
the model to locate objects based on other attributes and
relationships. This setting makes sense because utterances
that do not mention object names are common expressions
in daily life, and in addition to testing whether the model
takes shortcuts. Benefiting from our text decoupling oper-
ation and the supervision of dense aligned losses, all text
components are aligned with visual features, making it pos-
sible to locate objects independent of object names.
To sum up, the main contributions of this paper are as
follows: 1)We propose a text decoupling module to parse
linguistic descriptions into multiple semantic components,
followed by suggesting two well-designed dense aligned
losses for supervising fine-grained visual-language feature
fusion and preventing imbalance and ambiguity learning.
2)The challenging new 3D VG task of grounding without
object names is proposed to comprehensively examine the
model’s robust performance. 3)We achieve state-of-the-art
performance on two datasets (ScanRefer and SR3D/NR3D)
on the regular 3D VG task and absolute leadership on the
new task evaluated by the same model without retraining.
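A hedged sketch of the final selection step described above is given below: candidate objects are scored by dense similarity to the decoupled text components rather than to one sentence-level feature. The component mask and uniform weighting are assumptions, not the paper's exact scoring rule.

```python
# Illustrative sketch of grounding by dense alignment with decoupled components.
import torch

def ground_object(obj_feats, comp_embeds, comp_mask):
    """obj_feats: (M, D) candidate-object features; comp_embeds: (C, D) decoupled
    text-component embeddings; comp_mask: (C,) 1 for components describing the
    main object, 0 for auxiliary-object components."""
    obj = torch.nn.functional.normalize(obj_feats, dim=-1)
    txt = torch.nn.functional.normalize(comp_embeds, dim=-1)
    sim = obj @ txt.t()                                   # (M, C) dense alignment scores
    score = (sim * comp_mask).sum(-1) / comp_mask.sum().clamp(min=1)
    return score.argmax()                                 # index of the grounded object
```

Because no single sentence-level feature is used, the same scoring applies unchanged when the object name is replaced by "object" in the VG-w/o-ON setting.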
|
Xu_Learning_Open-Vocabulary_Semantic_Segmentation_Models_From_Natural_Language_Supervision_CVPR_2023 | Abstract
This paper considers the problem of open-vocabulary se-
mantic segmentation (OVS), which aims to segment objects
of arbitrary classes beyond a pre-defined, closed-set cat-
egories. The main contributions are as follows: First ,
we propose a transformer-based model for OVS, termed
as OVSegmentor, which only exploits web-crawled image-
text pairs for pre-training without using any mask annota-
tions. OVSegmentor assembles the image pixels into a set
of learnable group tokens via a slot-attention based bind-
ing module, then aligns the group tokens to correspond-
ing caption embeddings. Second , we propose two proxy
tasks for training, namely masked entity completion and
cross-image mask consistency. The former aims to infer
all masked entities in the caption given group tokens, that
enables the model to learn fine-grained alignment between
visual groups and text entities. The latter enforces consis-
tent mask predictions between images that contain shared
entities, encouraging the model to learn visual invariance.
Third, we construct the CC4M dataset for pre-training by fil-
tering CC12M with frequently appearing entities, which sig-
nificantly improves training efficiency. Fourth , we per-
form zero-shot transfer on four benchmark datasets, PAS-
CAL VOC, PASCAL Context, COCO Object, and ADE20K.
OVSegmentor achieves superior results over state-of-the-
art approaches on PASCAL VOC using only 3% data (4M
vs 134M) for pre-training.
| 1. Introduction
Semantic segmentation considers the problem of assign-
ing class labels to each pixel in the image. It plays criti-
cal roles in a wide range of real-world scenarios, including
autonomous driving, computer-aided diagnosis and satellite
image analysis, to name a few. Generally speaking, two
lines of research dominate semantic segmentation, one way
is to cluster the pixels into different groups and assign a
Text
encoder
Figure 1. An illustration of open-vocabulary semantic segmen-
tation. Models trained on closed-set classes (cat and dog) fail to
segment novel class (fire hydrant). We train a visual encoder and a
text encoder on web-collected image-text pairs without using any
mask labels, and our model can segment arbitrary object classes.
semantic label to each group; the other idea treats segmen-
tation as pixel-wise classification, casting each pixel into
one category. Despite tremendous progress, the scalability
of existing approaches that rely on supervised training has
been fundamentally limited: (1) costly annotation proce-
dure. Extensive manual pixel-wise annotations are required
for training segmentation models; (2) closed-set segmenta-
tion. The model is restricted to segmenting objects from a
closed-set of categories. Whenever a new dataset comes,
the model requires re-training.
In this paper, our goal is to train an open-vocabulary
semantic segmentation (OVS) model, by exploiting freely
available image-caption pairs on Internet, as illustrated
in Fig. 1. The recent work, for example, CLIP [41]
and ALIGN [23] have demonstrated that a combination
of large-scale image-caption pairs and simple noise con-
trastive estimation can learn powerful image-text embed-
dings from scratch, and show strong “zero-shot” general-
ization abilities for open-vocabulary classification. Addi-
tionally, GroupViT [53] extends the idea towards semantic
segmentation by training a segmentation model with text
supervision only. They perform hierarchical grouping of
visual tokens, which are then aligned to the correspond-
ing text embeddings via a contrastive loss. However, the
following issues remain challenging and unsolved: First ,
the captions only provide coarse, image-level descriptions,
which are insufficient for training semantic segmentation
models where fine-grained, pixel-wise supervision is usu-
ally needed. Second , the diversity of web-collected data is
large, that requires the model to learn visual invariance on
objects of interest, with only weak supervision provided.
For instance, the visual appearance of two images with sim-
ilar captions can be drastically different.
To tackle the above challenges, (i) we propose a
transformer-based model for open-vocabulary semantic
segmentation, dubbed as OVSegmentor, that can segment
objects of arbitrary categories via zero-shot transfer, with
only image-caption pairs for pre-training. Specifically, we
introduce learnable group tokens to cluster image patches
via a slot-attention [33] based binding module, and align
the group tokens with corresponding caption embeddings.
Note that our model neither requires ground-truth masks
for training nor additional re-training on target segmenta-
tion datasets, substantially alleviating the annotation efforts
and improving transfer efficiency; (ii) As for training on the
image-caption dataset, we propose two proxy tasks, namely
masked entity completion and cross-image mask consis-
tency, the former trains the model to infer allthe masked
entities in the sentence given the group tokens, and the lat-
ter enforces consistent mask prediction for images with the
common entity. Both tasks have shown to be beneficial
in learning entity-specific, fine-grained and visually invari-
ant group semantics; (iii) We construct an image-caption
dataset, termed as CC4M, by designing an automatic ap-
proach to filter CC12M [7] with frequently appeared visual
entities, significantly improving the training efficiency.
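The binding step can be pictured as a single slot-attention-style assignment of patch tokens to learnable group tokens; the one-iteration update and the shapes below are simplifying assumptions rather than the released implementation.

```python
# Hedged sketch of slot-attention-style binding of patches to group tokens.
import torch

def bind_groups(patches, groups, tau=1.0):
    """patches: (N, D) patch tokens; groups: (K, D) learnable group tokens."""
    attn = (groups @ patches.t()) / tau            # (K, N) group-to-patch logits
    assign = torch.softmax(attn, dim=0)            # normalize over groups, per patch
    assign = assign / assign.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return assign @ patches                        # (K, D) updated group embeddings
```

The resulting group embeddings are what get aligned with caption embeddings and trained with the two proxy tasks described above.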
We pre-train the proposed OVSegmentor on our filtered
image-caption dataset (CC4M), without using any man-
ual segmentation masks whatsoever. The model is eval-
uated on four segmentation benchmarks, PASCAL VOC
2012 [18], PASCAL Context [38], COCO Object [31], and
ADE20K [59] in a zero-shot manner, i.e., the model is
directly evaluated on target datasets without any finetun-
ing. Extensive experiments demonstrate that our model sur-
passes existing models that are trained with full supervision
and outperforms state-of-the-art self-supervised approaches
on PASCAL VOC by using only 3% data (4M vs 134M) for
pre-training, significantly improving the training efficiency.
|
Wang_Image_as_a_Foreign_Language_BEiT_Pretraining_for_Vision_and_CVPR_2023 | Abstract
A big convergence of language, vision, and multimodal
pretraining is emerging. In this work, we introduce a
general-purpose multimodal foundation model BEIT-3,
which achieves excellent transfer performance on both vi-
sion and vision-language tasks. Specifically, we advance
the big convergence from three aspects: backbone architec-
ture, pretraining task, and model scaling up. We use Mul-
tiway Transformers for general-purpose modeling, where
the modular architecture enables both deep fusion and
modality-specific encoding. Based on the shared back-
bone, we perform masked “language” modeling on images
(Imglish ), texts (English), and image-text pairs (“parallel
sentences”) in a unified manner. Experimental results show
thatBEIT-3obtains remarkable performance on object de-
tection (COCO), semantic segmentation (ADE20K), image
classification (ImageNet), visual reasoning (NLVR2), visual
question answering (VQAv2), image captioning (COCO),
and cross-modal retrieval (Flickr30K, COCO).
| 1. Introduction: The Big Convergence
Recent years have featured a trend toward the big con-
vergence of language [14, 15, 46], vision [3, 43], and mul-
timodal [45, 62, 69] pretraining. By performing large-scale
pretraining on massive data, we can easily transfer the mod-
els to various downstream tasks. It is appealing that we can
pretrain a general-purpose foundation model that handles
multiple modalities. In this work, we advance the conver-
gence trend for vision-language pretraining from the fol-
lowing three aspects.
First, the success of Transformers [59] is translated from
language to vision [16] and multimodal [26, 62] problems.
The unification of network architectures enables us to seam-
lessly handle multiple modalities. For vision-language
modeling, there are various ways to apply Transformers
(Multiway Transformer )
Images TextsImage -Text
Figure 1. Overview of BEiT-3 pretraining. We perform masked
data modeling on monomodal (i.e., images, and texts) and multi-
modal (i.e., image-text pairs) data with a shared Multiway Trans-
former as the backbone network.
due to the different natures of downstream tasks. For ex-
ample, the dual-encoder architecture is used for efficient
retrieval [45], encoder-decoder networks for generation
tasks [63], and the fusion-encoder architecture for image-
text encoding [26]. However, most foundation models have
to manually convert the end-task formats according to the
specific architectures. Moreover, the parameters are usu-
ally not effectively shared across modalities. In this work,
we adopt Multiway Transformers [62] for general-purpose
modeling, i.e., one unified architecture shared for various
downstream tasks. The modular network also compre-
hensively considers modality-specific encoding and cross-
modality fusion.
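The modality-specific encoding can be sketched as an FFN-level routing step; the expert indexing below (vision, language, vision-language) mirrors Figure 1, but the module itself is a hedged illustration rather than the released BEiT-3 code.

```python
# Hedged sketch of a Multiway block's expert routing: tokens share self-attention
# (not shown) and are then routed to a modality-specific feed-forward expert.
import torch
import torch.nn as nn

class MultiwayFFN(nn.Module):
    def __init__(self, dim, hidden, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, tokens, modality_ids):
        """tokens: (N, D); modality_ids: (N,) long tensor of expert indices
        (0 = vision, 1 = language, 2 = vision-language)."""
        out = torch.empty_like(tokens)
        for idx, ffn in enumerate(self.experts):
            sel = modality_ids == idx
            if sel.any():
                out[sel] = ffn(tokens[sel])
        return out
```

In the paper, the vision-language expert is introduced only in the upper layers; that schedule is omitted here.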
Second, the pretraining task based on masked data mod-
eling has been successfully applied to various modalities,
such as texts [14] and images [3, 43]. Current vision-
language foundation models usually multitask other pre-
training objectives (such as image-text matching), render-
ing scaling-up unfriendly and inefficient. In contrast, we
only use one pretraining task, i.e., mask-then-predict, to
train a general-purpose multimodal foundation model. By
regarding the image as a foreign language (i.e., Imglish ), we
handle texts and images in the same manner without funda-
mental modeling differences. Consequentially, image-text
pairs are utilized as “parallel sentences” in order to learn the
alignments between modalities. We also show that t |
Wang_MDL-NAS_A_Joint_Multi-Domain_Learning_Framework_for_Vision_Transformer_CVPR_2023 | Abstract
In this work, we introduce MDL-NAS, a unified frame-
work that integrates multiple vision tasks into a manage-
able supernet and optimizes these tasks collectively un-
der diverse dataset domains. MDL-NAS is storage-efficient
since multiple models with a majority of shared parame-
ters can be deposited into a single one. Technically, MDL-
NAS constructs a coarse-to-fine search space, where the
coarse search space offers various optimal architectures
for different tasks while the fine search space provides fine-
grained parameter sharing to tackle the inherent obstacles
of multi-domain learning. In the fine search space, we sug-
gest two parameter sharing policies, i.e., sequential shar-
ing policy and mask sharing policy. Compared with pre-
vious works, such two sharing policies allow for the par-
tial sharing and non-sharing of parameters at each layer
of the network, hence attaining real fine-grained parameter
sharing. Finally, we present a joint-subnet search algorithm
that finds the optimal architecture and sharing parameters
for each task within total resource constraints, challeng-
ing the traditional practice that downstream vision tasks
are typically equipped with backbone networks designed for
image classification. Experimentally, we demonstrate that
MDL-NAS families fitted with non-hierarchical or hierar-
chical transformers deliver competitive performance for all
tasks compared with state-of-the-art methods while main-
taining efficient storage deployment and computation. We
also demonstrate that MDL-NAS allows incremental learn-
ing and evades catastrophic forgetting when generalizing to
a new task.
| 1. Introduction
Recently, transformers have become the standard pattern
for natural language processing (NLP) tasks due to their
efficacy in modelling long-range relationships via the self-
One-to-manyIutput1Multiple dataset
domainsMultiple
Figure 1. Illustration of differences between multi-domain learn-
ing (MDL) and other learning paradigms. MDL-NAS jointly op-
timizes multiple vision tasks under different dataset domains. L
denotes the layer numbers in the backbone.
attention mechanism [41]. Such success and good prop-
erties of transformers have spawned a slew of subsequent
works that apply them to a wide variety of computer vision
tasks, such as image classification [6,27,32,49], object de-
tection [5, 57], semantic segmentation [55], and video un-
derstanding [56], achieving impressive results. However,
these methods only apply transformer to a specific domain.
After observing the success of transformers, a naive ques-
tion arises: could a transformer simultaneously handle mul-
tiple vision tasks under a variety of dataset domains ?
While a few works have investigated the usage of trans-
formers to handle multiple input modalities (i.e., images
and text), they typically concentrate on a particular task,
such as visual question answering [20, 24], i.e., many-to-
one mapping. Also, some methods [2,26] explore to simul-
taneously performing depth estimation, surface normal esti-
mation, and semantic segmentation on a given input image.
However, these methods are restricted to a single-domain
setting, where all inputs are the same, i,e,, one-to-many
mapping. In addition, there are some works [19, 28] that
employ transformers to solve different tasks under multiple
domains (multi-domain learning), which is more realistic,
i.e., many-to-many mapping, as shown in Fig. 1. Never-
[Figure 2 data; settings are All-share / Spe-norm / Spe-attn / Spe-ffn / All-spe.
(a) Image classification, Top-1 Acc. (%): Swin Transformer 80.8 / 80.8 / 81.0 / 81.0 / 81.3; AutoFormer 81.9 / 82.0 / 82.1 / 82.2 / 82.5.
(b) Object detection, mAP (%): Swin Transformer 39.0 / 39.4 / 41.2 / 42.4 / 43.2; AutoFormer 44.1 / 44.3 / 44.8 / 45.2 / 44.9.
(c) Semantic segmentation, mIoU (%): Swin Transformer 45.8 / 45.5 / 44.7 / 44.5 / 43.9; AutoFormer 48.1 / 48.0 / 47.3 / 46.7 / 47.1.]
Autoformer (c) Semantic segmentation
Figure 2. Adjusting the baseline that shares all parameters in the backbone to use task-specific normalization (Spe-norm), task-specific self-attention (Spe-attn), a task-specific feed-forward network (Spe-ffn), or all task-specific parameters (All-spe), under the same training recipe.
theless, these methods utilize diverse encoders to manage
different dataset domains, which is inefficient in terms of
storage deployment. In this work, we investigate a unified
network that optimizes multiple vision tasks over multiple
dataset domains to enable all tasks to share as many param-
eters as feasible while maintaining promising performance.
As a preliminary step, we conduct experiments to observe
the performance impact of treating various components of
vision transformers as task-specific parameters.
As depicted in Fig. 2, considering the multihead self-
attention (MHSA) layer or feed-forward network (FFN) or
LayerNorm (LN) throughout the backbone as task-specific
parameters can all achieve a certain performance gain for
classification and detection tasks over a baseline that shares
all parameters. Besides, we observe that the performance
of semantic segmentation is elevated when all parameters
are shared, indicating that closely-related tasks have mu-
tual benefits whereas some tasks have conflicts against each
other under multi-domain learning setting. Consequently,
to use task-shared parameters for learning task-reciprocal
features while using task-specific parameters for mitigating
conflicts, sharing parameters with various proportions in-
side each layer is an immediate thought, which motivates us
to find a method to supply different share ratios for different
layers in the network. Moreover, when optimizing multiple
tasks collectively, we typically equip these tasks with the
backbone designed for image classification, which may be
sub-optimal due to the gap between the image classification
task and other vision tasks.
To tackle these issues, we introduce MDL-NAS, a uni-
fied framework based on vision transformers, which accom-
modates multiple vision tasks under heterogeneous dataset
domains into a modest supernet and jointly optimizes these
tasks. Specifically, we first construct a coarse search
space comprising the embedding dimension, number of heads, query/key/value dimension, and MLP ratio for each transformer block to discover different optimal architectures for diverse tasks. Moreover, such a space comprises candidate architectures with a wide spectrum of model sizes, which provides certain flexibility for final model deployment. Based on the coarse search space, we design a fine search space
that offers fine-grained parameter sharing for all tasks to
resolve the inherent challenges of multi-domain learning.
In the fine search space, we suggest two parameter sharing
policies, namely a sequential sharing policy and a mask sharing policy. The sequential sharing policy enables all tasks to share the parameters of each layer in order, which allows customizing the parameter sharing ratio. The mask sharing policy provides maximum flexibility for different tasks to share parameters in various proportions and channels inside each
layer. Following Autoformer [6], to address the efficiency
issue, we leverage the weight entanglement training strat-
egy to train MDL-NAS, allowing thousands of subnets to
be extremely well-trained.
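A minimal sketch of the sequential-sharing idea as we read it (PyTorch assumed; splitting a layer into a shared leading slice plus a small task-specific remainder is our own simplification, and the mask-sharing policy would instead select an arbitrary per-task subset of channels):

```python
# Minimal sketch (PyTorch assumed): sequential sharing inside one linear layer.
# The first `share_ratio` fraction of output channels uses weights shared by all
# tasks; the remaining channels are task-specific. Names are ours, not the paper's.
import torch
import torch.nn as nn

class SequentiallySharedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, num_tasks, share_ratio=0.75):
        super().__init__()
        self.n_shared = int(out_dim * share_ratio)
        self.shared = nn.Linear(in_dim, self.n_shared)
        self.specific = nn.ModuleList(
            nn.Linear(in_dim, out_dim - self.n_shared) for _ in range(num_tasks))

    def forward(self, x, task_id):
        # Shared channels come first ("in order"); task-specific channels follow.
        return torch.cat([self.shared(x), self.specific[task_id](x)], dim=-1)

layer = SequentiallySharedLinear(256, 256, num_tasks=3, share_ratio=0.5)
y = layer(torch.randn(4, 256), task_id=0)   # shape (4, 256)
```

In this picture, searching the share ratio per layer amounts to choosing `share_ratio` jointly with the architecture hyperparameters.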
During the search stage, we propose a joint-subnet
search algorithm that finds the optimal architecture and
sharing parameters for each task under total resource con-
straints. The searched subnets with various architectures
share as many parameters as possible in the backbone, guar-
anteeing excellent performance for each task while keeping
storage-efficient for model deployment.
Experiments show that the searched models with weights
inherited from the supernet outperform several baselines
and are comparable with the state-of-the-art methods that
are trained individually for specific tasks. We also demon-
strate that MDL-NAS allows incremental learning and
evades catastrophic forgetting when generalizing to a new
task. Thus, MDL-NAS is more parameter-efficient and scales up more gracefully as the number of tasks increases, as illustrated in Sec. 4.4.
The key contributions of this work can be summarized
as: (1) We propose MDL-NAS that accepts multiple dataset
domains as input to optimize multiple vision tasks concur-
rently. (2) We construct a coarse-to-fine search space, with
the coarse search space finding optimal architectures for all
tasks and the fine search space coupled with sequential or
mask sharing policy providing fine-grained shared parame-
ters to learn task-reciprocal features and extra task-specific
parameters for learning task-related features. (3) We intro-
duce a subnet search algorithm to jointly search architec-
tures and share ratios, enabling all tasks to share as many
parameters as feasible while ensuring high performance for
each task. (4) We demonstrate that MDL-NAS allows in-
cremental learning with fewer parameters.
|
Wang_FEND_A_Future_Enhanced_Distribution-Aware_Contrastive_Learning_Framework_for_Long-Tail_CVPR_2023 | Abstract
Predicting the future trajectories of traffic agents is a crucial yet challenging task in autonomous driving. However, trajectory prediction suffers from data imbalance in the prevalent datasets, and the tailed data is often more complicated
and safety-critical. In this paper, we focus on dealing with
the long-tail phenomenon in trajectory prediction. Previ-
ous methods dealing with long-tail data did not take into
account the variety of motion patterns in the tailed data.
In this paper, we put forward a future enhanced contrastive
learning framework to recognize tail trajectory patterns and
form a feature space with separate pattern clusters. Fur-
thermore, a distribution-aware hyper predictor is proposed to better utilize the shaped feature space. Our method is
a model-agnostic framework and can be plugged into many
well-known baselines. Experimental results show that our
framework outperforms the state-of-the-art long-tail pre-
diction method on tailed samples by 9.5% on ADE and 8.5%
on FDE, while maintaining or slightly improving the aver-
aged performance. Our method also surpasses many long-
tail techniques on the trajectory prediction task.
| 1. Introduction
Trajectory prediction is of great importance in au-
tonomous driving scenarios [27]. It aims to predict a series
of future positions for the agents on the road given the ob-
served past tracks. There have been many recent methods
in trajectory prediction, both unimodal [1, 48] and multi-
modal [10, 37, 38, 49].
Despite the high accuracy those prediction methods have
achieved, most of them treat the samples in the datasets
equally in both training and evaluation phases. But there
is a long-tailed phenomenon in prevalent datasets [28]. For
Figure 1. The long-tailed final displacement errors of the state-
of-the-art prediction network: Trajectron++ EWTA [28] on ETH-
UCY . The long-tail part of the dataset contains various compli-
cated motion patterns, and predicting them is challenging.
example, in real traffic scenes, most of the trajectories fol-
low certain simple kinematic rules, while deviating and
collision-avoiding circumstances are scarce. Therefore, the
frequent cases are often simple and easy to predict, while
the tail cases are often complicated with many motion pat-
terns and suffer from large prediction errors, which makes
them more safety-critical, as shown in Fig. 1 for the univ
dataset. Despite its significance, the long-tail prediction problem has rarely been discussed in the literature.
It has been pointed out that the feature encoders largely
suffer from long-tail data. In the training process, the head
samples are encountered more often and dominate the la-
tent space, while the tailed samples will be modeled insuf-
ficiently, as discussed in [24, 28, 39]. Feature embeddings
of the tailed data can even be mixed up with the ones of the
head data as discussed in [28], therefore the performances
of the tailed samples could be harmed.
In this paper, we pick up the general idea of using con-
trastive learning to enhance the model ability on long-tailed
data. A new framework is developed called FEND: Future
ENhanced Distribution-aware contrastive trajectory predic-
tion, which is a pattern-based contrastive feature learning
framework enhanced by future trajectory information. An
offline trajectory clustering process and prototypical con-
trastive learning are introduced for recognizing and sepa-
rating different trajectory patterns to boost the modeling of tail samples. To deal with the aforementioned problem, the
features of trajectories within the same pattern cluster are
pulled together, while the features from different pattern
clusters will be pushed apart. Moreover, a more flexible
network structure of the decoder is introduced to exploit the
shaped feature embedding space with different pattern clus-
ters. Our contribution can be summarized as follows:
• We propose a future enhanced contrastive feature
learning framework for long-tailed trajectory predic-
tion, which can better distinguish tail patterns from
head patterns, and the different patterns are repre-
sented by different cluster prototypes to enhance the
modeling of the tailed data.
• We propose a distribution-aware hyper predictor, aim-
ing at providing separated decoder parameters for tra-
jectory inputs with different patterns.
• Experimental results show that our proposed frame-
work can outperform state-of-the-art methods.
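As a rough illustration of the pattern-based prototypical contrastive objective described above (a generic PyTorch sketch under our own simplifications, not FEND's exact loss; the prototypes and cluster assignments are assumed to come from the offline trajectory clustering step):

```python
# Generic prototypical contrastive loss sketch (PyTorch assumed). `features` are
# per-trajectory embeddings, `cluster_ids` come from an offline clustering step,
# and `prototypes` could be, e.g., the running mean embedding of each cluster.
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(features, prototypes, cluster_ids, temperature=0.1):
    features = F.normalize(features, dim=-1)        # (N, D)
    prototypes = F.normalize(prototypes, dim=-1)    # (K, D)
    logits = features @ prototypes.t() / temperature   # similarity to every prototype
    # Cross-entropy pulls each feature toward its own cluster prototype and pushes
    # it away from the other cluster prototypes.
    return F.cross_entropy(logits, cluster_ids)

feats = torch.randn(32, 128)
protos = torch.randn(5, 128)
ids = torch.randint(0, 5, (32,))
loss = prototypical_contrastive_loss(feats, protos, ids)
```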
|
Wang_MetaMix_Towards_Corruption-Robust_Continual_Learning_With_Temporally_Self-Adaptive_Data_Transformation_CVPR_2023 | Abstract
Continual Learning (CL) has achieved rapid progress in
recent years. However, it is still largely unknown how to
determine whether a CL model is trustworthy and how to
foster its trustworthiness. This work focuses on evaluating
and improving the robustness to corruptions of existing CL
models. Our empirical evaluation results show that existing
state-of-the-art (SOTA) CL models are particularly vulnera-
ble to various data corruptions during testing. To make them
trustworthy and robust to corruptions deployed in safety-
critical scenarios, we propose a meta-learning framework of
self-adaptive data augmentation to tackle the corruption ro-
bustness in CL. The proposed framework, MetaMix, learns to
augment and mix data, automatically transforming the new
task data or memory data. It directly optimizes the general-
ization performance against data corruptions during train-
ing. To evaluate the corruption robustness of our proposed
approach, we construct several CL corruption datasets with
different levels of severity. We perform comprehensive exper-
iments on both task- and class-continual learning. Extensive
experiments demonstrate the effectiveness of our proposed
method compared to SOTA baselines.
| 1. Introduction
Humans constantly acquire new information throughout
their lifespan and easily recognize information shifts such
as structure and style variations in images. Continual learn-
ing (CL) aims at imitating human’s ability to learn from
non-stationary data distributions without forgetting the previ-
ously learned knowledge. The past few years have witnessed
rapid progress in CL research [1, 30, 31, 36, 45]. Despite
the success, existing CL systems overlook the robustness
against unforeseen data shifts during testing. They assume
that training and test images for each task follow the same
distribution. However, as data distributions evolve and new
scenarios occur, test images often encounter various cor-
ruptions such as snow, blur, pixelation, and combinations,
resulting in a shifted distribution from the training set. For
example, Figure 1 shows various corruptions applied on one
image from Split-miniImageNet.
Data corruption can drastically impair the performance
of existing image recognition systems. A recent study [21]
shows that classification accuracy of various architectures
has dropped significantly on the ImageNet test set with some
simple and natural corruptions. Our empirical evaluation
results show that state-of-the-art CL models are even more
vulnerable to these corruptions during testing. For example,
the accuracy of DER++ [5] for task-continual learning de-
creases from 93.9% to 50.5% on split-CIFAR10; accuracy
drops to 10.6 % from 75.6 % on split-CIFAR100; accuracy
decreases to 9.8% from 61.3% on split-miniImageNet by
applying those common corruptions on test data of each CL
task. This severe issue makes existing CL models highly
unreliable in safety-critical applications. Thus, improving
the robustness of CL models to foster their trustworthiness
when deployed in real-world scenarios is essential.
Training a CL model robust to various corruptions is diffi-
cult due to the following challenges. 1) Unseen corruptions
could perturb the test set far beyond those encountered dur-
ing training. A model that naively augments training images
with seen corruptions cannot generalize to the new ones
during testing. Also, it is unrealistic to enumerate all possi-
ble corruptions during training since there are infinite types
of corruptions and their combinations. 2) With the ever-
evolving data distributions in CL, an effective augmentation
strategy learned on previous tasks may gradually become
less effective because the optimal augmentation strategies
are task-dependent and dynamically change over tasks [11].
Figure 1. Visualization of the original data (a) and five different corruption operations on the testing images of Split-miniImageNet: (b) brightness, (c) contrast, (d) defocus blur, (e) elastic, (f) fog.
Although we can adopt a memory buffer to store data from
previous tasks, augmenting and replaying them at later train-
ing iterations, the augmented memory may gradually be-
come less effective as the model could memorize the stored
information after replay runs. Recent approaches such as AugMix [22], which composes and combines multiple pre-defined augmentation operations with different depths and widths, show efficacy in improving robustness on traditional supervised classification tasks. However, these approaches are
not directly applicable to corruption-robust CL since they
only use a fixed random augmentation strategy that is often
not optimal for non-stationary data distributions in CL.
To address these unique challenges of corruption-
robustness in CL, we propose a temporally self-adaptive
Augmix within a meta-learning framework, named MetaMix.
It adaptively augments the memory buffer data and the cur-
rently received new data by learning to mix the augmentation
operations tailored to the evolving data distributions. In par-
ticular, our automatic self-adaptive MetaMix is a bi-level
optimization, simulating the evaluation process on unseen
corruptions. We randomly divide the training augmentation
operations into pseudo-seen and pseudo-unseen operations
at each CL step. The lower-level optimization is to optimize
the model performance on the pseudo-seen operations; the
upper-level optimization is to optimize the generalization to the pseudo-unseen operations. The augmentation strategy
is governed by an LSTM, which inputs context information
and outputs the corresponding mixing parameters for the
augmentations. The proposed MetaMix ensures the augmen-
tation strategy automatically adapts to non-stationary data
distribution. Furthermore, the objective is to optimize the
performance of the pseudo-unseen corruption operations,
which aligns with our goal during testing and encourages the
generalization to unseen corruptions.
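A heavily simplified sketch of this bi-level objective is given below (PyTorch 2.x assumed for torch.func.functional_call; a free mixing parameter stands in for the LSTM policy, and all names and the toy augmentation operators are ours, not the paper's implementation):

```python
# Heavily simplified bi-level sketch: adapt the model on pseudo-seen-augmented
# data (lower level), then ask the adapted model to do well on a pseudo-unseen
# augmentation (upper level), which supplies gradients for the mixing parameters.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

def metamix_step(model, mix_logits, images, labels, pseudo_seen, pseudo_unseen,
                 inner_lr=1e-2):
    # Lower level: a virtual SGD step on data mixed from pseudo-seen operations.
    w = torch.softmax(mix_logits, dim=0)
    mixed = sum(wi * op(images) for wi, op in zip(w, pseudo_seen))
    inner_loss = F.cross_entropy(model(mixed), labels)
    grads = torch.autograd.grad(inner_loss, list(model.parameters()),
                                create_graph=True)
    fast = {n: p - inner_lr * g
            for (n, p), g in zip(model.named_parameters(), grads)}

    # Upper level: evaluate the adapted weights on a held-out pseudo-unseen
    # operation; back-propagating this loss updates the mixing parameters.
    held_out = random.choice(pseudo_unseen)(images)
    outer_loss = F.cross_entropy(functional_call(model, fast, (held_out,)), labels)
    return inner_loss, outer_loss

# Toy usage with stand-in augmentation operators.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
mix_logits = torch.zeros(2, requires_grad=True)
seen_ops = [lambda x: x.flip(-1), lambda x: x + 0.05 * torch.randn_like(x)]
unseen_ops = [lambda x: x.roll(2, dims=-1)]
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
inner, outer = metamix_step(model, mix_logits, x, y, seen_ops, unseen_ops)
(inner + outer).backward()   # gradients flow to both the model and mix_logits
```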
To evaluate the corruption robustness of existing and the
proposed methods, we propose a new challenging benchmark
where various corruptions perturb the testing data of each CL
task. To facilitate future research, we construct several new
datasets, including split-CIFAR-10-C, split-CIFAR-100-C,
and split-miniImageNet-C. Extensive experiments on the
constructed benchmarks demonstrate the effectiveness of
our proposed MetaMix approach compared with several
SOTA data-augmentation approaches adapted for CL. We
summarize our contributions as follows:
• To our best knowledge, we are the first to study the
corruption-robustness of CL methods. Accordingly, we
propose the first set of novel benchmarks for evaluating
the corruption-robustness of existing CL methods and
moving towards trustworthy CL.
•We propose a self-adaptive augmentation method,
MetaMix, by learning to mix and augment the training
data of each CL task to achieve corruption-robustness
on unseen corruptions for each CL task.
•Our method is versatile and can be seamlessly inte-
grated with existing CL methods. Extensive experi-
ments with both task/class continual learning demon-
strate the effectiveness of MetaMix.
|
Wang_Accelerating_Vision-Language_Pretraining_With_Free_Language_Modeling_CVPR_2023 | Abstract
The state of the arts in vision-language pretraining
(VLP) achieves exemplary performance but suffers from
high training costs resulting from slow convergence and
long training time, especially on large-scale web datasets.
An essential obstacle to training efficiency lies in the entan-
gled prediction rate (percentage of tokens for reconstruc-
tion) and corruption rate (percentage of corrupted tokens)
in masked language modeling (MLM), that is, a proper cor-
ruption rate is achieved at the cost of a large portion of
output tokens being excluded from prediction loss. To ac-
celerate the convergence of VLP , we propose a new pre-
training task, namely, free language modeling (FLM), that
enables a 100% prediction rate with arbitrary corruption
rates. FLM successfully frees the prediction rate from the
tie-up with the corruption rate while allowing the corrup-
tion spans to be customized for each token to be predicted.
FLM-trained models are encouraged to learn better and
faster given the same GPU time by exploiting bidirectional
contexts more flexibly. Extensive experiments show FLM
could achieve an impressive 2.5×pretraining time reduc-
tion in comparison to the MLM-based methods, while keep-
ing competitive performance on both vision-language un-
derstanding and generation tasks. Code will be public
at https://github.com/TencentARC/FLM.
| 1. Introduction
Vision-language pretraining (VLP) has recently demon-
strated impressive performance on a handful of vision-
language tasks [7,10,14,18,19,22], e.g., visual question an-
swering, cross-modal retrieval, and image captioning. Sev-
eral factors are responsible for the success: the availabil-
ity of large-scale image-text datasets collected from the
web [30], high-capacity model architectures like Trans-
[Figure 1 plots: NLVR2 accuracy (%) vs. pretraining time (GPU days) for AR, MLM, and FLM; and NLVR2 accuracy / validation MLM loss vs. iterations (k) for MLM with corruption rate 0.4 and prediction rates 0.4, 0.2, and 0.1.]
Figure 1. (a) Large prediction rate accelerates training. Given
a fixed corruption rate, we vary the prediction rate by randomly
selecting a subset of output tokens for prediction loss. The learn-
ing rate schedule follows METER [10]. (b) The proposed FLM
achieves competitive performance compared with MLM mean-
while significantly accelerating the pretraining stage. The down-
stream performance on NLVR2[32] is reported. We show accu-
racy curves before convergence for better visualization.
former [34], and effective pretraining objectives for cross-
modal learning.
One of the dominant pretraining objectives is masked
language modeling (MLM), which was first introduced in
natural language processing [9] and has been applied to
vision-language areas in recent years [19]. MLM is a
generative pretraining task designed to reconstruct a few
(usually 40% for VLP) masked text tokens via reasoning
among the context of the remaining texts and the paired
image. While effective in capturing cross-modal interac-
tions, MLM-based methods [7,15,21] suffer from slow con-
vergence and long training time, especially for large-scale
models and noisy web data.
We argue that the limited prediction rate in MLM im-
pedes the convergence speed of pretraining, since keeping a proper corruption rate forces a large portion of the output tokens to be excluded from the prediction loss. As shown in Fig. 1 (top), under the same corruption rate, a larger prediction rate for MLM
results in faster convergence of validation loss and down-
stream performance. It is intuitive to set a prediction rate
of 100% to fully exploit text tokens. However, a paradox
emerges where a large prediction rate can only be achieved
with a greater corruption rate in MLM, but an extremely
large corruption rate leads to an extremely tough pretrain-
ing task that may cause training collapse.
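For reference, the masking logic behind the experiment in Fig. 1 (top) can be sketched as follows (our own PyTorch illustration; the tensor names and the mask token id are hypothetical). Because only corrupted positions can enter the loss, the prediction rate is capped by the corruption rate:

```python
# Sketch of MLM-style masking where the prediction set is a subset of the
# corrupted set (PyTorch assumed); names and the [MASK] id are illustrative only.
import torch

def mlm_masks(tokens, corruption_rate=0.4, prediction_rate=0.2, mask_id=103):
    scores = torch.rand(tokens.shape)
    corrupt = scores < corruption_rate                       # which tokens are masked out
    corrupted_tokens = tokens.masked_fill(corrupt, mask_id)  # model input
    # Only corrupted positions may contribute to the loss, so the effective
    # prediction rate can never exceed the corruption rate.
    predict = corrupt & (scores < prediction_rate)
    return corrupted_tokens, predict     # loss is computed where `predict` is True

tokens = torch.randint(1000, 30000, (2, 32))
inp, loss_mask = mlm_masks(tokens)
```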
Autoregressive language modeling (AR) provides a
workable solution to enable a 100% prediction rate. It pre-
dicts the next token given the observation of previous to-
kens. As shown in Fig. 1 (bottom), AR performs favor-
ably in training efficiency against MLM, i.e., 6.1×speed-up
for convergence. However, the converged performance by
AR is, unfortunately, much inferior to MLM. It is probably
caused by the sub-optimal unidirectional corruption pattern,
which is insufficient for downstream understanding tasks
that usually rely on bidirectional contexts.
A natural question arises: can we accelerate the conver-
gence of VLP by predicting 100% tokens like AR mean-
while achieving competitive performance with MLM? To-
wards this end, we introduce a new pretraining task, dubbed
free language modeling (FLM), for VLP, that enjoys an ex-
treme 100% prediction rate and flexible bidirectional con-
textualized representations. We for the first time break up
the entanglement between corruption and prediction rates,
making the two factors freely determined. Furthermore, for
each output token to be predicted, we allow independent and
arbitrary-length spans (from one to 100% tokens) as cor-
rupted connections. Rather than the suffix-like corruption
pattern as in AR (as well as PrefixLM [37]), the corruption
span of FLM is primarily distributed in the middle of the
sequence, establishing a flexible perception of bidirectional
contexts for better adaptation to VL understanding tasks.
The comparison between different pretraining objectives is
illustrated in Fig. 2.
To perform VLP with FLM, we propose an encode-
corrupt-predict framework, which performs feature encod-
ing once and reconstructs several corrupted versions of the
text sequence in parallel. In the encoding step, bidirec-
tional representations are achieved by learning forward and
reverse unidirectional representations respectively, the or-
der of which is manipulated by (reverse) causal masks in the same text Transformer. Subsequently, we ensure a 100% prediction rate by customizing corruption-prediction
tasks for predicting each input token. In each corruption-
prediction task, a span of corruption is randomly sampled
and attached to the encoded sequence, followed by a re-
constructor to solve the prediction task by reasoning among
the remaining contexts. Unlike previous works ( e.g., MLM,
AR) that adopt pre-encoding corruption, we inject corrup-
tions after one-time feature encoding, encouraging flexible
corruption patterns and efficient parallel prediction.
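A rough sketch of the corrupt-after-encoding step (our simplified PyTorch illustration, not the released FLM code) builds an independent corruption span for every target position and applies it as an attention mask in a lightweight reconstructor:

```python
# Sketch (PyTorch assumed): per-target corruption masks built *after* encoding.
# Every position gets its own prediction task, with an independently sampled
# span around it hidden from the reconstructor.
import torch

def sample_corruption_masks(seq_len, max_span=8):
    # mask[i, j] == True  ->  token j is hidden when reconstructing token i.
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    for i in range(seq_len):
        left = torch.randint(0, max_span, (1,)).item()
        right = torch.randint(0, max_span, (1,)).item()
        mask[i, max(0, i - left): min(seq_len, i + right + 1)] = True
        mask[i, i] = True                 # the target itself is always corrupted
    return mask

corruption = sample_corruption_masks(seq_len=16)
# `corruption` can be added (as -inf at True entries) to the attention logits of a
# lightweight reconstructor reading the once-encoded token features, so all 16
# prediction tasks (a 100% prediction rate) are solved in parallel.
```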
Our contributions are three-fold. (1) A novel pretraining
objective for VLP, namely, free language modeling (FLM),
is proposed to free the prediction rate from the constraints of
corruption rate, enabling an appealing 100% prediction rate
for accelerating convergence speed during pretraining. (2)
An encode-corrupt-predict framework built upon FLM ob-
jective is proposed, allowing efficient and effective learning
of a set of prediction tasks by merely conducting feature en-
coding once. (3) Extensive experiments on VQA, NLVR2,
image captioning, and image-text retrieval demonstrate the
effectiveness of our FLM, where comparable performances
to MLM are achieved with less than 50% pretraining time.
|
Wang_Turning_Strengths_Into_Weaknesses_A_Certified_Robustness_Inspired_Attack_Framework_CVPR_2023 | Abstract
Graph neural networks (GNNs) have achieved state-of-
the-art performance in many graph learning tasks. How-
ever, recent studies show that GNNs are vulnerable to both
test-time evasion and training-time poisoning attacks that
perturb the graph structure. While existing attack methods
have shown promising attack performance, we would like to
design an attack framework to further enhance the perfor-
mance. In particular, our attack framework is inspired by
certified robustness, which was originally used by defend-
ersto defend against adversarial attacks. We are the first,
from the attacker perspective, to leverage its properties to
better attack GNNs. Specifically, we first derive nodes’ cer-
tified perturbation sizes against graph evasion and poison-
ing attacks based on randomized smoothing, respectively.
A larger certified perturbation size of a node indicates this
node is theoretically more robust to graph perturbations.
Such a property motivates us to focus more on nodes with
smaller certified perturbation sizes, as they are easier to be
attacked after graph perturbations. Accordingly, we design
a certified robustness inspired attack loss, when incorpo-
rated into (any) existing attacks, produces our certified ro-
bustness inspired attack counterpart. We apply our frame-
work to the existing attacks and results show it can signifi-
cantly enhance the existing base attacks’ performance.
| 1. Introduction
Learning with graphs, such as social networks, citation
networks, chemical networks, has attracted significant at-
tention recently. Among many methods, graph neural net-
works (GNNs) [ 14,33,38,41,44] have achieved state-of-the-
art performance in graph related tasks such as node classi-
fication, graph classification, and link prediction. However,
recent studies [ 8,19,20,23,30,34,36,37,39,40,50,51] show
that GNNs are vulnerable to both test-time graph evasion
attacks and training-time graph poisoning attacks1. Take
GNNs for node classification as an instance, graph eva-
sion attacks mean that, given a learnt GNN model and a
(clean) graph, an attacker carefully perturbs the graph struc-
ture (i.e., inject new edges to or remove the existing edges
from the graph) such that as many testing nodes as possi-
ble are misclassified by the GNN model. Whereas, graph
poisoning attacks mean that, given a GNN algorithm and a
graph, an attacker carefully perturbs the graph structure in
the training phase, such that the learnt GNN model misclas-
sifies as many testing nodes as possible in the testing phase.
While existing methods have shown promising attack per-
formance, we want to ask: Can we design a general attack
framework that can further enhance both the existing graph
evasion and poisoning attacks to GNNs? The answer is yes.
We design an attack framework inspired by certified ro-
bustness. Certified robustness was originally used by de-
fenders to guarantee the robustness of classification mod-
els against evasion attacks. Generally speaking, a testing
example (e.g., an image or a node) with a better certified
robustness guarantee indicates this example is theoretically
more robust to adversarial (e.g., pixel or graph) perturba-
tions. While certified robustness is mainly derived for do-
ing the good, attackers, on the other hand, can also leverage
its property to do the bad. For instance, when an attacker
knows the certified robustness of nodes in a graph, he can use it to reveal the vulnerable regions of the graph and leverage this vulnerability to design better attacks. We are inspired by this prop-
erty of certified robustness and design the first certified ro-
bustness inspired attacks to GNNs.
Our attack framework consists of three parts: i) In-
spired by the state-of-the-art randomized smoothing based
certified robustness against evasion attacks to image mod-
els [7,28] and GNN models [ 35], we first propose to gen-
eralize randomized smoothing and derive the node’s
1 We mainly consider the graph structure attack in the paper, as it is
more effective than the feature attack. However, our attack framework can
be easily extended to the feature attack.
certified perturbation size against graph poisoning attacks to
GNNs. Particularly, a larger certified perturbation size of a
node indicates this node is theoretically more robust to ad-
versarial graph perturbations. In other words, an attacker
needs to perturb more edges during the training phase in
order to make this node wrongly predicted by the learnt
GNN model. This property inspires us to focus more on
disrupting nodes with relatively smaller certified perturba-
tion sizes under a given perturbation budget. ii) We design
a certified robustness inspired attack loss. Specifically, we
modify the classic node-wise loss by assigning each node
a weight based on its certified perturbation size—A node
with a larger/smaller certified perturbation size will be as-
signed a smaller/larger weight. In doing so, losses for nodes
with smaller certified perturbation sizes will be enlarged,
and most of the perturbation budget will be automatically
allocated to perturb these nodes. Thus, more nodes will be
misclassified with the given perturbation budget. iii) We
design the certified robustness inspired attack framework to
generate adversarial graph perturbations to GNNs, based on
our certified robustness inspired attack loss. We emphasize
that, as our new attack loss only modifies the existing attack
loss with certified perturbation size defined node weights,
any existing graph evasion or poisoning attack method can
be used as the base attack in our framework.
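To make the weighting idea concrete, a minimal sketch is given below (PyTorch assumed; the exponential weighting function and all names are our own illustration rather than the paper's exact formulation):

```python
# Sketch (PyTorch assumed): nodes with smaller certified perturbation sizes get
# larger weights in the attack loss, so the perturbation budget concentrates on
# nodes that are easier to flip.
import torch
import torch.nn.functional as F

def cr_inspired_attack_loss(logits, labels, certified_sizes, alpha=1.0):
    # Per-node cross-entropy that the attacker wants to *increase*.
    per_node = F.cross_entropy(logits, labels, reduction="none")
    weights = torch.exp(-alpha * certified_sizes.float())   # small size -> large weight
    weights = weights / weights.sum()
    return (weights * per_node).sum()

logits = torch.randn(100, 7)               # GNN predictions for 100 nodes, 7 classes
labels = torch.randint(0, 7, (100,))
certified = torch.randint(0, 10, (100,))   # certified perturbation size per node
loss = cr_inspired_attack_loss(logits, labels, certified)
# A base attack (e.g. a gradient-based structure attack) would maximize this loss
# with respect to the graph perturbation instead of the unweighted node-wise loss.
```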
We apply our certified robustness inspired attack frame-
work to the state-of-the-art graph evasion and poisoning
attacks [ 40,51] to GNNs. Evaluation results on multiple
benchmark datasets show our attack framework can sub-
stantially enhance the attack performance of the base at-
tacks. Our contributions are as follows:
•We propose a certified robustness inspired attack frame-
work to GNNs. Our framework can be plugged into any
existing graph evasion and poisoning attacks.
•To our best knowledge, we are the first work to use certi-
fied robustness for an attack purpose.
•Evaluation results validate the effectiveness of our attack
framework when applied to the existing attacks to GNNs.
|
Wu_MagicPony_Learning_Articulated_3D_Animals_in_the_Wild_CVPR_2023 | Abstract
We consider the problem of predicting the 3D shape, ar-
ticulation, viewpoint, texture, and lighting of an articulated
animal like a horse given a single test image as input. We
present a new method, dubbed MagicPony, that learns this
predictor purely from in-the-wild single-view images of the
object category, with minimal assumptions about the topol-
ogy of deformation. At its core is an implicit-explicit repre-
sentation of articulated shape and appearance, combining
the strengths of neural fields and meshes. In order to help
the model understand an object’s shape and pose, we distil
the knowledge captured by an off-the-shelf self-supervised
vision transformer and fuse it into the 3D model. To over-
come local optima in viewpoint estimation, we further in-
troduce a new viewpoint sampling scheme that comes at
no additional training cost. MagicPony outperforms prior
work on this challenging task and demonstrates excellent
generalisation in reconstructing art, despite the fact that it
is only trained on real images. The code can be found on the
project page at https://3dmagicpony.github.io/ .
| 1. Introduction
Reconstructing the 3D shape of an object from a sin-
gle image of it requires knowing a priori what are the
possible shapes and appearances of the object. Learning
such a prior usually requires ad-hoc data acquisition se-
tups [2, 20, 35, 36], involving at least multiple cameras, and
often laser scanners, domes and other hardware, not to men-
tion significant manual effort. This is viable for certain
types of objects such as humans that are of particular in-
terest in applications, but it is unlikely to scale to the long
tail of objects that can appear in natural images. The alter-
native is to learn a 3D prior from 2D images only, which are
available in abundance. However, this prior is highly com-
plex and must be learned while using it to reconstruct in 3D
the 2D training data, which is a major challenge.
In this paper, we propose MagicPony , a novel approach
to learning 3D models of articulated object categories such
as horses and birds with only single-view input images for
training. We leverage recent progress in unsupervised rep-
resentation learning, unsupervised image matching, effi-
cient implicit-explicit shape representations and neural ren-
dering, and devise a new auto-encoder architecture that re-
constructs the 3D shape, articulation and texture of each ob-
ject instance from a single image. For training, we only
require a 2D segmenter for the object category and a de-
scription of the topology and symmetry of its 3D skeleton
(i.e., the number and connectivity of bones). We do not
require a priori knowledge of the objects’ 3D shapes, keypoints, viewpoints, or of any other 2D or 3D cues which are
often used in prior work [14, 21, 22, 31]. From this, we
learn a function that, at test time, can estimate the shape
and texture of a new object from a single image, in a feed-
forward manner. The function exhibits remarkable gener-
alisation properties, including reconstructing objects in ab-
stract drawings , despite being trained on real images only.
In order to learn such disentangled 3D representations
simply from raw images, MagicPony addresses a few key
challenges. The first challenge is viewpoint estimation. Be-
fore the 3D model is available, it is very difficult to assign a
viewpoint to the objects in the training images. To tackle
this problem, prior works have often assumed additional
cues such as 2D keypoint correspondences [21, 22, 31]. In-
stead, we avoid requiring additional keypoint supervision
and implicitly infer noisy correspondences by fusing into
the 3D model knowledge distilled from DINO-ViT [5], a
self-supervised visual transformer network (ViT) [28]. We
also develop a new efficient disambiguation scheme that
explores multiple viewpoint assignment hypotheses at es-
sentially no cost, avoiding local optima that are caused by
greedily matching the noisy 2D correspondences.
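One simple way to realize such a multi-hypothesis scheme is sketched below (a generic PyTorch illustration under our own assumptions; `render` is a hypothetical differentiable renderer and the soft-minimum weighting is our choice, not necessarily MagicPony's exact formulation):

```python
# Sketch (PyTorch assumed): reconstruction losses are computed under several
# candidate viewpoints and combined with a soft minimum, so a single bad initial
# hypothesis does not trap the optimization in a local optimum.
import torch

def multi_hypothesis_loss(render, shape, image, viewpoints, tau=0.1):
    losses = torch.stack([(render(shape, v) - image).abs().mean() for v in viewpoints])
    weights = torch.softmax(-losses.detach() / tau, dim=0)   # favour better hypotheses
    return (weights * losses).sum(), losses.argmin()

# Toy usage with a fake differentiable "renderer" that just shifts the shape tensor.
shape = torch.randn(1, 3, 64, 64, requires_grad=True)
image = torch.randn(1, 3, 64, 64)
render = lambda s, v: torch.roll(s, shifts=int(v), dims=-1)
loss, best = multi_hypothesis_loss(render, shape, image, viewpoints=[0, 8, 16, 24])
loss.backward()
```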
The second challenge is how to represent the 3D shape,
appearance and deformations of the object. Most prior
works have used textured meshes [13,14,21,31,32,67,70],
but these are difficult to optimise from scratch, leading to
problems that often require ad-hoc heuristics such as re-
meshing [13, 70]. The other and increasingly popular ap-
proach is to use a volumetric representation such as a neu-
ral radiance field [3, 40, 52, 63], which can model complex
shapes, including manipulating their topology during train-
ing. However, this modelling freedom comes at the cost
of over-parametrisation, which is particularly problematic
in monocular reconstruction and often leads to meaning-
less short-cut solutions [63]. Furthermore, modelling ar-
ticulation with a volumetric representation is difficult. A
posing transformation is only defined for the object’s sur-
face and interior and is more easily expressed from the
canonical/pose-free space to the posed space. However,
rendering a radiance field requires transforming 3D points
off the object surface and in the direction opposite to the
posing transformation, which is hard [8].
We address these issues by using a hybrid volumetric-
mesh representation based on DMTet [42, 56]. Shape and
appearance are defined volumetrically in canonical space,
but a mesh is extracted on the fly for posing and rendering.
This sidesteps the challenges of using neural rendering di-
rectly while retaining most of its advantages and enabling
the use of powerful shape regularisers.
To summarise, we make the following contributions :
(1) A new 3D object learning framework that combines re-
cent advances in unsupervised learning, 3D representations
and neural rendering, achieving better reconstruction results
with less supervision; (2) An effective mechanism for fus-
ing self-supervised features from DINO-ViT into the 3D
model as a form of self-supervision; and (3) an efficient
multi-hypothesis viewpoint prediction scheme that avoids
local optima in reconstruction with no additional cost.
Table 1. Related Work Overview on Weakly-supervised Learn-
ing of 3D Objects. Supervision columns: template shape, viewpoint, 2D keypoint, object mask, optical flow, video. Output columns: 3D, 2.5D, Motion, View, Texture. Footnotes: 1 coarse template shape from keypoints; 2 camera estimated from keypoints using SfM; 3 outputs texture flow; 4 shape bases initialised from CMR. † UMR relies on part segmentations from SCOPS [18].
Method | Supervision | Output
Unsup3D [69] ✓ ✓ ✓
CSM [30] ✓ ✓ ✓
A-CSM [29] ✓ ✓ ✓ ✓
CMR [21] ( ✓)1(✓)2✓ ✓ ✓ ✓ (✓)3
U-CMR [14] ✓ ✓ ✓ ✓ ✓
UMR†[32] ✓ ✓ ✓ (✓)3
ACMR [31] ( ✓)4(✓)2✓ ✓ ✓ ✓ (✓)3
DOVE [67] ✓ ✓ ✓ ✓ ✓ ✓ ✓
Ours ✓ ✓ ✓ ✓ ✓
We compare our method to prior work on several chal-
lenging articulated animal categories and show that our
method obtains significantly better quantitative and quali-
tative results while using significantly less supervision, and
also demonstrate generalisation to abstract drawings.
|
Wang_Towards_Domain_Generalization_for_Multi-View_3D_Object_Detection_in_Bird-Eye-View_CVPR_2023 | Abstract
Multi-view 3D object detection (MV3D-Det) in Bird-
Eye-View (BEV) has drawn extensive attention due to its
low cost and high efficiency. Although new algorithms
for camera-only 3D object detection have been continu-
ously proposed, most of them may risk drastic performance
degradation when the domain of input images differs from
that of training. In this paper, we first analyze the causes
of the domain gap for the MV3D-Det task. Based on the
covariate shift assumption, we find that the gap mainly at-
tributes to the feature distribution of BEV , which is deter-
mined by the quality of both depth estimation and 2D im-
age’s feature representation. To acquire a robust depth pre-
diction, we propose to decouple the depth estimation from
the intrinsic parameters of the camera (i.e. the focal length)
through converting the prediction of metric depth to that
of scale-invariant depth and perform dynamic perspective
augmentation to increase the diversity of the extrinsic pa-
rameters (i.e. the camera poses) by utilizing homography.
Moreover, we modify the focal length values to create mul-
tiple pseudo-domains and construct an adversarial train-
ing loss to encourage the feature representation to be more
domain-agnostic. Without bells and whistles, our approach,
namely DG-BEV , successfully alleviates the performance
drop on the unseen target domain without impairing the
accuracy of the source domain. Extensive experiments on
Waymo, nuScenes, and Lyft, demonstrate the generalization
and effectiveness of our approach.
| 1. Introduction
3D object detection, aiming at localizing objects in the
3D space, is critical for various applications such as
(a) Baseline
(b) DG-BEV
Figure 1. Qualitative comparisons between BEVDepth and the
proposed DG-BEV . The red and blue bounding boxes represent
ground truth and detected results on the target domain respectively.
Depth-shift is shown in green arrows. Our approach can detect
correct 3D results on unknown domains.
autonomous driving [6, 38], robotic navigation [2], and vir-
tual reality [32], etc. Despite the remarkable progress of
LiDAR-based methods [17,30,33], camera-based 3D object
detection in Bird-Eye-View (BEV) [14, 19, 21] has drawn
increasing attention in recent years due to its rich semantic
information and low cost for deployment.
However, most of the detectors assume that the training
and testing data are obtained in the same domain, which can hardly be guaranteed in realistic scenarios. Thus, tremen-
dous performance degradation will appear when the domain
of the input image shifts. For example, nuScenes [3] and
Figure 2. Illustration of the difficulty in estimating depth based
on cameras with different focal lengths. O1 and O2 are the optical
centers of two cameras and C is the object being photographed. A
and B denote the imaging planes of the two cameras respectively
and the red parts show the size of the same object in their corre-
sponding image planes.
Waymo [34] are two popular benchmarks for 3D object de-
tection and their data collection devices are not identical,
i.e., both of the intrinsic and extrinsic parameters are differ-
ent. Empirical results presented in Fig. 1 show that detec-
tors trained on nuScenes have location bias when predicting
objects on the Waymo dataset.
Domain Generalization (DG) [8, 18, 26], aiming to learn
a model that generalizes well on unseen target domains,
can be a plausible solution to alleviate the bias mentioned
above. In the literature, DG has been widely explored for
2D vision tasks, e.g., image recognition [7, 16], object de-
tection [31, 44], and semantic segmentation [27, 41]. How-
ever, most of these works are designed for the case where multiple source domains are available, which is often infeasible due to the diversity of the real world in autonomous driving scenarios. Alternatively, one recent
work [39] proposed to study the single-domain generaliza-
tion for LiDAR-based detection. However, it is not tractable
to directly adapt this method to solve the camera-based de-
tection task due to the fundamental differences between the
characteristics of points and images. Therefore, developing
a general domain generalization framework for MV3D-Det
is still highly desirable.
In this paper, we theoretically analyze the causes of the
domain gap for MV3D-Det. Based on the covariate shift
assumption [4], we find that such a gap is mainly attributed to the feature distribution of BEV, which is determined by the
depth estimation and 2D image feature jointly. Based on
this, we propose DG-BEV , a domain generalization method
for MV3D-Det in BEV . Specifically, we first conduct a thor-
ough analysis of why the estimated depth becomes inaccu-
rate when the domain shifts and find that the key factor is that the intrinsic parameters of cameras used in various domains are hardly guaranteed to be identical (please refer to Fig. 2
for a better understanding). To alleviate this issue, we pro-
pose to decouple the depth estimation from the intrinsic pa-
rameters by converting the prediction of metric depth to that
of scale-invariant depth. On the other hand, extrinsic param-eters of cameras ( e.g. camera poses) also play an important
role in camera-based depth estimation, which is often ig-
nored in previous works. Instead, we introduce homogra-
phy learning to dynamically augment the image perspec-
tives by simultaneously adjusting the imagery data and the
camera pose.
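A minimal sketch of the focal-length decoupling (PyTorch assumed; the reference focal length and the exact normalization are illustrative choices of ours and may differ from the paper's):

```python
# Sketch: the network predicts a scale-invariant depth, and the metric depth is
# recovered with the ratio between the actual focal length and a fixed reference
# focal length, so the depth head no longer memorizes one camera's intrinsics.
import torch

def to_metric_depth(scale_invariant_depth, focal_length, reference_focal=800.0):
    # An object of fixed physical size appears larger under a longer focal length,
    # so the same image evidence corresponds to a proportionally larger depth.
    return scale_invariant_depth * focal_length / reference_focal

d_si = torch.rand(4, 1, 64, 176) * 60.0            # per-pixel prediction in canonical units
f = torch.tensor([1260.0, 1260.0, 2055.0, 2055.0])  # e.g. two different camera setups
d_metric = to_metric_depth(d_si, f.view(-1, 1, 1, 1))
```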
Moreover, since domain-agnostic feature representations
are favored for better generalization, we propose to build
up multiple pseudo-domains by modifying the focal length
values of camera intrinsic parameters in the source domain
and construct an adversarial training loss to further enhance
the quality of feature representations. In summary, the main
contributions of this paper are:
•We present a theoretical analysis on the causes of the
domain gap in MV3D-Det. Based on the covariate
shift assumption, we find the gap lies in the feature
distribution of BEV , which is determined by the depth
estimation and 2D image feature jointly.
•We propose DG-BEV , a domain generalization method
to alleviate the domain gap from both of the two per-
spectives mentioned above.
•Extensive experiments on various public datasets, in-
cluding Waymo, nuScenes, and Lyft, demonstrate the
generalization and effectiveness of our approach.
•To the best of our knowledge, this is the first systematic
study to explore a domain generalization method for
multi-view 3D object detectors.
|
Wang_Score_Jacobian_Chaining_Lifting_Pretrained_2D_Diffusion_Models_for_3D_CVPR_2023 | Abstract
A diffusion model learns to predict a vector field of gradi-
ents. We propose to apply chain rule on the learned gradients,
and back-propagate the score of a diffusion model through
the Jacobian of a differentiable renderer, which we instan-
tiate to be a voxel radiance field. This setup aggregates 2D
scores at multiple camera viewpoints into a 3D score, and re-
purposes a pretrained 2D model for 3D data generation. We
identify a technical challenge of distribution mismatch that
arises in this application, and propose a novel estimation
mechanism to resolve it. We run our algorithm on several off-
the-shelf diffusion image generative models, including the
recently released Stable Diffusion trained on the large-scale
LAION 5B dataset.
| 1. Introduction
We introduce a method that converts a pretrained 2D
diffusion generative model on images into a 3D generative
model of radiance fields, without requiring access to any
3D data. The key insight is to interpret diffusion models as
learned predictors of a gradient field, often referred to as the
score function of the data log-likelihood. We apply the chain
rule on the estimated score, hence the name Score Jacobian
Chaining (SJC).
Following Hyvärinen [16], the score is defined as the
gradient of the log-density function with respect to the data
(rather than parameter). Diffusion models of various fam-
ilies [13, 49, 50, 52] can all be interpreted [20, 23, 52] as modeling ∇x log pσ(x), i.e., the denoising score at noise level σ. For readability, we refer to the denoising score as the
score. Generating a sample from a diffusion model involves
repeated evaluations of the score function from large to small
σ level, so that a sample x gradually moves closer to the data manifold. It can be loosely interpreted as gradient descent, with precise control on the step sizes so that data distribution evolves to match the annealed σ level (ancestral sampler [13],
SDE and probability-flow ODE [ 52], etc.). While there are
other perspectives to a diffusion model [ 13,49], here we
are primarily motivated from the viewpoint that diffusion
models produce a gradient field.
A natural question to ask is whether the chain rule can be
applied to the learned gradients. Consider a diffusion model
on images. An image x may be parameterized by some
function f with parameters θ, i.e., x = f(θ). Applying the chain rule through the Jacobian ∂x/∂θ converts a gradient on image x into a gradient on the parameter θ. There are many
potential use cases for pairing a pretrained diffusion model
with different choices of f. In this work we are interested
in exploring the connection between 3D and multiview 2D
by choosing f to be a differentiable renderer, thus creating a
3D generative model using only pretrained 2D resources.
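A minimal sketch of one chained update (PyTorch assumed; `renderer` and `score_fn` are hypothetical stand-ins for a differentiable renderer and a pretrained diffusion model's score predictor):

```python
# Sketch: the denoiser's 2D score on the rendered image is back-propagated
# through the renderer's Jacobian onto the 3D parameters.
import torch

def score_jacobian_step(theta, camera, renderer, score_fn, sigma, lr=0.1):
    image = renderer(theta, camera)                 # x_pi = f(theta)
    with torch.no_grad():
        score = score_fn(image, sigma)              # approx. grad of log p(x) w.r.t. x
    # Chain rule: dlogp/dtheta = J^T * score, obtained by back-propagating `score`
    # through the rendering computation graph.
    grad_theta = torch.autograd.grad(image, theta, grad_outputs=score)[0]
    with torch.no_grad():
        theta += lr * grad_theta                    # ascend the data log-likelihood
    return theta

# Toy usage with stand-in functions (the real setup uses a voxel radiance field
# and a pretrained image diffusion model):
theta = torch.randn(32, 32, requires_grad=True)
renderer = lambda t, cam: torch.sigmoid(t + cam)
score_fn = lambda x, sigma: -x / (sigma ** 2)       # score of a toy Gaussian prior
theta = score_jacobian_step(theta, camera=torch.zeros(()), renderer=renderer,
                            score_fn=score_fn, sigma=0.5)
```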
Many prior works [ 2,58,60] perform 3D generative mod-
eling by training on 3D datasets [ 5,24,54,59]. This ap-
proach is often as challenging as it is format-ambiguous. In
addition to the high data acquisition cost of 3D assets [ 9],
there is no universal data format: point clouds, meshes, volu-
metric radiance field, etc, all have computational trade-offs.
What is common to these 3D assets is that they can be ren-
dered into 2D images. An inverse rendering system, or a
differentiable renderer [ 25,27,30,34,39], provides access
to the Jacobian Jπ ≜ ∂xπ/∂θ of a rendered image xπ at camera viewpoint π with respect to the underlying 3D parameterization θ. Our method uses differentiable rendering to aggregate
2D image gradients over multiple viewpoints into a 3D asset
gradient, and lifts a generative model from 2D to 3D. We
parameterize a 3D asset θ as a radiance field stored on voxels and choose f to be the volume rendering function.
A key technical challenge is that computing the 2D score
by directly evaluating a diffusion model on a rendered image
xπ leads to an out-of-distribution (OOD) problem. Gener-
ally, diffusion models are trained as denoisers and have only
seen noisy inputs during training. On the other hand, our
method requires evaluating the denoiser on non-noisy ren-
dered images from a 3D asset during optimization, and it
leads to the OOD problem. To address the issue, we propose
Perturb-and-Average Scoring , an approach to estimate the
score for non-noisy images.
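As we understand it, the estimate can be sketched as follows (a simplified PyTorch illustration with a toy stand-in for the diffusion model; the sample count and any reweighting in the actual method may differ):

```python
# Sketch: perturb the non-noisy rendered image with several noise samples at
# level sigma, evaluate the pretrained score/denoiser on each, and average the
# outputs to obtain a usable score estimate for the clean input.
import torch

def perturb_and_average_score(image, score_fn, sigma, n_samples=4):
    scores = []
    for _ in range(n_samples):
        noisy = image + sigma * torch.randn_like(image)   # move onto the noisy
        scores.append(score_fn(noisy, sigma))             # manifold the model expects
    return torch.stack(scores).mean(dim=0)

image = torch.rand(1, 3, 64, 64)
score_fn = lambda x, sigma: -(x - 0.5) / sigma ** 2       # toy stand-in for a diffusion model
score = perturb_and_average_score(image, score_fn, sigma=0.3)
```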
Empirically, we first validate the effectiveness of Perturb-
and-Average Scoring at solving the OOD problem and ex-
plore the hyperparameter choices on a simple 2D image can-
vas. Here we identify open problems on using unconditioned
diffusion models trained on FFHQ and LSUN Bedroom.
Next, we use Stable Diffusion, a model pretrained on the
web-scale LAION dataset to perform SJC for 3D generation,
as shown in Fig. 1. Our contributions are as follows:
•We propose a method for lifting a 2D diffusion model
to 3D via an application of the chain rule.
•We illustrate the challenge of OOD when using a
pretrained denoiser and propose Perturb-and-Average
Scoring to resolve it.
•We point out the subtleties and open problems on ap-
plying Perturb-and-Average Scoring as gradient for
optimization.
•We demonstrate the effectiveness of SJC for the task of
3D text-driven generation. |
Xu_Iterative_Geometry_Encoding_Volume_for_Stereo_Matching_CVPR_2023 | Abstract
Recurrent All-Pairs Field Transforms (RAFT) has shown
great potentials in matching tasks. However, all-pairs cor-
relations lack non-local geometry knowledge and have dif-
ficulties tackling local ambiguities in ill-posed regions. In
this paper, we propose Iterative Geometry Encoding Volume
(IGEV-Stereo), a new deep network architecture for stereo
matching. The proposed IGEV-Stereo builds a combined
geometry encoding volume that encodes geometry and con-
text information as well as local matching details, and itera-
tively indexes it to update the disparity map. To speed up the
convergence, we exploit GEV to regress an accurate starting
point for ConvGRUs iterations. Our IGEV-Stereo ranks 1st
on KITTI 2015 and 2012 (Reflective) among all published
methods and is the fastest among the top 10 methods. In
addition, IGEV-Stereo has strong cross-dataset generaliza-
tion as well as high inference efficiency. We also extend our
IGEV to multi-view stereo (MVS), i.e. IGEV-MVS, which
achieves competitive accuracy on DTU benchmark. Code
is available at https://github.com/gangweiX/IGEV.
| 1. Introduction
Inferring 3D scene geometry from captured images is
a fundamental task in computer vision and graphics with
applications ranging from 3D reconstruction, robotics and
autonomous driving. Stereo matching which aims to re-
construct dense 3D representations from two images with
calibrated cameras is a key technique for reconstructing 3D
scene geometry.
Many learning-based stereo methods [5, 17, 24, 47, 48]
have been proposed in the literature. The popular repre-
sentative is PSMNet [5] which apply a 3D convolutional
encoder-decoder to aggregate and regularize a 4D cost vol-
ume and then use soft argmin to regress the disparity map
from the regularized cost volume. Such 4D cost volume
filtering-based methods can effectively explore stereo ge-
ometry information and achieve impressive performance on
Figure 1. (a) Comparison with state-of-the-art stereo methods
[9, 21, 25, 43, 47, 59] on KITTI 2012 and 2015 leaderboards. (b)
Performance comparison with RAFT-Stereo [24] on Scene Flow
test set as the number of iterations changes.
several benchmarks. However, it usually demands a large
amount of 3D convolutions for cost aggregation and regu-
larization, and in turn yield high computational and memory
costs. As a result, it can hardly be applied to high-resolution
images and/or large-scale scenes.
Recently, iterative optimization-based methods [21, 24,
30, 39, 43] have exhibited attractive performance on both
high resolution images and standard benchmarks. Different
from existing methods, iterative methods bypass the com-
putationally expensive cost aggregation operations and pro-
gressively update the disparity map by repeatedly fetching
information from a high-resolution 4D cost volume. Such
solution enables the direct usage of high-resolution cost vol-
ume and hence is applicable to high-resolution images. For
instance, RAFT-Stereo [24] exploits a multi-level Convo-
lutional Gated Recurrent Units (ConvGRUs) [10] to recur-
rently update the disparity field using local cost values re-
trieved from all-pairs correlations (APC).
However, without cost aggregation the original cost vol-
ume lacks non-local geometry and context information (see
Fig. 2 (b)). As a result, existing iterative methods have
difficulties tackling local ambiguities in ill-posed regions,
such as occlusions, texture-less regions and repetitive struc-
tures. Even though, the ConvGRU-based updater can im-
prove the predicted disparities by incorporating context and
geometry information from context features and hidden lay-
Figure 2. (a) Input images from KITTI 2015. Illustration of (b) disparity regressed from All-pairs Correlations (APC) in RAFT-Stereo [24],
(c) disparity regressed from our Geometry Encoding Volume (GEV), (d) our final disparity. The APC lacks non-local geometry knowledge and thus has difficulties tackling local ambiguities in ill-posed regions. We take full advantage of cost filtering and iterative optimization: 1)
exploiting 3D CNN to filter cost volume and obtain the strong scene representation and the initial disparity with smooth edges, 2) exploiting
ConvGRUs to optimize the initial disparity to recover object edges and details.
ers, such limitation in the original cost volume greatly lim-
its the effectiveness of each iteration and in turn yields a
large amount of ConvGRUs iterations for satisfactory per-
formance.
We claim that cost filtering-based methods and itera-
tive optimization-based methods have complementary ad-
vantages and limitations. The former can encode sufficient
non-local geometry and context information in the cost vol-
ume which is essential for disparity prediction in particu-
lar in challenging regions. The latter can avoid high com-
putational and memory costs for 3D cost aggregation, yet
are less capable in ill-posed regions based only on all-pairs
correlations. To combine complementary advantages of the
two methods, we propose Iterative Geometry Encoding V ol-
ume (IGEV-Stereo), a new paradigm for stereo matching
(see Fig. 3). To address ambiguities caused by ill-posed
regions, we compute a Geometry Encoding V olume (GEV)
by aggregating and regularizing a cost volume using an ex-
tremely lightweight 3D regularization network. Compared
to all-pairs correlations of RAFT-Stereo [24], our GEV en-
codes more geometry and context of the scene after aggre-
gation, shown in Fig. 2 (c). A potential problem of GEV is
that it could suffer from over-smoothing at boundaries and
tiny details due to the 3D regularization network. To com-
plement local correlations, we combine the GEV and all-
pairs correlations to form a Combined Geometry Encoding
V olume (CGEV) and input the CGEV into the ConvGRU-
based update operator for iterative disparity optimization.
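For reference, the volume side of such a pipeline can be sketched as follows (a compact PyTorch illustration; the channel sizes, correlation form, and tiny 3D network are illustrative, not the paper's configuration):

```python
# Sketch: build a correlation cost volume, regularize it with a small 3D CNN into
# a geometry encoding volume, and regress an initial disparity via soft argmin.
import torch
import torch.nn as nn
import torch.nn.functional as F

def correlation_volume(feat_l, feat_r, max_disp):
    b, c, h, w = feat_l.shape
    vol = feat_l.new_zeros(b, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            vol[:, d] = (feat_l * feat_r).mean(1)
        else:
            vol[:, d, :, d:] = (feat_l[..., d:] * feat_r[..., :-d]).mean(1)
    return vol

regularizer = nn.Sequential(                 # stand-in for a lightweight 3D CNN
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 3, padding=1))

feat_l, feat_r = torch.randn(1, 32, 64, 128), torch.randn(1, 32, 64, 128)
cost = correlation_volume(feat_l, feat_r, max_disp=48)           # (B, D, H, W)
gev = regularizer(cost.unsqueeze(1)).squeeze(1)                  # regularized volume
prob = F.softmax(gev, dim=1)                                     # disparity distribution
disp_init = (prob * torch.arange(48.).view(1, -1, 1, 1)).sum(1)  # soft-argmin start
# `disp_init` seeds the recurrent updates, which then index both the regularized
# volume and the raw all-pairs correlations for local matching details.
```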
Our IGEV-Stereo outperforms RAFT-Stereo in terms of
both accuracy and efficiency. The performance gains come
from two aspects. First, our CGEV provides more compre-
hensive yet concise information for ConvGRUs to update,
yielding more effective optimization in each iteration and
in turn could significantly reduce the amount of ConvGRUs
iterations. As shown in Fig. 1, our method achieves even
smaller EPE (i.e., 0.58) using only 3 ConvGRUs iterations
(i.e.,100ms totally for inference) than RAFT-Stereo using
32 ConvGRUs iterations (i.e., EPE of 0.61 and 440ms for
inference). Second, our method regresses an initial disparity
map from the GEV via soft argmin which could providean accurate starting point for the ConvGRU-based update
operator, and in turn yield a fast convergence. In compari-
son, RAFT-Stereo starts disparity prediction from an initial
starting point d0 = 0, which demands a large number of Con-
vGRUs iterations to achieve an optimized result.
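The soft-argmin step can be made concrete with a short sketch. The snippet below is our own minimal illustration in PyTorch, not the released IGEV-Stereo code; the tensor shapes and the cost-to-probability convention are assumptions.

```python
# Minimal sketch (assumed shapes and sign convention, not the released code):
# regress an initial disparity map from a regularized cost volume via soft argmin.
import torch
import torch.nn.functional as F

def soft_argmin_disparity(cost_volume: torch.Tensor) -> torch.Tensor:
    """cost_volume: (B, D, H, W) matching costs over D disparity hypotheses.
    Returns (B, 1, H, W): the probability-weighted mean disparity."""
    D = cost_volume.size(1)
    prob = F.softmax(-cost_volume, dim=1)  # lower cost -> higher probability
    disp_values = torch.arange(D, device=cost_volume.device, dtype=prob.dtype)
    return (prob * disp_values.view(1, D, 1, 1)).sum(dim=1, keepdim=True)
```

Such an estimate gives the iterative updater a warm start instead of an all-zero initialization.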
We demonstrate the efficiency and effectiveness of our
method on several stereo benchmarks. Our IGEV-Stereo
achieves the state-of-the-art EPE of 0.47 on Scene Flow
[31] and ranks 1st on KITTI 2015 [32] and 2012 (Re-
flective) [15] leaderboards among all the published meth-
ods. Regarding the inference speed, our IGEV-Stereo is
the fastest among the top 10 methods on KITTI leader-
boards. IGEV-Stereo also exhibits better cross-dataset gen-
eralization ability than most existing stereo networks. When
trained only on synthetic data Scene Flow, our IGEV-Stereo
performs very well on real datasets Middlebury [34] and
ETH3D [35]. We also extend our IGEV to MVS, i.e. IGEV-
MVS, which achieves competitive accuracy on DTU [1].
|
Wang_AutoRecon_Automated_3D_Object_Discovery_and_Reconstruction_CVPR_2023 | Abstract
A fully automated object reconstruction pipeline is cru-
cial for digital content creation. While the area of 3D recon-
struction has witnessed profound developments, the removal
of background to obtain a clean object model still relies
on different forms of manual labor, such as bounding box
labeling, mask annotations, and mesh manipulations. In
this paper, we propose a novel framework named AutoRe-
con for the automated discovery and reconstruction of an
object from multi-view images. We demonstrate that fore-
ground objects can be robustly located and segmented from
SfM point clouds by leveraging self-supervised 2D vision
transformer features. Then, we reconstruct decomposed neu-
ral scene representations with dense supervision provided
by the decomposed point clouds, resulting in accurate ob-
ject reconstruction and segmentation. Experiments on the
DTU, BlendedMVS and CO3D-V2 datasets demonstrate the
effectiveness and robustness of AutoRecon. The code and
supplementary material are available on the project page:
https://zju3dv.github.io/autorecon/ .
| 1. Introduction
3D object reconstruction has long been investigated in
computer vision. In this work, we focus on the specific set-
ting of reconstructing a salient foreground object from multi-
view images and automatically segmenting the object from
the background without any annotation, which enables scal-
able 3D content creation for VR/AR and may open up the
possibility to generate free 2D and 3D object annotations at
a large scale for supervised-learning tasks.
Traditional multi-view stereo [8, 32] and recent neural
scene reconstruction methods [40, 46] have attained impres-
sive reconstruction quality. However, these methods cannot
identify objects and the reconstructed object models are typ-
ically coupled with the surrounding background.
The authors are affiliated with the ZJU-SenseTime Joint Lab of 3D Vision. †Corresponding author: Xiaowei Zhou.
Figure 1. Overview of our fully-automated pipeline and results. Given an object-centric video, we achieve coarse decomposition by segmenting the salient foreground object from a semi-dense SfM point cloud, with pointwise-aggregated 2D DINO features [3]. Then we train a decomposed neural scene representation from multi-view images with the help of coarse decomposition results to reconstruct foreground objects and render multi-view consistent high-quality foreground masks.
A straightforward solution is utilizing the foreground object masks to
obtain clean foreground object models. However, accurate
2D object masks are expensive to annotate, and salient ob-
ject segmentation techniques [21, 34, 41] generally produce
masks with limited granularity, thus degrading the recon-
struction quality, especially for objects with thin structures.
Recently, some methods [23,30,50] attempt to automatically
decompose objects from 3D scenes given minimal human
annotations, such as 3D object bounding boxes, scribbles
or pixel labels. But the requirement of manual annotations
limits the feasibility of more scalable 3D content creation.
In this paper, we propose a novel two-stage framework for
the fully-automated 3D reconstruction of salient objects, as
illustrated in Fig. 1. We first perform coarse decomposition to
automatically segment the foreground SfM point cloud, and
then reconstruct the foreground object geometry by learning
an implicit neural scene representation under explicit super-
vision from the coarse decomposition. The key idea of our
coarse decomposition is to leverage the semantic features
provided by a self-supervised 2D Vision Transformer (ViT)
[3]. Specifically, we aggregate multi-view ViT features from
input images to the SfM point cloud and then segment salient
foreground points with a point cloud segmentation Trans-
former. To train the Transformer on large-scale unlabeled
data, we devise a pseudo-ground-truth generation pipeline
based on Normalized Cut [33] and show its ability to pro-
duce accurate segmentations and 3D bounding boxes upon
training. For object reconstruction, we learn a neural scene
representation within the estimated foreground bounding
box from multi-view images. Our main idea is to reconstruct
a decomposed scene representation with the help of explicit
regularization provided by the previously decomposed point
cloud. Finally, we can extract a clean object model and obtain
high-quality object masks with foreground-only rendering.
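To illustrate the pointwise aggregation of 2D ViT features onto the SfM point cloud, a rough sketch is given below; the shapes, camera conventions, and nearest-neighbour sampling are our assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed shapes and camera conventions, not the authors' code):
# fuse multi-view 2D ViT features onto SfM points by projecting each point into
# the views that observe it and averaging the sampled features.
import numpy as np

def aggregate_point_features(points, feat_maps, K, w2c, visibility, image_hw):
    """points: (N,3); feat_maps: (V,C,Hf,Wf); K: (3,3) intrinsics;
    w2c: (V,4,4) world-to-camera; visibility: (N,V) bool; image_hw: (H,W)."""
    N, V = visibility.shape
    _, C, Hf, Wf = feat_maps.shape
    H, W = image_hw
    acc, cnt = np.zeros((N, C)), np.zeros((N, 1))
    for v in range(V):
        cam = w2c[v, :3, :3] @ points.T + w2c[v, :3, 3:4]     # (3, N) camera coords
        uvw = K @ cam
        u = uvw[0] / np.maximum(uvw[2], 1e-6)
        vv = uvw[1] / np.maximum(uvw[2], 1e-6)
        fu = np.clip((u / W * Wf).astype(int), 0, Wf - 1)     # nearest cell on the
        fv = np.clip((vv / H * Hf).astype(int), 0, Hf - 1)    # coarser feature grid
        mask = visibility[:, v] & (uvw[2] > 0)
        acc[mask] += feat_maps[v][:, fv[mask], fu[mask]].T
        cnt[mask] += 1
    return acc / np.maximum(cnt, 1)                           # (N, C) pointwise features
```

The resulting per-point features are what the point cloud segmentation Transformer would consume.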
We conduct experiments on the CO3D [29], Blended-
MVS [45], and DTU [12] datasets to validate the effective-
ness of the proposed pipeline. The experimental results show
that our approach can automatically and robustly recover
accurate 3D object models and high-quality segmentation
masks from RGB videos, even with cluttered backgrounds.
In summary, we make the following contributions:
•We propose a fully-automated framework for recon-
structing background-free object models from multi-
view images without any annotation.
•We propose a coarse-to-fine pipeline for scene decom-
position by first decomposing the scene in the form of
an SfM point cloud, which then guides the decomposi-
tion of a neural scene representation.
•We propose an SfM point cloud segmentation Trans-
former and devise an unsupervised pseudo-ground-truth
generation pipeline for its training.
•We demonstrate the possibility of automatically creat-
ing object datasets with 3D models, 3D bounding boxes,
and 2D segmentation masks.
|
Wei_Joint_Token_Pruning_and_Squeezing_Towards_More_Aggressive_Compression_of_CVPR_2023 | Abstract
Although vision transformers (ViTs) have shown promis-
ing results in various computer vision tasks recently, their
high computational cost limits their practical applications.
Previous approaches that prune redundant tokens have
demonstrated a good trade-off between performance and
computation costs. Nevertheless, errors caused by prun-
ing strategies can lead to significant information loss. Our
quantitative experiments reveal that the impact of pruned
tokens on performance is noticeable. To address
this issue, we propose a novel joint Token Pruning &
Squeezing module (TPS) for compressing vision transform-
ers with higher efficiency. Firstly, TPS adopts pruning to get
the reserved and pruned subsets. Secondly, TPS squeezes
the information of pruned tokens into partial reserved to-
kens via the unidirectional nearest-neighbor matching and
similarity-based fusing steps. Compared to state-of-the-
art methods, our approach outperforms them under all to-
ken pruning intensities. Especially while shrinking DeiT-
tiny&small computational budgets to 35%, it improves the
accuracy by 1%-6% compared with baselines on ImageNet
classification. The proposed method can accelerate the
throughput of DeiT-small beyond DeiT-tiny, while its accu-
racy surpasses DeiT-tiny by 4.78%. Experiments on various
transformers demonstrate the effectiveness of our method,
while analysis experiments prove our higher robustness to
the errors of the token pruning policy. Code is available at
https://github.com/megvii-research/TPS-
CVPR2023 .
| 1. Introduction
The transformer architecture has become popular for var-
ious natural language processing (NLP) tasks, and its im-
proved variants have been adopted for many vision tasks.
Vision transformers (ViTs) [5] leverage the long-range dependencies of self-attention mechanisms to achieve excellent performance, often surpassing that of CNNs.
*The first two authors contributed equally to this work. †Corresponding author.
Figure 1. Comparisons between the token pruning paradigm [25] (the 2nd row) and our joint Token Pruning & Squeezing (the 3rd row). (Two examples, labeled Lawn Mower and Baseball; rows: Input Image, Token Pruning, Ours. Token pruning predicts Folding Chair ✗ and Rugby Ball ✗; ours predicts Lawn Mower ✓ and Baseball ✓.) The context information, such as the sod in the examples, is helpful for prediction but is discarded. Our method remits the information loss by squeezing the pruned tokens into reserved ones instead of naively dropping them, as indicated by the stacked patches. By this design, we could apply more aggressive token pruning with less performance drop. The example results are from ImageNet1K [4], and we reduce the actual patch grid from 14×14 to 7×7 for visualization clarity.
In ad-
dition to the vanilla ViT architecture, recent studies [17,
31, 33] have explored hybrid ViT designs incorporating
convolution layers and multi-scale architectures. Despite
their excellent performance, transformers still require rel-
atively high computational budgets. This is due to the
quadratic computation and memory costs associated with
token length. To address this issue, contemporary ap-
proaches [8, 14, 16, 21, 25, 27, 35, 36] propose pruning re-
dundant tokens. They trade acceptable performance degra-
dation for a more cost-effective model. Knowledge distil-
lation [11] and other techniques can further mitigate the re-
sulting performance drop.
However, a steep drop in performance is inevitable as token pruning intensifies further, because both essential subject information and auxiliary context information drop significantly, especially when the number of reserved tokens is close to or below 10.
subject and background context loss, causing the wrong pre-
diction, as shown in Fig. 1. Specifically, the background
tokens containing sod help recognize the input image as a
lawn mower rather than a folding chair. Meanwhile, miss-
ing subject tokens make the baseball indistinguishable from
a rugby ball. To regain adequate information from pruned
tokens, EViT [16] and Evo-ViT [35] propose aggregating
pruned tokens as one, as shown in Fig. 2 (b). Still, they ne-
glect the discrepancy among these tokens, leading to feature
collapse and hindering more aggressive token pruning.
Towards more aggressive pruning, we argue that infor-
mation in pruned tokens deserves better treatment. We conducted a toy experiment to see what accuracy token pruning could achieve if the pruning policy in the first pruned transformer block were reversed, as Fig. 3 shows. Taking DynamicViT [25] as a case study, the reversed policy still brings extra accuracy that is complementary to the original one (denoted as bonus accuracy). Moreover, this phenomenon becomes more significant as pruning continues (red line in Fig. 3).
To conserve the information from the pruned tokens,
we propose a Joint Token Pruning & Squeezing (TPS)
module to accommodate more aggressive compression of
ViTs. TPS module utilizes a feature dispatch mechanism
that squeezes essential features from pruned tokens into re-
served ones, as shown in Fig. 2 (c). Firstly, based on the
scoring result, the TPS module divides input tokens into two
complementary subsets: the reserved and pruned sets. Sec-
ondly, instead of discarding or collapsing tokens from the
pruned set into a single one, we employ a unidirectional
nearest-neighbor matching algorithm to dispatch each of
them independently to the associated reserved token dubbed
as the host token. This design reduces information loss
without sacrificing computational efficiency. Subsequently,
we apply a similarity-based fusing way to squeeze the fea-
tures of matched pruned tokens into corresponding host to-
kens while the non-selected reserved tokens remain identi-
cal. This design reduces the context information loss while
retaining a reasonable computation budget. We can easily
achieve hardware-friendly constant shape inference when
fixing the cardinality of the reserved token set. Furthermore,
we introduce two flexible variants: the inter-block version
dTPS and the intra-block version eTPS, which are essen-
tially plug-and-play blocks for both vanilla ViTs and hybrid
ViTs.
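A hedged sketch of the prune-and-squeeze step is given below; the exact similarity measure and fusing weights are our assumptions based on this description, not the released TPS code.

```python
# Minimal sketch (our reading of the described steps, not the released TPS code):
# squeeze pruned tokens into their nearest reserved "host" tokens.
import torch
import torch.nn.functional as F

def token_prune_and_squeeze(tokens: torch.Tensor, scores: torch.Tensor, keep: int):
    """tokens: (B, N, C); scores: (B, N) importance scores; keep: #reserved tokens."""
    idx = scores.argsort(dim=1, descending=True)
    gather = lambda i: tokens.gather(1, i.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
    reserved, pruned = gather(idx[:, :keep]), gather(idx[:, keep:])
    # unidirectional nearest-neighbour matching: each pruned token picks one host
    sim = F.normalize(pruned, dim=-1) @ F.normalize(reserved, dim=-1).transpose(1, 2)
    host = sim.argmax(dim=-1)                                  # (B, N-keep)
    w = sim.max(dim=-1).values.clamp_min(0).unsqueeze(-1)      # similarity-based weight
    out, num = reserved.clone(), torch.ones_like(reserved[..., :1])
    out.scatter_add_(1, host.unsqueeze(-1).expand_as(pruned), w * pruned)
    num.scatter_add_(1, host.unsqueeze(-1).expand(-1, -1, 1), w)
    return out / num                                           # (B, keep, C): constant shape
```

Hosts that receive no pruned token keep num = 1 and are returned unchanged, which matches the constant-shape, hardware-friendly inference mentioned above.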
We conduct extensive experiments on two datasets: Im-
ageNet1K [4] and large fine-grained dataset iNaturalist
2019 [29] to prove our efficiency, flexibility, and robustness. Firstly, experiments under different token pruning settings
demonstrate the superior performance of our TPS while op-
erating more aggressive compression compared with token
pruning [25] and token reorganization [16]; further compar-
isons with state-of-the-art transformers [8,13,20,28,31,36,
39, 40] show our promising efficiency. Secondly, we man-
ifest the flexibility of our TPS by integrating it into popu-
lar ViTs, including both vanilla ViTs and hybrid ViTs. Fi-
nally, the evaluations under the random token selection pol-
icy confirm the higher robustness of our TPS.
Overall, our contributions are summarized as follows:
• We propose the joint Token Pruning & Squeezing
(TPS) and its two variants: dTPS and eTPS, to con-
serve the information of discarded tokens and facilitate
more aggressive compression of vision transformers.
• Extensive experiments demonstrate our higher perfor-
mance compared with prior approaches. Especially
while compressing GFLOPs of DeiT-small&tiny to
35%, our TPS outperforms baselines with accuracy
improvements of 1%-6%.
• Broadest experiments applying our method to vanilla
ViTs and hybrid ViTs show our flexibility, while the
analysis experiments prove that our TPS is more robust
than token pruning and token reorganization.
|
van_Amsterdam_ASPnet_Action_Segmentation_With_Shared-Private_Representation_of_Multiple_Data_Sources_CVPR_2023 | Abstract
Most state-of-the-art methods for action segmentation
are based on single input modalities or na ¨ıve fusion of mul-
tiple data sources. However, effective fusion of comple-
mentary information can potentially strengthen segmenta-
tion models and make them more robust to sensor noise
and more accurate with smaller training datasets. In order
to improve multimodal representation learning for action
segmentation, we propose to disentangle hidden features
of a multi-stream segmentation model into modality-shared
components, containing common information across data
sources, and private components; we then use an attention
bottleneck to capture long-range temporal dependencies in
the data while preserving disentanglement in consecutive
processing layers. Evaluation on 50salads, Breakfast and
RARP45 datasets shows that our multimodal approach out-
performs different data fusion baselines on both multiview
and multimodal data sources, obtaining competitive or bet-
ter results compared with the state-of-the-art. Our model is
also more robust to additive sensor noise and can achieve
performance on par with strong video baselines even with
less training data.
| 1. Introduction
Action segmentation is the task of predicting which ac-
tion is occurring at each frame in untrimmed videos of com-
plex and semantically structured human activities [18, 32].
While conventional methods for human action understand-
ing focus on classification of short video clips [6, 27, 34],
action segmentation models have to learn the semantics of
all action classes as well as their temporal boundaries and contextual relations, which is challenging and requires the design of efficient strategies to capture long range temporal information and inter-action correlations.
This research was funded in part by the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) [203145/Z/16/Z]; the Engineering and Physical Sciences Research Council (EPSRC) [EP/P012841/1]; and the Royal Academy of Engineering Chair in Emerging Technologies Scheme. For the purpose of open access, the author has applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission.
Figure 1. Different paradigms for multi-source data fusion via (a) early fusion, (b) disentanglement of modality-shared and modality-specific representations (our model) and (c) late fusion; (d) Example from 50salads highlighting shared and private information that can be extracted from video and accelerometer data. While both modalities can detect the activation of relevant tools and common motion cues, RGB videos additionally capture fundamental details about objects without acceleration sensors and their state (e.g. chopped tomatoes), the overall spatial configuration and the localization of motion in the scene. Accelerometer signals, on the other hand, contain explicit and complementary information about 3D fine motion patterns of activated objects and their co-occurrence. In the presence of noise (e.g. video occlusions) or other variability factors, some shared attributes could become part of the private space of the uncorrupted modality.
Recent methods for action segmentation input pre-
computed low-dimensional visual features [6, 11] into dif-
ferent long-range temporal processing units, such as tem-
poral convolutions [11, 19], temporal self-attention [38, 45]
or graph neural networks [46]. While these methods utilize
only video data, recent computer vision datasets have in-
creasing availability of multiple synchronized data sources
[9,24,32], some of which could be collected readily in real-
case scenarios ( e.g. audio recordings [9], teleoperated robot
kinematics [35]). Effective fusion of different data modali-
ties or different ‘views’ of the same modality (here we use
the term ‘view’ to denote any different representation of the
same data source) is not trivial and still a very active area
of research, as potential advantages include higher recogni-
tion performance, improved robustness to sensor noise and
mitigating the need for large training datasets [3].
Action segmentation with multiple data sources has not
been investigated as extensively as similar tasks like action
classification. It has generally been addressed via na ¨ıve
fusion strategies such as multimodal feature concatenation
[5, 45] and prediction fusion [37], or limited to the fea-
ture encoding stage [22]. However, sensor fusion can also
benefit from long-range temporal modelling performed in
later stages. Inspired by work on multimodal representa-
tion learning [5, 20], we approach the problem implement-
ing a multi-stream action segmentation model, one stream
for each available data source, and disentangling their latent
space into modality-shared versus modality-specific repre-
sentations (Fig. 1b and 1d), aiming at learning more dis-
criminative features and more robust action recognition.
We assume that creating a shared feature space across data
sources produces more abstract action representations and
reduces over-fitting to modality-specific nuances and noise,
while private features could retain useful complementary
information for the downstream task. Instead of relying
on adversarial mechanisms [41], autoencoders [5, 20] or
generative approaches [20], we learn shared feature spaces
with minimal model modification by minimizing Maxi-
mum Mean Discrepancy (MMD) on partitions of the la-
tent spaces to reduce the distance between their distribu-
tions. In order to capture long-range temporal dependen-
cies in the data while preserving feature disentanglement in
consecutive processing layers, an attention bottleneck [26]
is then integrated into the segmentation model and initial-
ized with learned modality-shared features, allowing inde-
pendent processing of all private features. We called the
model ASPnet (Action Shared-Private network).
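As an illustration of the shared-space constraint, the sketch below gives an RBF-kernel MMD loss between the modality-shared partitions; the kernel choice, bandwidth, and loss weighting are our assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumed kernel and weighting, not the authors' code): an
# RBF-kernel MMD loss pulling the two modality-shared feature partitions together.
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """x, y: (N, D) shared features from two data sources."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# e.g. total loss = segmentation loss + lambda_mmd * rbf_mmd(shared_rgb, shared_accel),
# applied to the shared partitions only, leaving the private partitions unconstrained.
```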
Evaluation results of our model on three challeng-
ing benchmark datasets show improvement over unimodal
baselines and different fusion strategies using both multi-
modal ( e.g. video and accelerometer) and multiview ( e.g.
RGB and optical flow) inputs, leading to competitive or
better results than the state-of-the-art. In addition, results
suggest that ASPnet could generalize well to multiple data
sources, improving its performance with growing number
of inputs. Despite requiring synchronized recordings of
multiple sensors, we demonstrated that our model is also
more robust to additive input noise and can match the per-formance of strong video baselines with less data. In sum-
mary, our contributions are the following:
• We present ASPnet, a new multi-source activity recog-
nition model to effectively exploit shared and com-
plementary information contained in multiple data
sources for robust action segmentation. ASPnet par-
titions the latent representation of each modality and
exploits a bottleneck mechanism to allow feature inter-
action at multiple levels of abstraction while preserv-
ing disentanglement. Additionally, modality fusion is
influenced by long-range temporal dynamics captured
at different scales.
• We show the advantage of feature disentanglement to
fuse not only multimodal data, but also multiple repre-
sentations of the same modality.
• We perform extensive ablation studies to evaluate
ASPnet against strong baselines, different levels of
noise and less training data.
• We evaluate ASPnet on three challenging benchmark
datasets and achieve competitive or better results than
state-of-the-art models.
|
Wan_Learning_Neural_Duplex_Radiance_Fields_for_Real-Time_View_Synthesis_CVPR_2023 | Abstract
Neural radiance fields (NeRFs) enable novel-view synthe-
sis with unprecedented visual quality. However, to render
photorealistic images, NeRFs require hundreds of deep mul-
tilayer perceptron (MLP) evaluations – for each pixel. This
is prohibitively expensive and makes real-time rendering
infeasible, even on powerful modern GPUs. In this paper,
we propose a novel approach to distill and bake NeRFs into
highly efficient mesh-based neural representations that are
fully compatible with the massively parallel graphics render-
ing pipeline. We represent scenes as neural radiance features
encoded on a two-layer duplex mesh, which effectively over-
comes the inherent inaccuracies in 3D surface reconstruc-
tion by learning the aggregated radiance information from a
reliable interval of ray-surface intersections. To exploit local
geometric relationships of nearby pixels, we leverage screen-
space convolutions instead of the MLPs used in NeRFs to
achieve high-quality appearance. Finally, the performance
of the whole framework is further boosted by a novel multi-
view distillation optimization strategy. We demonstrate the
effectiveness and superiority of our approach via extensive
experiments on a range of standard datasets.
| 1. Introduction
Reconstructing 3D scenes by a representation that can be ren-
dered from unobserved viewpoints using only a few posed
* Corresponding author.
images has been a long-standing goal in the computer graph-
ics and computer vision communities. Significant progress
has recently been achieved by neural radiance fields (NeRFs)
[26], which are capable of generating photorealistic novel
views and modeling view-dependent effects such as specu-
lar reflections. In particular, a radiance field is a volumetric
function parameterized by MLPs that estimates density and
emitted radiance at sampled 3D locations in a given direction.
Differentiable volume rendering then allows the optimization
of this function by minimizing the photometric discrepancy
between the real observed color and the rendered color.
Despite the unprecedented success and enormous practi-
cal potential of NeRF and its various extensions [ 3,6,57],
an inescapable problem is the high computational cost of
rendering novel views. For instance, even using a powerful
modern GPU, NeRF requires about 30 seconds to render a
single image with 800 ×800 pixels, which prevents its use
for interactive applications in virtual and augmented real-
ity. On the other hand, the rapid development of NeRF has
spawned abundant follow-up works that focus on optimiza-
tion acceleration [ 18,28,54], generalization [ 5,47,53], and
enabled different downstream tasks, including 3D styliza-
tion [ 11,16,30], editing [ 24,50,55,56], or even perception
[10,17]. Thus, a generalized method that can learn and ex-
tract a real-time renderable representation given an arbitrary
pretrained NeRF-based model, while maintaining high ren-
dering quality, is highly desirable.
The huge computational cost of rendering a NeRF rep-
resentation mainly comes from two aspects: (1) For each
individual pixel, NeRF requires sampling hundreds of loca-
tions along the corresponding ray, querying density and then
accumulating radiance using volume rendering; and (2) A
large model size is required to represent the geometric details
of complex scenes well, so the MLPs used in NeRF architec-
ture are relatively deep and wide, which incurs significant
computation for the evaluation of each point sample.
Figure 2. NeRF surface.
In this paper, we present an ap-
proach that achieves high-fidelity
real-time view synthesis by ad-
dressing these issues. To avoid the
dense sampling along each ray, one
solution is to generate a geome-
try proxy from a pretrained NeRF
model, e.g., via marching cubes
[25]. By exploiting the highly-
optimized graphics pipeline, we
could almost instantly obtain the
sample location for each ray. How-
ever, due to inaccuracies in the den-
sity field around surfaces, the extracted mesh may not faith-
fully represent the true underlying geometry and can contain
artifacts, as shown in Figure 2. Dense local sampling could
alleviate these errors to some degree, but cannot handle miss-
ing geometries or occlusions. An alternative approach is to
use rasterization for fast neural rendering [ 1,37,38,43] by
directly baking neural features onto the surface of explicit
the geometry or point cloud. This is usually accompanied
by a deep CNN to translate rasterized features to colors and
learn to resolve the existing errors, which is expensive to
evaluate and can prevent real-time rendering.
To significantly reduce the sampled numbers while effi-
ciently handling the geometric errors, our firstkey technical
innovation is to infer the final RGB color according to the
radiance of duplex points along the ray. As illustrated in
Figure 3, NeRFs represent a continuous density distribution
along each ray. While it will generally be difficult to deter-
mine the exact location of a surface along the ray, it will be
easier to extract a reliable interval that contributes the most
to the final prediction by using an under- and an overesti-
mation of the geometry. Motivated by this idea, instead of
selecting a specific location for appearance calculation, or
performing expensive dense volume rendering in this inter-
val, we represent the scene using learnable features at the
two intersection points of a ray with the duplex geometry
and use a neural network to learn the color from this ag-
gregated duplex radiance information. Although only two
sampled locations are considered, we found this proposed
neural duplex radiance field to be robust in compensating
for the errors of the geometry proxy even without a deep
neural network, while effectively preserving the efficiency of
rasterization-based approaches. The NeRF MLP has become
the most standard architecture for most neural implicit repre-
sentations [7, 8, 26, 27, 31, 39].
Figure 3. Motivation. The continuous density distribution of NeRF (curve above the ray) makes it difficult to identify the accurate location of a surface (H) for appearance calculation. We thus seek to extract a reliable interval from the density field (between dashed lines), and learn the duplex radiance combinations to tolerate errors.
Yet, with only a few points considered along the ray, the MLP struggles to constrain the proposed neural duplex radiance field. Instead, we use a shal-
low convolutional network, which can effectively capture
the local geometric information of neighboring pixels, and
leads to a considerably better rendering quality. Finally, we
found that directly training the neural duplex radiance field
from scratch will lead to noticeable artifacts. We therefore
propose a multi-view distillation optimization strategy that
enables us to effectively approximate the rendering quality
of the original NeRF models. Remarkably, our method im-
proves run-time performance by 10,000 times compared to
the original NeRF while maintaining high-quality rendering.
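To make the duplex idea concrete, the sketch below predicts pixel colors from rasterized features of the two ray-mesh intersections with a shallow screen-space CNN; the layer sizes and inputs are our assumptions, not the paper's architecture.

```python
# Minimal sketch (assumed inputs and layer sizes, not the paper's architecture):
# render RGB from duplex features with a shallow screen-space CNN instead of a
# deep per-point MLP.
import torch
import torch.nn as nn

class DuplexRenderer(nn.Module):
    def __init__(self, feat_dim: int = 16, hidden: int = 32):
        super().__init__()
        # inputs: features rasterized from the two mesh layers + per-pixel view direction
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_dim + 3, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 1), nn.Sigmoid(),
        )

    def forward(self, feat_inner, feat_outer, view_dirs):
        """feat_inner/outer: (B, F, H, W) rasterized neural features at the two
        ray-mesh intersections; view_dirs: (B, 3, H, W) ray directions."""
        return self.net(torch.cat([feat_inner, feat_outer, view_dirs], dim=1))
```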
|
Wang_Conflict-Based_Cross-View_Consistency_for_Semi-Supervised_Semantic_Segmentation_CVPR_2023 | Abstract
Semi-supervised semantic segmentation (SSS) has re-
cently gained increasing research interest as it can re-
duce the requirement for large-scale fully-annotated train-
ing data. The current methods often suffer from the confir-
mation bias from the pseudo-labelling process, which can
be alleviated by the co-training framework. The current
co-training-based SSS methods rely on hand-crafted per-
turbations to prevent the different sub-nets from collaps-
ing into each other, but these artificial perturbations can-
not lead to the optimal solution. In this work, we propose a
new conflict-based cross-view consistency (CCVC) method
based on a two-branch co-training framework which aims
at enforcing the two sub-nets to learn informative features
from irrelevant views. In particular, we first propose a new
cross-view consistency (CVC) strategy that encourages the
two sub-nets to learn distinct features from the same input
by introducing a feature discrepancy loss, while these dis-
tinct features are expected to generate consistent prediction
scores of the input. The CVC strategy helps to prevent the
two sub-nets from stepping into the collapse. In addition,
we further propose a conflict-based pseudo-labelling (CPL)
method to guarantee the model will learn more useful infor-
mation from conflicting predictions, which will lead to a sta-
ble training process. We validate our new CCVC approach
on the SSS benchmark datasets where our method achieves
new state-of-the-art performance. Our code is available at
https://github.com/xiaoyao3302/CCVC .
| 1. Introduction
Among different vision tasks, semantic segmentation is
a fundamental vision task that enables the network to under-
stand the world [3, 11, 12, 32, 33]. In recent years, deep neural networks (DNNs) have shown great potential in semantic segmentation [18, 31, 57].
*This work was done during an internship at Samsung Research China-Beijing. This work is supported by Australian Research Council (ARC DP200103223). †Corresponding authors.
Figure 1. We compare the cosine similarity values between the features extracted by the two sub-nets of the traditional cross-consistency regularization (CCR) method and our CVC method. We also compare the prediction accuracies of the two methods, measured by mIoU. We show that our CVC method can prevent the two sub-nets from collapsing into each other and inferring the input from irrelevant views, while CCR cannot guarantee the inferred views are different. We show our new method can increase the perception of the model, which produces more reliable predictions. The experiments are implemented on the original Pascal VOC dataset, under the 1/4 split partition with ResNet-101 as the backbone of the encoder.
However, the success of DNNs
is mainly due to the huge amount of annotated datasets. For
the task of semantic segmentation, pixel-level annotations
are often required, which means the annotators need to man-
ually label up to hundreds of thousands of pixels per image.
Therefore, it takes great effort to collect precisely labelled
data for training DNNs [1, 27, 30].
Various semi-supervised learning (SSL) methods are
proposed to tackle the problem, which aim at learning a
network by using only a small set of pixel-wise precisely
annotated data and a large set of unlabelled data for seman-
tic segmentation [2, 34, 37, 53, 54, 58]. It is obvious that
the information from the labelled data is very limited as the
number of labelled data is far less than the number of un-
labelled data. Therefore, it becomes a key issue to fully
exploit the unlabelled data to assist the labelled data for the
model training.
One intuitive way to tackle this issue is pseudo-
labelling [28,37,48]. However, SSL methods along this line
may suffer from the so-called confirmation bias [48], which
often leads to performance degradation due to the unsta-
ble training process. Recently, consistency regularization-
based SSL methods show promising performance [35, 38,
41, 46]. However, most of them rely on producing the pre-
dictions of the weakly perturbed inputs to generate pseudo-
labels, which are then used as the supervision to generate
the predictions of the strongly perturbed inputs. Therefore,
they still suffer from the confirmation bias issue.
On the other hand, co-training is a powerful framework
for SSL as it enables different sub-nets to infer the same
instance from different views and transfer the knowledge
learnt from one view to another through pseudo-labelling.
Particularly, co-training relies on multi-view reference to
increase the perception of the model, thus enhancing the
reliability of the generated pseudo-labels [40]. Various
semi-supervised semantic segmentation (SSS) approaches
are based on co-training [10, 39]. The key point is how to
prevent different sub-nets from collapsing into each other
such that we can make correct predictions based on the in-
put from different views. However, the hand-crafted pertur-
bations used in most SSS methods cannot guarantee hetero-
geneous features to be learned to effectively prevent sub-
nets from stepping into a collapse.
To address this issue, in this work we propose a new conflict-based cross-view consistency (CCVC) strategy for SSS, which ensures that the two sub-nets in our model learn different features separately, so that reliable predictions can be obtained from two irrelevant views for co-training, thus further enabling each sub-net to make reliable and meaningful predictions. In particular, we first introduce a cross-view consistency (CVC) approach
with a discrepancy loss to minimize the similarity between
the feature extracted by the two sub-nets to encourage them
to extract different features, which prevents the two sub-nets
from collapsing into each other. Then we employ the cross
pseudo-labelling to transfer the knowledge learnt from one
sub-net to another to improve the perception of the network
to correctly reason the same input from different views, thus
producing more reliable predictions.
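A hedged sketch of the two ingredients, the feature discrepancy loss and cross pseudo-labelling, is given below; the exact loss forms and weighting are our assumptions based on this description, not the authors' code.

```python
# Minimal sketch (assumed loss forms, not the authors' code): push the two
# sub-nets' features apart while cross pseudo-labels keep predictions consistent.
import torch
import torch.nn.functional as F

def discrepancy_loss(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """feat_a, feat_b: (B, C, H, W) features from the two sub-nets."""
    a = F.normalize(feat_a.flatten(2), dim=1)
    b = F.normalize(feat_b.flatten(2), dim=1)
    return (a * b).sum(dim=1).abs().mean()   # minimize per-location cosine similarity

def cross_pseudo_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Each sub-net is supervised by the other's hard pseudo-labels."""
    pl_a, pl_b = logits_a.argmax(1).detach(), logits_b.argmax(1).detach()
    return F.cross_entropy(logits_a, pl_b) + F.cross_entropy(logits_b, pl_a)
```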
However, the discrepancy loss may introduce such a strong
perturbation to the model that the features extracted by
the sub-nets may contain less meaningful information for
the prediction, leading to inconsistent and unreliable predic-
tions from the two sub-nets. This will incur the confirma-
tion bias problem and thus harm the co-training of the sub-
nets. To tackle this problem, we further propose a conflict-
based pseudo-labelling (CPL) method, where we encourage
the pseudo-labels generated by the conflicting predictions
of each sub-net to have stronger supervision for the pre-
diction of each other, to enforce the two sub-nets to make
consistent predictions. Thereby, the useful features for theprediction could be preserved as well as the reliability of
the predictions. In this way, hopefully, the influence of the
confirmation bias can be reduced and the training process
can be more stable.
As shown in Fig. 1, the similarity scores between the features extracted from the two sub-nets of the cross-consistency regularization (CCR) model remain at a high level, indicating that the reasoning views of CCR are largely correlated. In contrast, our CVC method ensures the rea-
soning views are sufficiently different and thus produces
more reliable predictions.
It should be mentioned that our CCVC method is com-
patible with various existing data augmentation methods
and it also benefits from an augmented training set with in-
creased data diversity.
The contributions of our work are summarized as below:
• We introduce a cross-view consistency (CVC) strategy
based on a co-training framework to make reliable pre-
dictions, where we propose a feature discrepancy loss
to enable the two-branch network to learn how to rea-
son the input differently but make consistent predic-
tions.
• We further propose a new conflict-based pseudo-
labelling (CPL) method based on our cross-view con-
sistency strategy to enable the two sub-nets to learn
more useful semantic information from conflicting
predictions to produce reliable and consistent predic-
tions, which leads to a more stable training process.
• Our method achieves the state-of-the-art performance
on the commonly used benchmark datasets, PASCAL
VOC 2012 [16] and Cityscapes [13].
|
Vidit_Learning_Transformations_To_Reduce_the_Geometric_Shift_in_Object_Detection_CVPR_2023 | Abstract
The performance of modern object detectors drops when
the test distribution differs from the training one. Most of
the methods that address this focus on object appearance
changes caused by, e.g., different illumination conditions,
or gaps between synthetic and real images. Here, by con-
trast, we tackle geometric shifts emerging from variations in
the image capture process, or due to the constraints of the
environment causing differences in the apparent geometry
of the content itself. We introduce a self-training approach
that learns a set of geometric transformations to minimize
these shifts without leveraging any labeled data in the new
domain, nor any information about the cameras. We evalu-
ate our method on two different shifts, i.e., a camera’s field
of view (FoV) change and a viewpoint change. Our results
evidence that learning geometric transformations helps de-
tectors to perform better in the target domains.
| 1. Introduction
While modern object detectors [1, 2, 17, 23, 24] achieve
impressive results, their performance decreases when the
test data depart from the training distribution. This prob-
lem arises in the presence of appearance variations due to,
for example, differing illumination or weather conditions.
Considering the difficulty and cost of acquiring annotated
data in the test (i.e., target) domain, Unsupervised Domain
Adaptation (UDA) has emerged as the standard strategy to
address such scenarios [3, 4, 9, 26, 38].
In this context, much effort has been made to learn do-
main invariant features, such that the source and target dis-
tributions in this feature space are similar. This has led to
great progress in situations where the appearance of the ob-
jects changes drastically from one domain to the other, as
in case of real-to-sketch adaptation (e.g., Pascal VOC [10]
to Comics [15]), or weather adaptation (e.g., Cityscapes [6]
to Foggy Cityscapes [27]). Nevertheless, such object ap-
pearance changes are not the only sources of domain shifts.
They can also have geometric origins. For example, as
shown in Fig. 1, they can be due to a change in camera viewpoint or field-of-view (FoV), or a change of object scale due to different scene setups. In practice, such geometric shifts typically arise from a combination of various factors, including but not limited to the ones mentioned above.
Figure 1. Geometric shifts. (Left) Due to a different FoV, the cars highlighted in green undergo different distortions even though they appear in similar image regions. (Right) Different camera viewpoints (front facing vs downward facing) yield different distortions and occlusion patterns for pedestrian detection. (Bottom) The distributions of pedestrian bounding box sizes in Cityscapes [6] and MOT [8] differ significantly as the pedestrians are usually far away or in the periphery in Cityscapes. The top images are taken from Cityscapes [6], and the bottom-left and right ones from KITTI [12] and MOT [8], respectively.
In this paper, we introduce a domain adaptation approach
tackling such geometric shifts. To the best of our knowl-
edge, the recent work of [13] constitutes the only attempt at
considering such geometric distortions. However, it intro-
duces a method solely dedicated to FoV variations, assum-
ing that the target FoV is fixed and known. Here, we de-
velop a more general framework able to cope with a much
broader family of geometric shifts.
To this end, we model geometric transformations as a
combination of multiple homographies. We show both the-
oretically and empirically that this representation is suffi-
cient to encompass a broad variety of complex geometric
transformations. We then design an aggregator block that
can be incorporated to the detector to provide it with the
capacity to tackle geometric shifts. We use this modified
detector to generate pseudo labels for the target domain,
which let us optimize the homographies so as to reduce the
geometric shift.
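As a rough illustration of learning a set of homographies, the sketch below warps the input with K learnable homographies and averages the resulting backbone features; the parameterization and the mean aggregation are our assumptions, not the authors' aggregator block.

```python
# Minimal sketch (our own formulation, not the authors' implementation): warp the
# input with K learnable homographies and average the features computed on each warp.
import torch
import torch.nn.functional as F

class HomographyAggregator(torch.nn.Module):
    def __init__(self, num_warps: int = 4):
        super().__init__()
        # each homography = identity + a learnable 3x3 residual
        self.residual = torch.nn.Parameter(torch.zeros(num_warps, 3, 3))

    def warp(self, img: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        B, _, h, w = img.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=img.device),
                                torch.linspace(-1, 1, w, device=img.device), indexing="ij")
        grid = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).view(-1, 3)
        mapped = grid @ H.T
        mapped = mapped[:, :2] / mapped[:, 2:].clamp(min=1e-6)
        return F.grid_sample(img, mapped.view(1, h, w, 2).expand(B, h, w, 2),
                             align_corners=True)

    def forward(self, img: torch.Tensor, backbone) -> torch.Tensor:
        eye = torch.eye(3, device=img.device)
        feats = [backbone(self.warp(img, eye + r)) for r in self.residual]
        return torch.stack(feats, 0).mean(0)   # simple aggregation over warps
```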
Our contributions can be summarized as follows. (i)
We tackle the problem of general geometric shifts for ob-
ject detection. (ii) We learn a set of homographies using
unlabeled target data, which alleviates the geometric bias
arising in source-only training. (iii) Our method does not
require prior information about the target geometric distor-
tions and generalizes to a broad class of geometric shifts.
Our experiments demonstrate the benefits of our approach
in several scenarios. In the presence of FoV shifts, our
approach yields similar performance to the FoV-dedicated
framework of [13] but without requiring any camera infor-
mation. As such, it generalizes better to other FoVs. Fur-
thermore, we show the generality of our method by using
it to adapt to a new camera viewpoint in the context of
pedestrian detection.Our implementation can be accessed at
https://github.com/vidit09/geoshift.
|
Wu_Uncovering_the_Disentanglement_Capability_in_Text-to-Image_Diffusion_Models_CVPR_2023 | Abstract
Generative models have been widely studied in computer
vision. Recently, diffusion models have drawn substantial
attention due to the high quality of their generated im-
ages. A key desired property of image generative models is
the ability to disentangle different attributes, which should
enable modification towards a style without changing the
semantic content, and the modification parameters should
generalize to different images. Previous studies have found
that generative adversarial networks (GANs) are inherently
endowed with such disentanglement capability, so they can
perform disentangled image editing without re-training or
fine-tuning the network. In this work, we explore whether
diffusion models are also inherently equipped with such a
capability. Our finding is that for stable diffusion models,
by partially changing the input text embedding from a neu-
tral description ( e.g., “a photo of person”) to one with style
(e.g., “a photo of person with smile”) while fixing all the
Gaussian random noises introduced during the denoising
process, the generated images can be modified towards the
target style without changing the semantic content. Basedon this finding, we further propose a simple, light-weight
image editing algorithm where the mixing weights of the two
text embeddings are optimized for style matching and con-
tent preservation. This entire process only involves optimiz-
ing over around 50 parameters and does not fine-tune the
diffusion model itself. Experiments show that the proposed
method can modify a wide range of attributes, with the
performance outperforming diffusion-model-based image-
editing algorithms that require fine-tuning. The optimized
weights generalize well to different images. Our code is
publicly available at https://github.com/UCSB-
NLP-Chang/DiffusionDisentanglement .
| 1. Introduction
Image generation has been a widely-studied research
problem in computer vision, with many competitive gen-
erative models proposed over the last decade, such as gen-
erative adversarial networks (GANs) [ 5,10,18,30,32] and
variational autoencoders (VAE) [39, 57, 59, 60]. Recently,
diffusion models [ 23,71–73], with their ability to gener-
ate high-quality and high-resolution images in different domains, have soon attracted wide research attention.
         | Scenes | Person
✓ Global | Styles (children drawing, cyberpunk, anime), Building appearance (wooden, red brick), Weather & time (sunset, night, snowy) | Styles (renaissance, Egyptian mural, sketch, Pixar), Appearance (young, tanned, male)
✓ Local | Cherry blossom, rainbow, foothills | Expressions (smiling, crying, angry)
✗ Small edits | Cake toppings, remove people on the street | Hats, hair colors, earrings
Table 1. Summarization of explored attributes. ✓ shows successfully disentangled attributes and ✗ shows failure cases. Small edits on the image are harder to be disentangled when the target attribute correlates with other parts of the image.
One important research direction regarding image gener-
ative models is the ability to disentangle different aspects of
the generated images, such as semantic contents and styles,
which is crucial for image editing and style transfer. A
generative model with a good disentanglement capability
should satisfy the following two desirable properties. First,
it should permit separate modification of one aspect without
changing other aspects. As an example shown in Fig. 2, in
text-to-image generation, when the text input changes from
“a photo of person” to “a photo of person with smile”, the
generative model should have the ability to modify just the
expression of the person ( i.e.,from the top image to mid-
dle image in Fig. 2) without changing the person’s iden-
tity (the bottom image in Fig. 2). Second, the parameters
learned from modifying one image should transfer well to
other similar images. For example, the optimal parameters
that can add smile to one person should also work for im-
ages of different people with different genders and races.
Previous studies have discovered that GANs are inher-
ently endowed with a strong disentanglement capability.
Specifically, it is found that there exist certain directions in
the latent space separately controlling different attributes.
Therefore, by identifying these directions, e.g., via prin-
cipal component analysis [ 19], GAN can achieve effective
disentanglement without any re-training or fine-tuning. On
the other hand, such an inherent disentanglement capabil-
ity has yet to be found in diffusion models. Hence come
our research questions: Do diffusion models also possess
a disentanglement capability with the aforementioned nice
properties? If so, how can we uncover it?
In this paper, we seek to answer these research questions.
Our finding is that for stable diffusion model [ 61], one of the
diffusion models that can generate images based on an in-
put text description, disentangled image modifications can
be achieved by partial modifications in the text embedding
space. In particular, if we fix the standard Gaussian noises
introduced in the denoising process, and partially change
the input text embedding from a neutral description ( e.g., “a
photo of person”) to one with style ( e.g., “a photo of person
with smile”), the generated image will also shift towards the
target style without changing the semantic content. Based
on this finding, we further propose a simple, light-weight al-gorithm, where we optimize the mixing weights of the two
text embeddings under two objectives, a perceptual loss for
content preservation and a CLIP-based style matching loss.
The entire process only involves optimizing over around 50
parameters and does not fine-tune the diffusion model.
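A hedged sketch of this optimization loop is shown below; the per-token mixing granularity, the sigmoid parameterization, and the loss weighting are our assumptions, and generate, clip_style_loss, and perceptual_loss are placeholders for the frozen diffusion sampler and the two objectives.

```python
# Minimal sketch (assumed parameterization and weighting; `generate`,
# `clip_style_loss`, `perceptual_loss` are placeholders, not a specific API):
# learn per-token weights that blend a neutral and a style text embedding.
import torch

def optimize_mixing_weights(generate, clip_style_loss, perceptual_loss,
                            emb_neutral, emb_style, noise, steps=50, lam=0.5):
    """emb_neutral, emb_style: (T, D) text embeddings; noise: the fixed Gaussian
    noises of the denoising process, reused for every rendering."""
    alpha = torch.zeros(emb_neutral.shape[0], 1, requires_grad=True)  # ~T (≈50) params
    opt = torch.optim.Adam([alpha], lr=0.05)
    img_ref = generate(emb_neutral, noise).detach()                   # reference content
    for _ in range(steps):
        a = alpha.sigmoid()
        img = generate((1 - a) * emb_neutral + a * emb_style, noise)
        loss = clip_style_loss(img) + lam * perceptual_loss(img, img_ref)
        opt.zero_grad(); loss.backward(); opt.step()
    return alpha.sigmoid().detach()
```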
Our experiments show that the inherent disentanglement
capability in stable diffusion model can already disentan-
gle a wide range of concepts and attributes, ranging from
global styles such as painting styles to local styles like fa-
cial expressions, as shown in Table 1. As shown in Fig. 1,
by learning the optimal mixing weights of the two de-
scriptions, stable diffusion models can generate convinc-
ing image pairs that only modify the target attribute, and
the optimal weights can generalize well to different im-
ages. The experiment results also show that our proposed
image editing algorithm, without fine-tuning the diffusion
model, can match or outperform the more sophisticated
diffusion-model-based image-editing baselines that require
fine-tuning. The findings of this paper can shed some light
on how diffusion models work and how they can be applied
to image editing tasks.
|
Wang_RIFormer_Keep_Your_Vision_Backbone_Effective_but_Removing_Token_Mixer_CVPR_2023 | Abstract
This paper studies how to keep a vision backbone ef-
fective while removing token mixers in its basic building
blocks. Token mixers, as self-attention for vision transform-
ers (ViTs), are intended to perform information communica-
tion between different spatial tokens but suffer from consid-
erable computational cost and latency. However, directly
removing them will lead to an incomplete model structure
prior, and thus brings a significant accuracy drop. To this
end, we first develop an R epIdentityFormer base on the re-
parameterizing idea, to study the token mixer free model
architecture. And we then explore the improved learn-
ing paradigm to break the limitation of simple token mixer
free backbone, and summarize the empirical practice into 5
guidelines. Equipped with the proposed optimization strat-
egy, we are able to build an extremely simple vision back-
bone with encouraging performance, while enjoying the
high efficiency during inference. Extensive experiments and
ablative analysis also demonstrate that the inductive bias of
network architecture, can be incorporated into simple net-
work structure with appropriate optimization strategy. We
hope this work can serve as a starting point for the explo-
ration of optimization-driven efficient network design.
| 1. Introduction
The monumental advance in computer vision in the past
few years was partly brought by the revolution of vision
backbones, including convolutional neural networks (Con-
vNets) [13,17,25,30] and vision transformers (ViTs) [14,
38]. Both of them have particular modules in their basic
building blocks that aggregate information between differ-
ent spatial locations, which are called token mixer [46],
such as self-attention for ViTs.
∗Corresponding author.
Figure 1. Latency analysis of different components in ViT-Base [14]. (a) Latency analysis of ViT-B: for the token mixer (self-attention), the latency occupies about 46.3% of the backbone. (b) Remove token mixer with heavy latency: our motivation was to remove the token mixer while striving to keep the performance.
Although the effectiveness of the token mixer has been demonstrated on many vision
tasks [ 5,6,24,45], its computational complexity typically
takes up a significant portion of the network. In practice,
heavy token mixers make the vision backbone limited espe-
cially on the edge-side devices due to the issue of speed and
computation cost.
There have been several attempts in the literature to in-
vestigate efficient token mixers for slimming vision back-
bones [ 29,31,46]. Although those works have already
achieve competitive performance with light-weight design,
they do retain the token mixers, which brings non-negligible
increase in latency, as illustrated in Fig. 1. The recent
work [ 47] finds that removing the token mixer is possible
but leads to performance degeneration. Those explorations
in efficient token mixers inspire us to think that can we keep
the vision backbone effective but removing the token mixer?
The resulting token mixer free vision backbone is expected
to be efficient and effective for the realistic application.
In this work, we first review the current model architec-
tures and learning paradigms. Most of the previous works
concentrate on the improvement of the architecture while
adopting the conventional supervised learning to optimize
the model from scratch. In contrast, we propose to adopt
the simplified model architecture, and explore the learning
paradigm design to fully exploit the potential of the simple
model. We aim to simultaneously maintain the efficiency
and efficacy of token mixer free vision backbone (namely
IdentityFormer, in Fig. 1-(b)). To this end, we investigate
the simple and yet effective learning strategy, knowledge
distillation (KD) [18] thoroughly in the following sections.
Our main idea is distilling the knowledge from powerful
teacher model (with token mixer) to the student model (to-
ken mixer free). We instantiate the re-parameterizing idea to
enlarge the modeling capacity of student network but retain
its efficiency, as shown in Fig. 2. Specifically, the simple
affine transformation is introduced into student model, to re-
place the token mixer for training. The parameters of affine
transformation can be merged into LayerNorm [ 2] during
inference, which makes the student token mixer free finally.
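The folding step can be illustrated with a tiny sketch; the tensor layout and the exact form of the affine branch are our assumptions (our own illustration of the re-parameterization identity, not the released RIFormer code).

```python
# Minimal sketch (our own illustration of the folding identity, not the released
# RIFormer code): a per-channel affine applied after LayerNorm during training
# can be absorbed into the LayerNorm parameters for inference.
import torch
import torch.nn as nn

def merge_affine_into_layernorm(ln: nn.LayerNorm, scale: torch.Tensor, shift: torch.Tensor):
    """scale * LN(x) + shift == LayerNorm with weight' = weight * scale and
    bias' = bias * scale + shift, so the extra affine vanishes at inference."""
    with torch.no_grad():
        ln.bias.mul_(scale).add_(shift)
        ln.weight.mul_(scale)
    return ln
```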
We empirically summarize our learning strategy as
the following guidelines, hope to shed light on how to learn
the extremely simple model. Concretely, 1) soft distillation
without using ground-truth labels is more effective; 2) using
the affine transformation alone, without distillation, can hardly remedy the performance degeneration; 3) the proposed block-wise
knowledge distillation, called module imitation, helps lever-
aging the modeling capacity of affine operator; 4) teacher
with large receptive field is beneficial to improve receptive
field limited student; 5) loading the pre-trained weight of
teacher model (except the token mixer) into student improve
the convergence and performance.
Based on the above guidelines, we finally obtain a token-
mixer-free vision model with competitive performance and
high efficiency, dubbed RepIdentityFormer (RIFormer).
RIFormer shares nearly the same macro and micro design as
MetaFormer [46] but safely removes all token mixers. The
quantitative results show that our networks outperform many
prevailing backbones with faster inference speed on
ImageNet-1K [8]. The ablative analyses of the feature
distribution and Effective Receptive Fields (ERFs) also
demonstrate that the inductive bias brought by an explicit
token mixer can be implicitly incorporated into the simple
network structure with appropriate optimization strategies.
In summary, the main contributions of our work are as follows:
• We propose to explore the vision backbone by developing an
advanced learning paradigm for a simple model architecture,
to satisfy the demands of realistic applications.
• We instantiate the re-parameterization idea to build a
token-mixer-free vision model, RIFormer, which owns improved
modeling capacity for the inductive bias while enjoying high
efficiency during inference.
• Our proposed practical guidelines for the distillation strategy
have been demonstrated to be effective in keeping the vision
backbone competitive while removing the token mixer.
|
Wu_Incremental_3D_Semantic_Scene_Graph_Prediction_From_RGB_Sequences_CVPR_2023 | Abstract
3D semantic scene graphs are a powerful holistic rep-
resentation as they describe the individual objects and de-
pict the relation between them. They are compact high-level
graphs that enable many tasks requiring scene reasoning.
In real-world settings, existing 3D estimation methods pro-
duce robust predictions that mostly rely on dense inputs.
In this work, we propose a real-time framework that incre-
mentally builds a consistent 3D semantic scene graph of
a scene given an RGB image sequence. Our method con-
sists of a novel incremental entity estimation pipeline and
a scene graph prediction network. The proposed pipeline
simultaneously reconstructs a sparse point map and fuses
entity estimation from the input images. The proposed net-
work estimates 3D semantic scene graphs with iterative
message passing using multi-view and geometric features
extracted from the scene entities. Extensive experiments
on the 3RScan dataset show the effectiveness of the pro-
posed method in this challenging task, outperforming state-
of-the-art approaches. Our implementation is available at
https://shunchengwu.github.io/MonoSSG .
| 1. Introduction
Scene understanding is a cornerstone in many computer
vision applications requiring perception, interaction, and
manipulation, such as robotics, AR/VR and autonomous
systems [17, 54–56]. Semantic Scene Graphs (SSGs) go
beyond recognizing individual entities (objects and stuff)
by reasoning about the relationships among them [61, 66].
They also proved to be a valuable representation for com-
plex scene understanding tasks, such as image caption-
ing [26, 67], generation [13, 24], scene manipulation [10,
11], task planning [27], and surgical procedure estima-
tion [42, 43]. Given the benefits of such representations,
scene graph estimation received increasing attention in the
computer vision community.
While earlier methods mainly estimate SSGs from im-
ages [18, 19, 33, 66, 72], recent approaches have also in-
vestigated estimating them from 3D data. Compared to
2D scene graphs, which describe a single image, 3D scene
[Figure 1 components: input RGB + pose sequence; per-frame entity estimation and sparse entity fusion (temporal label consistency, global label merging) on top of ORB-SLAM mapping, instance-level segmentation, 3D bounding-box extraction, and labeling; local and global scene graphs with relations such as "close by", "standing on", and "supported by"; outputs are 3D boxes of instances and the incrementally estimated 3D scene graph.]
Figure 1. We propose a real-time 3D semantic scene graph estimation method that relies on an abstract understanding of a scene geometry built with RGB input. Our method estimates scene graphs incrementally by continuously estimating scene graphs and fusing local predictions into a global 3D scene graph.
graphs depict the entire 3D scenes, enabling applications
requiring a holistic understanding of the whole scene, such
as path planning [47], camera localization, and loop clo-
sure detection [23]. However, existing 3D methods either
require dense 3D geometry of the scenes to estimate 3D
scene graphs [1, 23, 61, 64], which limits their use cases since
dense geometry is not always available, or constrain scene
graph estimation to the image level [15, 27, 66], and thus
tend to fail to infer relationships among objects beyond
individual viewpoints. A method that estimates 3D scene
graphs from sparse scene geometry while reasoning about
relationships globally has not been explored yet.
In this work, we propose a real-time framework that in-
crementally estimates a global 3D SSG of a scene simply
requiring an RGB sequence as input. The process is illus-
trated in Fig. 1. Our method simultaneously reconstructs
a segmented point cloud while estimating the SSGs of the
current map. The estimations are bound to the point map,
which allows us to fuse them into a consistent global scene
graph. The segmented map is constructed by fusing en-
tity estimation from images to the points estimated from
a sparse Simultaneous Localization and Mapping (SLAM)
method [3]. Our network takes the entities and other proper-
ties extracted from the segmented map to estimate 3D scene
graphs. Fusing entities across frames is non-trivial. Exist-
ing methods often rely on dense inputs [38,58] and struggle
with sparse inputs since the points are not uniformly dis-
tributed. Estimating scene graphs with sparse input points
is also challenging. Sparse and ambiguous geometry ren-
ders the node representations unreliable. On the other hand,
directly estimating scene graphs from 2D images ignores
the relationship beyond visible viewpoints. We aim to over-
come the aforementioned issues by proposing two novel
approaches. First, we propose a confidence-based fusion
scheme which is robust to variations in the point distribu-
tion. Second, we present a scene graph prediction network
that mainly relies on multi-view images as the node feature
representation. Our approach overcomes the need for exact
3D geometry and is able to estimate relationships without
view constraints. In addition, our network is flexible and
generalizable as it works not only with sparse inputs but
also with dense geometry.
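As a rough illustration of what a confidence-based entity label fusion could look like (the paper's exact scheme is described in its method section and may differ), the sketch below accumulates per-frame entity predictions into per-segment label scores on the sparse map; the data structures and the `decay` parameter are illustrative assumptions.

```python
from collections import defaultdict

class SparseEntityFusion:
    """Fuses per-frame 2D entity predictions into persistent entity labels
    attached to sparse map segments, weighted by prediction confidence."""
    def __init__(self, decay: float = 0.99):
        self.decay = decay
        # segment_id -> {entity_label: accumulated confidence}
        self.scores = defaultdict(lambda: defaultdict(float))

    def update(self, frame_predictions):
        """frame_predictions: list of (segment_id, entity_label, confidence)
        obtained by projecting the current frame's instance masks onto the
        sparse point map."""
        for seg_id, label, conf in frame_predictions:
            # Slowly forget stale evidence so the map can correct itself.
            for existing_label in self.scores[seg_id]:
                self.scores[seg_id][existing_label] *= self.decay
            self.scores[seg_id][label] += conf

    def current_labels(self):
        """Return the most confident label per segment."""
        return {seg: max(lbls, key=lbls.get) for seg, lbls in self.scores.items()}

fusion = SparseEntityFusion()
fusion.update([(0, "table", 0.9), (1, "chair", 0.7)])
fusion.update([(0, "table", 0.8), (1, "sofa", 0.4)])
print(fusion.current_labels())  # {0: 'table', 1: 'chair'}
```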
We comprehensively evaluate our method on the 3D SSG
estimation task from the public 3RScan dataset [60]. We ex-
periment and compare with three input types, as well as 2D
and 3D approaches. Moreover, we provide a detailed abla-
tion study on the proposed network. The results show that
our method outperforms all existing approaches by a signif-
icant margin. The main contributions of this work can be
summarized as follows: (1) We propose the first incremen-
tal 3D scene graph prediction method using only RGB im-
ages. (2) We introduce an entity label association method
that works on sparse point maps. (3) We propose a novel
network architecture that generalizes across different input
types and outperforms all existing methods.
|
Weber_Power_Bundle_Adjustment_for_Large-Scale_3D_Reconstruction_CVPR_2023 | Abstract
We introduce Power Bundle Adjustment as an expansion
type algorithm for solving large-scale bundle adjustment
problems. It is based on the power series expansion of the
inverse Schur complement and constitutes a new family of
solvers that we call inverse expansion methods. We theo-
retically justify the use of power series and we prove the
convergence of our approach. Using the real-world BAL
dataset we show that the proposed solver challenges the
state-of-the-art iterative methods and significantly acceler-
ates the solution of the normal equation, even for reaching a
very high accuracy. This easy-to-implement solver can also
complement a recently presented distributed bundle adjust-
ment framework. We demonstrate that employing the pro-
posed Power Bundle Adjustment as a sub-problem solver
significantly improves speed and accuracy of the distributed
optimization.
| 1. Introduction
Bundle adjustment (BA) is a classical computer vision
problem that forms the core component of many 3D recon-
struction and Structure from Motion (SfM) algorithms. It
refers to the joint estimation of camera parameters and 3D
landmark positions by minimization of a non-linear repro-
jection error. The recent emergence of large-scale internet
photo collections [1] raises the need for BA methods that
are scalable with respect to both runtime and memory. And
building accurate city-scale maps for applications such as
augmented reality or autonomous driving brings current BA
approaches to their limits.
As the solution of the normal equation is the most time
consuming step of BA, the Schur complement trick is usu-
ally employed to form the reduced camera system (RCS).
This linear system involves only the pose parameters and is
significantly smaller. Its size can be reduced even more by
using a QR factorization, deriving only a matrix square root
of the RCS, and then solving an algebraically equivalent
[Figure 1 panels: (a) Ladybug-1197, (b) Venice-1102.]
Figure 1. Power Bundle Adjustment (PoBA) is a novel solver for large-scale BA problems that is significantly faster and more memory-efficient than existing solvers. (a) Optimized 3D reconstruction of a Ladybug BAL problem with 1197 poses. PoBA-32 (resp. PoBA-64) is 41% (resp. 36%) faster than the best competing solver to reach a cost tolerance of 1%. (b) Optimized 3D reconstruction of a Venice BAL problem with 1102 poses. PoBA-32 (resp. PoBA-64) is 71% (resp. 69%) faster than the best competing solver to reach a cost tolerance of 1%. PoBA is five times (resp. twice) less memory-consuming than √BA (resp. Ceres).
problem [4]. Both the RCS and its square root formulation
are commonly solved by iterative methods such as the pop-
ular preconditioned conjugate gradients algorithm for large-
scale problems or by direct methods such as Cholesky fac-
torization for small-scale problems.
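For intuition, the inverse of the Schur complement of the reduced camera system admits a Neumann-type power series expansion, which is the kind of iterative approximation exploited in the following. The block notation below (pose block U, landmark block V, coupling block W) is a common bundle adjustment convention and may differ from the paper's exact symbols; the series is only valid when the spectral radius of the iterated matrix is below one.

```latex
% Reduced camera system: S\,\Delta x_p = \tilde b, with Schur complement
S = U - W V^{-1} W^{\top} = U\left(I - U^{-1} W V^{-1} W^{\top}\right).
% Whenever \rho\!\left(U^{-1} W V^{-1} W^{\top}\right) < 1, a Neumann series gives
S^{-1} = \sum_{i=0}^{\infty}\left(U^{-1} W V^{-1} W^{\top}\right)^{i} U^{-1},
% so an order-m inverse expansion solves the RCS approximately as
\Delta x_p \approx \sum_{i=0}^{m}\left(U^{-1} W V^{-1} W^{\top}\right)^{i} U^{-1}\,\tilde b.
```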
In the following, we will challenge these two families
of solvers by relying on an iterative approximation of the
inverse Schur complement. In particular, our contributions
are as follows:
•We introduce Power Bundle Adjustment ( PoBA ) for ef-
ficient large-scale BA. This new family of techniques
that we call inverse expansion methods challenges the
state-of-the-art methods which are built on iterative
and direct solvers.
•We link the bundle adjustment problem to the theory
of power series and we provide theoretical proofs that
justify this expansion and establish the convergence of
our solver.
•We perform extensive evaluation of the proposed ap-
proach on the BAL dataset and compare to several
state-of-the-art solvers. We highlight the benefits
ofPoBA in terms of speed, accuracy, and memory-
consumption. Figure 1 shows reconstructions for two
out of the 97 evaluated BAL problems.
•We incorporate our solver into a recently proposed
distributed BA framework and show a significant im-
provement in terms of speed and accuracy.
•We release our solver as open source to facili-
tate further research: https://github.com/
simonwebertum/poba
|
Wu_Discriminating_Known_From_Unknown_Objects_via_Structure-Enhanced_Recurrent_Variational_AutoEncoder_CVPR_2023 | Abstract
Discriminating known from unknown objects is an im-
portant essential ability for human beings. To simulate this
ability, a task of unsupervised out-of-distribution object de-
tection (OOD-OD) is proposed to detect the objects that
are never-seen-before during model training, which is ben-
eficial for promoting the safe deployment of object detec-
tors. Due to lacking unknown data for supervision, for this
task, the main challenge lies in how to leverage the known
in-distribution (ID) data to improve the detector’s discrim-
ination ability. In this paper, we first propose a method
of Structure-Enhanced Recurrent Variational AutoEncoder
(SR-VAE), which mainly consists of two dedicated recurrent
VAE branches. Specifically, to boost the performance of ob-
ject localization, we explore utilizing the classical Lapla-
cian of Gaussian (LoG) operator to enhance the structure
information in the extracted low-level features. Meanwhile,
we design a VAE branch that recurrently generates the aug-
mentation of the classification features to strengthen the dis-
crimination ability of the object classifier. Finally, to alle-
viate the impact of lacking unknown data, another cycle-
consistent conditional VAE branch is proposed to synthesize
virtual OOD features that deviate from the distribution of
ID features, which improves the capability of distinguishing
OOD objects. In the experiments, our method is evaluated
on OOD-OD, open-vocabulary detection, and incremental
object detection. The significant performance gains over
baselines show the superiorities of our method. The code
will be released at https://github.com/AmingWu/SR-VAE.
| 1. Introduction
Recent years have witnessed the rapid development of
deep learning based object detection [5,12,34,36,41], which
often follows a close-set assumption that the training and
testing processes share the same category space. How-
ever, the practical scenario is open and filled with unknown
[Figure 1 panels: ID training data → SR-VAE; OOD testing data → SR-VAE; SR-VAE synthesizes virtual OOD features (ID vs. OOD feature distributions).]
Figure 1. Discriminating known from unknown objects (as shown in green boxes) by synthesizing virtual OOD features. For OOD-OD, to alleviate the impact of lacking unknown data, we present an SR-VAE method to constrain the synthesized features (as shown in blue stars) to deviate from the distribution of ID features (as shown in orange). Meanwhile, we consider enhancing the discrimination of the classifier to reduce the risk of misclassifying the ID objects into the OOD category. Through these operations, the ability of distinguishing OOD objects could be improved significantly.
objects, e.g., in Fig. 1, an autonomous vehicle may en-
counter an unseen camel, presenting significant challenges
for close-set assumption based detectors. To promote the
safe application of detectors, a task of unsupervised out-of-
distribution object detection (OOD-OD) [7] is recently pro-
posed, which aims to detect the objects never-seen-before
during training without accessing any auxiliary data.
Towards unsupervised OOD-OD, since there is no aux-
iliary data available for supervision, leveraging the known
in-distribution (ID) data to enhance the detector’s discrimi-
nation ability becomes the critical challenge. One feasible
solution is to synthesize a series of virtual OOD features
[7, 35] based on the ID data, which is beneficial for pro-
moting the object detector to learn a clear decision bound-
ary between ID and OOD objects. To this end, the work
[7] attempts to synthesize virtual features from the low-
likelihood region of the estimated class-conditional distri-
bution. However, this method requires a large number of
objects for each category to estimate the distribution, limit-
ing its application to the case of few samples.
As shown in Fig. 1, in this paper, we consider improv-
ing the performance of OOD object detection from two perspectives:
one is to strengthen the discrimination capability of the object
classifier for the known ID objects, which is conducive to reducing
the risk of misclassifying ID objects into the OOD category;
another is to synthesize virtual OOD features that significantly
deviate from the distribution of the ID features, which is
instrumental in boosting the performance of distinguishing OOD
objects from ID objects. To attain these two goals, we explore
exploiting a Variational AutoEncoder (VAE) [15, 20] to separately
generate the augmented ID features and virtual OOD features.
Specifically, an approach of Structure-Enhanced Recur-
rent Variational AutoEncoder (SR-V AE) is proposed, which
mainly consists of two dedicated recurrent V AE branches.
In general, an object detector should first localize objects.
Then, an object classifier is used to discriminate these ob-
jects [12, 36]. To improve the localization performance, enhancing
the object-related information in the extracted features is
meaningful. To this end, after extracting the low-level features
of an input image, we propose to utilize the classical Laplacian
of Gaussian (LoG) operator [23] to obtain structure-related
information, which is then fused into the existing features to
strengthen the localization ability.
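For concreteness, a minimal sketch of applying a fixed LoG filter to low-level feature maps and fusing the response back is shown below; the 5×5 kernel, the per-channel (depthwise) application, and the additive fusion with weight `alpha` are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

# A common 5x5 Laplacian-of-Gaussian approximation (fixed, non-learnable).
LOG_KERNEL = torch.tensor([
    [ 0.,  0., -1.,  0.,  0.],
    [ 0., -1., -2., -1.,  0.],
    [-1., -2., 16., -2., -1.],
    [ 0., -1., -2., -1.,  0.],
    [ 0.,  0., -1.,  0.,  0.],
])

def log_enhance(feat: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Adds a LoG edge/structure response to each channel of a feature map.

    feat: (B, C, H, W) low-level backbone features.
    """
    c = feat.shape[1]
    kernel = LOG_KERNEL.to(feat).view(1, 1, 5, 5).repeat(c, 1, 1, 1)
    structure = F.conv2d(feat, kernel, padding=2, groups=c)  # depthwise LoG
    return feat + alpha * structure

x = torch.randn(2, 64, 80, 80)
print(log_enhance(x).shape)  # torch.Size([2, 64, 80, 80])
```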
Next, in order to enhance the discrimination ability, in-
spired by Invariant Risk Minimization [1], we explore con-
structing a set of diverse environments to intensify the vari-
ance of the input classification features. Concretely, a V AE
module [17, 20, 43] is exploited to recurrently output mul-
tiple augmented features of the classification features, i.e.,
the current output is taken as the input of the next iteration.
Since the input of each iteration is different, by means of
the variation operation, the diversity of the output features
could be enlarged. Then, the discrimination ability is im-
proved by minimizing the prediction discrepancy between
the augmented features and the classification features.
Finally, to alleviate the impact of lacking unknown data,
we present a cycle-consistent conditional V AE [40] to syn-
thesize virtual OOD features in the absence of paired super-
vision samples. Concretely, to encourage the synthesized
features to deviate from the distribution of ID features, we
first insert label information into the latent space to force a
deterministic, constrained representation. Meanwhile, by max-
imizing the discrepancy between the synthesized features
and the input features, the synthesized features could be
facilitated to contain plentiful OOD-relevant information,
which enhances the ability of distinguishing OOD objects.
In the experiments, our method is separately evaluated on
three different tasks. Extensive experimental results on mul-
tiple datasets demonstrate the superiorities of our method.
The contributions are summarized as follows:
(1) For unsupervised OOD-OD, we observe that using
the classical LoG operator could effectively enhance object-
related information in the extracted low-level features.(2) To reduce the risk of misclassifying ID objects into
the OOD category, we design a dedicated recurrent V AE to
generate diverse augmented features of the input classifica-
tion features, which is beneficial for improving the discrim-
ination ability of the object classifier.
(3) To alleviate the impact of lacking unknown data for
supervision, we present a cycle-consistent conditional V AE
to synthesize virtual OOD features, which is instrumental
in distinguishing OOD objects from ID objects.
(4) In the experiments, our method is evaluated on OOD-
OD [7], open-vocabulary detection [33, 49], and incremen-
tal object detection [22, 39]. Particularly, for OpenImages
dataset [24], compared with the baseline method [7], our
method significantly reduces FPR95 by around 13.73% .
|
Wang_PlaneDepth_Self-Supervised_Depth_Estimation_via_Orthogonal_Planes_CVPR_2023 | Abstract
Multiple near frontal-parallel planes based depth repre-
sentation demonstrated impressive results in self-supervised
monocular depth estimation (MDE). Whereas, such a repre-
sentation would cause the discontinuity of the ground as it is
perpendicular to the frontal-parallel planes, which is detri-
mental to the identification of drivable space in autonomous
driving. In this paper, we propose the PlaneDepth, a novel
orthogonal planes based presentation, including vertical
planes and ground planes. PlaneDepth estimates the depth
distribution using a Laplacian Mixture Model based on
orthogonal planes for an input image. These planes are
used to synthesize a reference view to provide the self-
supervision signal. Further, we find that the widely used
resizing and cropping data augmentation breaks the or-
thogonality assumptions, leading to inferior plane predic-
tions. We address this problem by explicitly constructing
the resizing cropping transformation to rectify the prede-
fined planes and predicted camera pose. Moreover, we
propose an augmented self-distillation loss supervised with
a bilateral occlusion mask to boost the robustness of or-
thogonal planes representation for occlusions. Thanks to
our orthogonal planes representation, we can extract the
ground plane in an unsupervised manner, which is impor-
tant for autonomous driving. Extensive experiments on
the KITTI dataset demonstrate the effectiveness and effi-
ciency of our method. The code is available at https:
//github.com/svip-lab/PlaneDepth .
| 1. Introduction
Monocular depth estimation (MDE) is an important task
in computer vision and it has tremendous potential applica-
tions, such as autonomous driving. However, the expensive
data and labels acquisition process restricts the data scale
in supervised MDE [3, 4, 9, 16, 26, 27]. Thus, researchers
*Corresponding Author.turn to solve the data constraints in supervised MDE with
the self-supervised MDE framework by leveraging videos
or stereo image pairs.
Most of the early works in self-supervised MDE leverage
a regression module to estimate pixel-wise depth map [11,
12, 15, 29, 36, 40] and warp the reference view image to the
target view based on the estimated depth. Then, a photo-
metric consistency loss is used to guide the learning of the
depth regression module. However, these methods usually
encounter the local minimum issue because of the locality
of bilinear interpolation on the reference view. To avoid this
issue, rather than using simple depth regression, multiple
frontal-parallel planes based depth representation is intro-
duced where depth space is divided into a fixed number of
frontal-parallel planes, and the depth network learns to clas-
sify which predefined plane each pixel belongs to [13, 14].
It has been shown that such representation could produce
much sharper depth on the edges of the object. However,
they are insufficient to represent the ground because the
ground plane is perpendicular to these predefined frontal-
parallel planes. As shown in Fig. 1, such vertical depth
planes only solution would lead to discontinuity on the
ground, which is obviously detrimental to the identification
of drivable space in autonomous driving. Further, photo-
metric consistency loss is applied to the weighted compo-
sition of each plane-warped image, which is sub-optimal as
the combination of different weights may lead to exactly the
same color image [30], resulting in ambiguous solutions for
depth plane classification.
Considering the ground is perpendicular to the frontal-
parallel planes, in this paper, we propose to leverage orthog-
onal planes to represent the scene where the ground planes
favor the depth estimation in the ground region. Further, we
propose to model the depth as a mixture of Laplace distri-
butions of orthogonal planes [38], where each Laplacian is
centered at one plane. We compute the photometric consis-
tency loss independently on the color image warped by each
plane, resulting in a more deterministic and less ambiguous
optimization objective compared with the weighted compo-
[Figure 1 panels: Input, with zoomed-in and bird-eye-view point-cloud comparisons among PladeNet (①), DepthHint, and Ours (③).]
Figure 1. Monocular Depth Estimation Results. The zoomed-in visualizations and bird-eye-view colored point cloud show that our method can predict continuous depth in the ground region while preserving the sharp edges of objects. Compared with PladeNet [13], our prediction has smoother depth in the ground region. Compared with DepthHint [40], our method addresses the occlusion problem, as can be seen in the figure, where our prediction eliminates depth artifacts at the edges of street lights and cars.
sition strategy mentioned above [13,14], consequently lead-
ing to better depth estimation results, as shown in Fig. 1.
Moreover, we can extract the ground plane in an unsuper-
vised manner thanks to our orthogonal planes representa-
tion.
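As a rough formalization of this idea (the notation below is ours, not necessarily the paper's), the per-pixel depth can be modeled as a Laplace mixture over the predefined orthogonal planes, with the photometric loss evaluated on each plane-warped image rather than on a single weighted composite:

```latex
% Per-pixel depth likelihood as a mixture over K orthogonal planes
p\big(d \mid x\big) = \sum_{k=1}^{K} \pi_k(x)\,\frac{1}{2 b_k}
\exp\!\left(-\frac{\lvert d - \mu_k(x)\rvert}{b_k}\right),
% where \mu_k(x) is the depth induced at pixel x by the k-th vertical or ground
% plane, b_k is its scale, and \pi_k(x) are the predicted mixture weights.
% The self-supervised objective combines per-plane photometric errors,
\mathcal{L}_{\text{photo}} = \sum_{k=1}^{K} \pi_k(x)\,
\rho\big(I_t(x),\, I_{s\rightarrow t}^{(k)}(x)\big),
% with I_{s\rightarrow t}^{(k)} the source image warped by plane k.
```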
Resizing and cropping are widely used as a data augmen-
tation strategy in the stereo training setting of MDE [13,14].
However, it would destroy the orthogonality of our prede-
fined planes, leading to inferior plane distributions estima-
tion. To remedy this issue, in this paper, we deeply ana-
lyze the effects of resizing and cropping on the world co-
ordinates system. We explicitly compute the resizing and
cropping transformation and use it to rectify the predefined
planes and the predicted camera rotation, which eases the
learning of plane distributions. We further propose to use
neural positional encoding (NPE) for the resizing and crop-
ping parameter and incorporate it into our PlaneDepth net-
work, which improves the robustness of network training.
The self-distillation strategy is commonly used to solve
occlusion problems [13, 14] and improve the depth predic-
tion results [29]. Post-processing is widely used to improve
the final prediction [11], which can be used naturally to gen-
erate more accurate self-distillation labels. In this paper, we
propose to combine post-processing with self-distillation by
using a bilateral occlusion mask to generate more accurate
supervision of network training, which improves both accu-
racy and efficiency of our method.
We summarize our contributions as follows:
1. We propose the PlaneDepth, a novel orthogonal plane-
based monocular depth estimation network, which
favours the representation of both vertical objects and
ground. Such representation leads to a much smoother
depth for ground regions and would facilitate the iden-
tification of drivable regions.
2. The depth within the scene is modeled by a mixture
of Laplacian distributions, and the depth classifica-tion problem is cast as the optimizing the mixture of
Laplace distribution, which avoids the ambiguity in
color expectation based depth estimation and leads to
more stable depth estimation.
3. An orthogonality-preserved data augmentation strat-
egy is proposed, which improves the robustness of net-
work training.
4. We combine post-processing with self-distillation by
our augmented self-distillation, which improves both
efficiency and accuracy.
|
Xu_Abstract_Visual_Reasoning_An_Algebraic_Approach_for_Solving_Ravens_Progressive_CVPR_2023 | Abstract Visual Reasoning: An Algebraic Approach for Solving Raven’s
Progressive Matrices
Jingyi Xu1*  Tushar Vaidya2◦*  Yufei Wu2◦*  Saket Chandra1  Zhangsheng Lai3◦  Kai Fong Ernest Chong1†
1Singapore University of Technology and Design  2Nanyang Technological University  3Singapore Polytechnic
jingyi xu@mymail.sutd.edu.sg tushar.vaidya@ntu.edu.sg yufei002@e.ntu.edu.sg
laizhangsheng@sp.edu.sg {saket chandra,ernest chong }@sutd.edu.sg
Abstract
We introduce algebraic machine reasoning, a new rea-
soning framework that is well-suited for abstract reasoning.
Effectively, algebraic machine reasoning reduces the diffi-
cult process of novel problem-solving to routine algebraic
computation. The fundamental algebraic objects of interest
are the ideals of some suitably initialized polynomial ring.
We shall explain how solving Raven’s Progressive Matri-
ces (RPMs) can be realized as computational problems in
algebra, which combine various well-known algebraic sub-
routines that include: Computing the Gr ¨obner basis of an
ideal, checking for ideal containment, etc. Crucially, the
additional algebraic structure satisfied by ideals allows for
more operations on ideals beyond set-theoretic operations.
Our algebraic machine reasoning framework is not only
able to select the correct answer from a given answer set,
but also able to generate the correct answer with only the
question matrix given. Experiments on the I-RAVEN dataset
yield an overall 93.2%accuracy, which significantly out-
performs the current state-of-the-art accuracy of 77.0%and
exceeds human performance at 84.4%accuracy.
| 1. Introduction
When we think of machine reasoning, nothing captures
our imagination more than the possibility that machines
would eventually surpass humans in intelligence tests and
general reasoning tasks. Even for humans, to excel in IQ
tests, such as the well-known Raven’s progressive matrices
(RPMs) [5], is already a non-trivial feat. A typical RPM
instance is composed of a question matrix and an answer
set; see Fig. 1. A question matrix is a 3×3 grid of panels
*Equal contributions.†Corresponding author.
◦This work was done when the author was previously at SUTD.
Code: https://github.com/Xu-Jingyi/AlgebraicMR
Figure 1. An example of RPM instance from the I-RAVEN dataset.
The correct answer is marked with a red box.
that satisfy certain hidden rules, where the first 8 panels are
filled with geometric entities, and the 9-th panel is “miss-
ing”. The goal is to infer the correct answer for this last
panel from among the 8 panels in the given answer set.
The ability to solve RPMs is the quintessential display of
what cognitive scientists call fluid intelligence. The word
“fluid” alludes to the mental agility of discovering new re-
lations and abstractions [28], especially for solving novel
problems not encountered before. Thus, it is not surprising
that abstract reasoning on novel problems is widely hailed
as the hallmark of human intelligence [6].
Although there has been much recent progress in ma-
chine reasoning [15, 17, 30–33, 37, 38, 46, 47], a common
criticism [9, 25, 26] is that existing reasoning frameworks
have focused on approaches involving extensive training,
even when solving well-established reasoning tests such as
RPMs. Perhaps most pertinently, as [9] argues, reasoning
tasks such as RPMs should not need task-specific perfor-
This work is supported by the National Research Foundation, Sin-
gapore under its AI Singapore Program (AISG Award No: AISG-RP-
2019-015) and under its NRFF Program (NRFFAI1-2019-0005), and by
Ministry of Education, Singapore, under its Tier 2 Research Fund (MOE-
T2EP20221-0016).
[Figure 2 components: Stage 1, algebraic representation — the question matrix is converted into a concept matrix, with perceptual attribute values obtained from standard object detection models; Stage 2, algebraic machine reasoning — invariance modules extract invariance patterns from the first two rows for answer selection (compute the concept matrix with answer i, then select the answer), while "inverse" invariance modules generate the answer.]
Figure 2. An overview of our algebraic machine reasoning framework, organized into 2 stages.
mance optimization. After all, if a machine optimizes per-
formance by training on task-specific data, then that task
cannot possibly be novel to the machine.
To better emulate human reasoning, we propose what we
call “algebraic machine reasoning”, a new reasoning frame-
work that is well-suited for abstract reasoning. Our frame-
work solves RPMs without needing to optimize for perfor-
mance on task-specific data, analogous to how a gifted child
solves RPMs without needing practice on RPMs. Our key
starting point is to define concepts as ideals of some suitably
initialized polynomial ring. These ideals are treated as the
“actual objects of study” in algebraic machine reasoning,
which do not require any numerical values to be assigned to
them. We shall elucidate how the RPM task can be realized
as a computational problem in algebra involving ideals.
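To give a feel for the kind of algebraic subroutines involved (Gröbner bases, ideal membership, and ideal containment), here is a toy SymPy sketch; the choice of indeterminates and generators is purely illustrative and is not the paper's actual encoding of RPM attributes.

```python
from sympy import symbols, groebner, reduced

# Toy polynomial ring Q[x, y]; the generators below are arbitrary examples,
# standing in for "concepts" extracted from an RPM panel.
x, y = symbols("x y")
ideal_gens = [x**2 + y**2 - 1, x - y]

# Groebner basis of the ideal (lexicographic order).
G = groebner(ideal_gens, x, y, order="lex")
print(list(G))          # e.g. [x - y, 2*y**2 - 1]

# Ideal membership: f lies in the ideal iff its remainder modulo G is zero.
f = (x - y) * (x + y)   # = x**2 - y**2
_, remainder = reduced(f, list(G), x, y, order="lex")
print(remainder == 0)   # True

# Ideal containment I <= J reduces to membership of every generator of I in J.
def contained_in(gens_I, groebner_J, gens):
    return all(reduced(g, list(groebner_J), *gens, order="lex")[1] == 0
               for g in gens_I)

print(contained_in([x**2 - y**2], G, (x, y)))  # True
```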
Our reasoning framework can be broadly divided into
two stages: (1) algebraic representation, and (2) algebraic
machine reasoning; see Fig. 2. In the first stage, we rep-
resent RPM panels as ideals, based on perceptual attribute
values extracted from object detection models. In the sec-
ond stage, we propose 4 invariance modules to extract pat-
terns from the RPM question matrix.
To summarize, our main contributions are as follows:
• We reduce “solving the RPM task” to “solving a
computational problem in algebra”. Specifically, we
present how the discovery of abstract patterns can
be realized very concretely as algebraic computations
known as primary decompositions of ideals.
• In our algebraic machine reasoning framework, we in-
troduce 4 invariance modules for extracting patterns
that are meaningful to humans.
• Our framework is not only able to select the correct an-
swer from a given answer set, but also able to generate
answers without needing any given answer set.
• Experiments conducted on RA VEN and I-RA VEN
datasets demonstrate that our reasoning framework
significantly outperforms state-of-the-art methods. |
Xie_Poly-PC_A_Polyhedral_Network_for_Multiple_Point_Cloud_Tasks_at_CVPR_2023 | Abstract
In this work, we show that it is feasible to perform multi-
ple tasks concurrently on point cloud with a straightforward
yet effective multi-task network. Our framework, Poly-PC,
tackles the inherent obstacles (e.g., different model architec-
tures caused by task bias and conflicting gradients caused
by multiple dataset domains, etc.) of multi-task learning on
point cloud. Specifically, we propose a residual set abstrac-
tion (Res-SA) layer for efficient and effective scaling in both
width and depth of the network, hence accommodating the
needs of various tasks. We develop a weight-entanglement-
based one-shot NAS technique to find optimal architec-
tures for all tasks. Moreover, such technique entangles the
weights of multiple tasks in each layer to offer task-shared
parameters for efficient storage deployment while providing
ancillary task-specific parameters for learning task-related
features. Finally, to facilitate the training of Poly-PC, we
introduce a task-prioritization-based gradient balance al-
gorithm that leverages task prioritization to reconcile con-
flicting gradients, ensuring high performance for all tasks.
Benefiting from the suggested techniques, models optimized
by Poly-PC collectively for all tasks require fewer total FLOPs
and parameters and outperform previous methods. We also
demonstrate that Poly-PC allows incremental learning and
evades catastrophic forgetting when tuned to a new task.
| 1. Introduction
With the advances in deep learning, modern architec-
tures offer tremendous improvements in 3D understand-
ing [29, 36, 39, 54], e.g., point classification, segmentation,
and detection, etc. Nevertheless, these networks are inef-
ficient when handling numerous tasks since they are often
intended to accomplish a single task. Even if parallel com-
puting can address this issue, the memory footprints and
storage costs grow linearly with the number of networks,
*Corresponding author.rendering them unaffordable with constrained resources.
Multitask learning (MTL) [3, 12, 13] offers a solution to
this difficulty. In vision tasks, MTL models have been pre-
dominantly proposed to simultaneously perform depth esti-
mation, surface normal estimation, and semantic segmenta-
tion on an input image [14, 19, 52]. Besides, the joint part-
of-speech tagging, chunking, and named-entity recognition
for Natural Language Processing (NLP) have also been in-
vestigated [8, 9]. Since a substantial piece of the network
(i.e., the backbone) is shared among tasks, an MTL model
offers benefits in terms of complexity, inference time, and
learning efficiency. However, training multiple tasks for
point cloud poses two key challenges:
1) In contrast to common vision tasks, where a backbone
that performs well on image classification can be directly
ported to other tasks, the backbone for point cloud tasks
must be carefully developed. Consequently, it is not feasible
for all point cloud tasks to directly share a single backbone.
2) Instead of using a multi-task dataset as input, we seek
to jointly perform multiple tasks on point cloud with mul-
tiple dataset domains as input. Thus, in such a circum-
stance, multi-task learning would result in considerable dis-
parities in directions and magnitude of different task gradi-
ents, a phenomenon known as task interference or negative
transfer [34]. Meanwhile, task difficulty induced by multi-
ple dataset domains is also different, so task prioritization
should be considered to prevent placing undue emphasis on
easier tasks when optimizing the multi-task network.
To address the first challenge, we introduce the residual set
abstraction (Res-SA) layer, a scalable point feature learning
module that can adapt to the requirements of a variety of tasks
in terms of network width and depth. Simultaneously, when
multiple tasks are presented, to reduce the manual effort of
designing a network for each of them, we seek to find the
optimal architecture for each task using neural architecture
search (NAS). Thus, we construct different search spaces
(number of neighbour points, group radius, width, depth, etc.)
for multiple tasks on Res-SA. Then,
inspired by AutoFormer [4], BigNAS [49] and slimmable
networks [48], we propose weight-entanglement-based one-
shot NAS technique that entangles the weights of different
tasks in the same layer, enabling different tasks to share
parameters in their common parts for efficient storage de-
ployment and offering task-specific parameters for learn-
ing task-related features. Moreover, unlike previous works
that share or monopolize all parameters in one layer for all
tasks, such strategy allows different tasks to share a certain
proportion of parameters in a single layer, achieving fine-
grained parameter sharing.
For negative transfer, previous methods narrow down the
problem to two types of differences (i.e., gradient magni-
tudes and directions) among task gradients, and propose
several algorithms [5, 6, 12, 18, 35, 50] to homogenize the
differences. However, these methods exclusively focus on
one aspect of task gradient differences and also disregard
the difficulty of various tasks. Intuitively, it is conceiv-
able for easy tasks to dominate learning while harder ones
stagnate during multi-task training. In response, we intro-
duce a task-prioritization-based gradient balance algorithm
that resolves negative transfer as a whole by homogeniz-
ing both gradient magnitudes and directions across tasks
via task prioritization. Specifically, we evaluate task prioritization
using the previous loss records. Then, we use task prioritization
to homogenize task gradient magnitudes and directions in the
current epoch, endowing difficult tasks with larger gradient
scales and pulling the direction of the final merged gradient
vector closer to that of the difficult tasks. Since the task
prioritization is dynamically adjusted, our algorithm avidly
concentrates on learning the difficult tasks at each epoch and
ensures that each task converges to the optimal solution.
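The following is a minimal sketch of one way such loss-history-driven task weighting could be implemented; the weighting rule, temperature, and function names are our assumptions, and the paper's actual algorithm, which also homogenizes gradient directions, is described in its method section.

```python
import torch

def task_priority_weights(loss_history, temperature: float = 2.0):
    """Derive per-task weights from how slowly each task's loss is improving.

    loss_history: dict task_name -> list of recent epoch losses.
    Tasks whose losses decrease slowly (harder tasks) receive larger weights.
    """
    ratios = {}
    for name, losses in loss_history.items():
        # Ratio close to (or above) 1 means the task is barely improving.
        ratios[name] = losses[-1] / (losses[-2] + 1e-8) if len(losses) > 1 else 1.0
    names = list(ratios)
    r = torch.tensor([ratios[n] for n in names])
    w = torch.softmax(r * temperature, dim=0) * len(names)  # mean weight ~ 1
    return dict(zip(names, w.tolist()))

def balanced_backward(task_losses, loss_history):
    """Scale each task loss by its priority weight before backpropagation."""
    weights = task_priority_weights(loss_history)
    total = sum(weights[name] * loss for name, loss in task_losses.items())
    total.backward()
    return weights
```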
Over the well-trained Poly-PC, we undertake an evolu-
tionary search with model size constraint to identify promis-
ing architectures for different tasks. Experiments show that
the searched models with weights inherited from the super-
net outperform several baselines and are comparable with
the state-of-the-arts trained individually for specific tasks.
We also demonstrate that Poly-PC allows incremental learn-
ing and evades catastrophic forgetting when generalizing to
a new task. Thus, Poly-PC is parameter-efficient and can
scale up more gracefully as the number of tasks increases.
The key contributions can be summarized as follows:
1) We propose Poly-PC to perform multi-task learning on
point cloud. To the best of our knowledge, Poly-PC is the
first framework that takes multiple dataset domains as in-
put for multi-task learning on point cloud. 2) We intro-
duce Res-SA layer that meets the needs of different tasks
in both the width and depth of the network. 3) We de-
velop weight-entanglement-based one-shot NAS technique
to find optimal architectures for different tasks as well as
shared parameters inside each layer for efficient storage. 4)
We propose task-prioritization-based gradient balance algo-
rithm that resolves negative transfer as a whole to promotethe training of Poly-PC. 5) We demonstrate that Poly-PC
allows incremental learning with fewer parameters.
|
Wang_EfficientSCI_Densely_Connected_Network_With_Space-Time_Factorization_for_Large-Scale_Video_CVPR_2023 | Abstract
Video snapshot compressive imaging (SCI) uses a two-
dimensional detector to capture consecutive video frames
during a single exposure time. Following this, an effi-
cient reconstruction algorithm needs to be designed to re-
construct the desired video frames. Although recent deep
learning-based state-of-the-art (SOTA) reconstruction al-
gorithms have achieved good results in most tasks, they
still face the following challenges due to excessive model
complexity and GPU memory limitations: 1) these models
need high computational cost, and 2) they are usually un-
able to reconstruct large-scale video frames at high com-
pression ratios. To address these issues, we develop an ef-
ficient network for video SCI by using dense connections
and space-time factorization mechanism within a single
residual block, dubbed EfficientSCI . The EfficientSCI net-
work can well establish spatial-temporal correlation by us-
ingconvolution in the spatial domain and Transformer in
the temporal domain, respectively. We show, for the first time,
that a UHD color video with a high compression ratio can be
reconstructed from a snapshot 2D measurement using a single
end-to-end deep learning model with PSNR
above 32 dB. Extensive results on both simulation and real
data show that our method significantly outperforms all pre-
vious SOTA algorithms with better real-time performance.
The code is at https://github.com/ucaswangls/
EfficientSCI.git .
| 1. Introduction
Traditional high-speed camera imaging methods usually
suffer from high hardware and storage transmission cost.
Inspired by compressed sensing (CS) [5, 9], video snapshot
compressive imaging (SCI) [45] provides an elegant solu-
tion. As shown in Fig. 2, video SCI consists of a hardware
encoder and a software decoder. In the encoder part, multi-
ple raw video frames are modulated by different masks and
Figure 1. Comparison of reconstruction quality (average PSNR in
dB on 6 benchmark grayscale datasets) and testing time of several
SOTA deep learning based algorithms. Our proposed EfficientSCI
achieves higher reconstruction quality with fewer parameters and
shorter testing time.
then integrated by the camera to get a compressed measure-
ment, giving low-speed cameras the ability to capture high-
speed scenes. For the decoding part, the desired high-speed
video is retrieved by the reconstruction algorithm using the
captured measurement and masks.
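For readers unfamiliar with video SCI, the standard (grayscale) forward model takes the following form; this is the widely used formulation in the SCI literature, written in our own notation rather than copied from this paper.

```latex
% B high-speed frames X_b \in \mathbb{R}^{n_x \times n_y} are modulated by
% binary masks M_b and summed into a single 2D measurement Y:
Y \;=\; \sum_{b=1}^{B} M_b \odot X_b \;+\; Z,
% where \odot is the Hadamard (element-wise) product and Z denotes measurement
% noise. The decoder recovers \{X_b\}_{b=1}^{B} from Y and the known masks \{M_b\}.
```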
So far, many mature SCI imaging systems [14, 24, 31]
have been built, but for the decoding part, there are still
many challenges. In particular, although the model-based
methods [21, 43, 44] have good flexibility and can recon-
struct videos with different resolutions and compression
rates, they require long reconstruction time and can only
achieve poor reconstruction quality. In order to improve the
reconstruction quality and running speed, PnP-FFDNet [46]
and PnP-FastDVDnet [47] integrate the pre-trained denois-
ing network into an iterative optimization algorithm. How-
ever, they still need a long reconstruction time on large-
scale datasets, e.g., PnP-FastDVDNet takes hours to recon-
struct a UHD video from a single measurement.
By contrast, deep learning based methods [28,30,35,40]
[Figure 2 components: original scene, masks, camera (with a Bayer filter for the color case), grayscale/Bayer measurement, and the proposed EfficientSCI network producing the recovered grayscale/color video.]
Figure 2. Schematic diagram of grayscale and color video SCI.
have better real-time performance and higher reconstruction
quality. For example, BIRNAT [8] uses bidirectional recur-
rent neural network and generative adversarial method to
surpass model-based method DeSCI [21] for the first time.
MetaSCI [39] has made some explorations for the model
to adapt to different masks, which reduces the model train-
ing time. DUN-3DUnet [40] and ELP-Unfolding [42] com-
bine iterative optimization ideas with deep learning mod-
els to further improve reconstruction quality. However, due
to the high model complexity and insufficient GPU mem-
ory, most existing deep learning algorithms cannot train
the models required for reconstructing HD or large-scale
videos. RevSCI [7] uses a reversible mechanism [2] to re-
duce the memory used for model training, and can recon-
struct HD video with a compression rate up to 24, but the
model training time increases exponentially. In addition, the
current reconstruction algorithms generally use convolution
to establish spatial-temporal correlation. Due to the local
connection of convolution, long-term dependencies cannot
be well established, and the model cannot reconstruct data
with high compression rates.
In summary, model-based methods usually require long
reconstruction time and can only achieve poor reconstruc-
tion quality. Learning-based methods have high model
complexity but cannot be well applied to large-scale color
video reconstruction. To address these challenges, we de-
velop an efficient network for video SCI by using dense
connections and space-time factorization mechanism . As
shown in Fig. 1, our proposed method dramatically outper-
forms all previous deep learning based reconstruction algo-
rithms in terms of reconstruction quality and running speed
with fewer parameters. Our main contributions can be sum-
marized as follows:
• An efficient end-to-end network, dubbed EfficientSCI,
is proposed for reconstructing high quality video
frames from a snapshot SCI measurement.
• By building hierarchical dense connections within a
single residual block, we devise a novel ResDNet block that
effectively reduces model computational complexity while
enhancing the learning ability of the model.
• Based on the space-time factorization mechanism, a
Convolution and Transformer hybrid block (CFormer) is built,
which can efficiently establish space-time correlation by using
convolution in the spatial domain and Transformer in the
temporal domain, respectively.
• Experimental results on a large number of simu-
lated and real datasets demonstrate that our proposed
method achieves state-of-the-art (SOTA) results and
better real-time performance.
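The sketch below illustrates the general idea of the space-time factorization named in the CFormer contribution above, applying 2D convolution per frame and temporal self-attention per spatial location; it is our simplified illustration under assumed tensor shapes, not the actual CFormer block.

```python
import torch
import torch.nn as nn

class SpaceTimeFactorizedBlock(nn.Module):
    """Spatial modeling with convolution, temporal modeling with attention."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.GELU(),
        )
        self.temporal = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                      # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        # Spatial branch: 2D conv applied to every frame independently.
        xs = self.spatial(x.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        # Temporal branch: self-attention over T at each spatial location.
        xt = xs.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        xt = self.norm(xt)
        xt, _ = self.temporal(xt, xt, xt)
        xt = xt.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        return x + xt                          # residual connection

blk = SpaceTimeFactorizedBlock(32)
print(blk(torch.randn(1, 8, 32, 16, 16)).shape)  # torch.Size([1, 8, 32, 16, 16])
```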
|
Vendrow_JRDB-Pose_A_Large-Scale_Dataset_for_Multi-Person_Pose_Estimation_and_Tracking_CVPR_2023 | Abstract
Autonomous robotic systems operating in human envi-
ronments must understand their surroundings to make ac-
curate and safe decisions. In crowded human scenes with
close-up human-robot interaction and robot navigation, a
deep understanding of surrounding people requires reason-
ing about human motion and body dynamics over time with
human body pose estimation and tracking. However, ex-
isting datasets captured from robot platforms either do not
provide pose annotations or do not reflect the scene distri-
bution of social robots. In this paper, we introduce JRDB-
Pose, a large-scale dataset and benchmark for multi-person
pose estimation and tracking. JRDB-Pose extends the ex-
isting JRDB which includes videos captured from a social
navigation robot in a university campus environment, con-
taining challenging scenes with crowded indoor and out-
door locations and a diverse range of scales and occlu-
sion types. JRDB-Pose provides human pose annotations
with per-keypoint occlusion labels and track IDs consistent
across the scene and with existing annotations in JRDB.
We conduct a thorough experimental study of state-of-the-
art multi-person pose estimation and tracking methods on
JRDB-Pose, showing that our dataset imposes new chal-
lenges for the existing methods. JRDB-Pose is available at
https://jrdb.erc.monash.edu/ .
| 1. Introduction
Visual scene understanding of human environments is a
difficult and crucial task for autonomous driving, human-
robot interaction, safe robotic navigation, and human action
recognition. Although rough predictions of human location
are sufficient for some applications, a deep understanding
of crowded human scenes and close-up human-robot inter-
action requires reasoning about human motion and body
dynamics with human body pose estimation and tracking.
Developing an AI model to predict human body pose is
Figure 1. JRDB-Pose provides high frequency annotations of
tracks and body joints in long scenes of crowded indoor and out-
door locations featuring dynamic motion and occlusion.
made more difficult by the varied and highly imbalanced
range of human motion found in daily living environments,
including a variety of scales, occlusions, and overlapping
humans, representing a long-tailed distribution of human
poses which is difficult for existing methods.
Human pose estimation and tracking is an active research
area with many new large-scale datasets [ 13,16,26,46]
contributing to significant recent progress; however, these
datasets do not primarily target robotic perception tasks in
social navigation environments, and thus rarely reflect spe-
cific challenges found in human-robot interaction and robot
navigation in crowded human environments, e.g. shopping
malls, university campus, etc.
JRDB [ 28] previously introduced a large-scale dataset
and a benchmark for research in perception tasks related to
robotics in human environments. The dataset was captured
using a social manipulator robot with a multi-modal sensor
suite including a stereo RGB 360° cylindrical video stream,
3D point clouds from two LiDAR sensors, audio and GPS
positions. JRDB [ 28] additionally introduced annotations
for 2D bounding boxes and 3D oriented cuboids. Recently,
JRDB-Act [ 15] further introduced new annotations on the
JRDB videos for individual actions, human social group
Dataset  # Poses  # Boxes  Tracks  Crowd  ppF  Occlusion  Action  Indoor + Outdoor  Robot Navigation  Multi-Modal  Multi-Task
MPII [ 3] 40K 1-17 X
Penn Action [ 54] 160k 160k 1 XX
COCO [ 26] 250k 500k 1-20 XX X
KITTI [ 18] 80k XX X X X
H3D [ 35] 460k XX X
MOT20 [ 12] 1.65M XX X X
THÖR [41] 2.5M XX X
PoseTrack21 [ 13] 177k 429k XX 1-13 XX
Waymo [ 46] 173K 9.9M XX unk XX X
JRDB-Pose 636k 2.8M XX 1-36 XX X X XX
JTA†[16] 10M X 0-60 X X
MotSynth†[16] 40M X X 0-125 X X
Table 1. Comparison of existing public datasets related to 2D pose estimation and tracking. For each dataset we report the numbers of
poses, boxes, as well as the availability of tracking information, crowd data, people per frame (ppF), occlusion labels, action labels, scene
type, and if the data comes from robot navigation in human environments. We mark if a dataset has data modalities besides RGB frames,
and if it contains annotations for multi-task types. Note that JRDB-Pose is a multi-modal dataset captured from a social navigation robot,
addressing different research challenges than many existing works.†Synthetic dataset. unk: Unknown.
formation, and social activity of each social group.
JRDB was collected from a robotic navigation platform
in crowded human environments, diversely capturing both
indoor and outdoor scenes. Additionally since the robot’s
camera is located at person-level, and moves around, the
data is not just collected from a far-off view but captures
close-up scenes.
For robotic systems to safely navigate dynamic human
environments and perform collision risk prediction, they
must be able to accurately track and forecast motion of
people in their surroundings. Human motion is often fast
and requires high frame rate data for accurate prediction
and tracking, making high-frequency annotated human pose
data crucial for the development and evaluation of robotic
perception systems in human environments. Complex so-
cial interactions add difficulty and similarly benefit from
high-frequency data. In crowded scenes with high levels
of occlusions or overlap with other humans, tracking may
be also difficult.
We introduce JRDB-Pose, a large-scale dataset captured
from a mobile robot platform containing human pose and
head box annotations. JRDB-Pose provides 600k pose an-
notations and 600k head box annotations, each with an as-
sociated tracking ID. JRDB-Pose includes a wide distri-
bution of pose scales and occlusion levels, each with per-
keypoint occlusion labels and consistent tracking IDs across
periods of occlusion. The combination of JRDB-Pose with
JRDB and JRDB-Act forms a valuable multi-modal dataset
providing a comprehensive suite of annotations suited for
robotic interaction and navigation tasks.
Our contributions are:
•We introduce JRDB-Pose, a large-scale pose estima-
tion and tracking dataset providing pose annotations
and head boxes with tracking IDs and per-keypoint oc-
clusion labels.
•In addition to adopting the popular metrics, we intro-duce new metrics, OSPA-Pose and OSPA(2)-Pose for
pose estimation and tracking, respectively.
•We conduct a comprehensive evaluation of state-
of-the-art methods on JRDB-Pose and discuss the
strengths and weaknesses of existing methods.
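For reference, the classical OSPA distance of order p with cutoff c between two finite sets is recalled below; the OSPA-Pose and OSPA(2)-Pose metrics listed above are defined later in the paper, and we only assume here that they adapt this standard construction with a pose-level base distance.

```latex
% For finite sets X = \{x_1,\dots,x_m\} and Y = \{y_1,\dots,y_n\} with m \le n,
% base distance d(\cdot,\cdot), cutoff c > 0 and order p \ge 1:
\bar d_p^{(c)}(X, Y) =
\left( \frac{1}{n} \left( \min_{\pi \in \Pi_n} \sum_{i=1}^{m}
d^{(c)}\!\big(x_i, y_{\pi(i)}\big)^p \;+\; c^p\,(n - m) \right) \right)^{1/p},
% where d^{(c)}(x, y) = \min\big(c,\, d(x, y)\big) and \Pi_n is the set of
% permutations of \{1,\dots,n\}. A pose-level base distance (e.g., a keypoint
% distance) turns this into a set distance between predicted and ground-truth poses.
```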
|
Wei_Enhancing_the_Self-Universality_for_Transferable_Targeted_Attacks_CVPR_2023 | Abstract
In this paper, we propose a novel transfer-based tar-
geted attack method that optimizes the adversarial pertur-
bations without any extra training efforts for auxiliary net-
works on training data. Our new attack method is pro-
posed based on the observation that highly universal ad-
versarial perturbations tend to be more transferable for
targeted attacks. Therefore, we propose to make the per-
turbation to be agnostic to different local regions within
one image, which we called as self-universality. Instead
of optimizing the perturbations on different images, opti-
mizing on different regions to achieve self-universality can
get rid of using extra data. Specifically, we introduce a
feature similarity loss that encourages the learned pertur-
bations to be universal by maximizing the feature similar-
ity between adversarial perturbed global images and ran-
domly cropped local regions. With the feature similarity
loss, our method makes the features from adversarial
perturbations more dominant than those of benign images,
hence improving targeted transferability. We name the
proposed attack method the Self-Universality (SU) attack.
Extensive experiments demonstrate that SU can achieve
high success rates for transfer-based targeted attacks. On
ImageNet-compatible dataset, SU yields an improvement of
12% compared with existing state-of-the-art methods. Code
is available at https://github.com/zhipeng-
wei/Self-Universality .
| 1. Introduction
It has been demonstrated in recent works that adversarial
examples have the property of transferability, which
means an adversarial example generated on one white-
box model can be used to fool other black-box models
[3, 15, 27, 30, 33]. The existence of transferability brings
convenience to performing black-box attacks, hence raising
security concerns for deploying deep models in real-world
†Corresponding author.
applications [14,22,28,35]. Consequently, considerable re-
search attention has been spent on improving the transfer-
ability of adversarial examples for both non-targeted and
targeted attacks [4, 29, 36].
Compared to non-targeted attacks, transfer-based tar-
geted attacks are inherently much more challenging since
the goal is to fool deep models into predicting the specific
target class. The major difficulty of transfer-based targeted
attacks is caused by the fact that the gradient directions
from a source image to a target class are usually different
among different DNNs [16]. Hence, transfer-based attack
methods designed for non-targeted attacks typically work
poorly for targeted attacks. To increase the transferabil-
ity, previous studies make efforts in aligning the feature of
the generated adversarial example with the feature distribu-
tions of the targeted class, which are learned from class-
specific auxiliary networks [7, 8] or generative adversarial
networks [18]. However, these works assume that the train-
ing dataset is available and require extra training efforts for
auxiliary networks, making it hard to apply in real-world
scenarios.
This paper investigates the problem of transfer-based tar-
geted attacks. Specifically, we propose a new method that
improves the transferability of adversarial examples in a
more efficient way, i.e., without any training efforts for
auxiliary networks to learn the feature distributions of the
targeted class. Our method is proposed based on the ob-
servation that more universal perturbations yield better at-
tack success rates in targeted attacks. To this end, our
goal is to enhance the universality of the generated adver-
sarial perturbations, in order to improve its targeted trans-
ferability. Note that existing universal adversarial pertur-
bation (UAP) attacks [17] require optimizing the pertur-
bations on an abundance of images to achieve universality,
which is not applicable in our setting. To get rid of us-
ing extra data and make transfer-based targeted attacks as
convenient as non-targeted attacks, we propose to make the
perturbation agnostic to different local regions within
one image, which we call self-universality. Then our
method optimizes the self-universality of adversarial pertur-
Figure 1. Overview of the proposed SU attack. The random cropping is applied to the given benign image to generate the local image
patch. After cropping, the local patch is resized to the shape of the benign image. Then both benign and local adversarial images with the
shared perturbations are input to a surrogate white-box CNN model. Finally, the gradients obtained from the classification loss and the
feature similarity loss are used to optimize perturbations.
bations instead. To be specific, in addition to classification
loss, our Self-Universality (SU) attack method introduces a
feature similarity loss that maximizes the feature similarity
between adversarial perturbed global images and randomly
cropped local regions to achieve self-universality. In this
way, our method makes the features from adversarial per-
turbations more dominant than those of benign images,
hence improving targeted transferability.
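To make the optimization described above concrete, the following is a minimal PyTorch-style sketch of one perturbation update, not the authors' released implementation; the feature extractor features(), the crop ratio, the loss weight lam, and the step sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def su_update(x, delta, target, model, features, lam=1e-4, alpha=2/255, eps=16/255):
    """One Self-Universality-style targeted update step (sketch).

    x: benign images (B, 3, H, W); delta: shared perturbation of the same shape;
    target: target class indices (B,); model: white-box surrogate classifier;
    features: callable returning an intermediate feature map of the model.
    """
    delta = delta.clone().detach().requires_grad_(True)

    # Global input: benign image plus the shared perturbation.
    x_global = torch.clamp(x + delta, 0, 1)

    # Local input: a random crop of the benign image, resized back to the
    # input resolution and carrying the same shared perturbation.
    h, w = x.shape[-2:]
    ch, cw = int(0.6 * h), int(0.6 * w)
    top = torch.randint(0, h - ch + 1, (1,)).item()
    left = torch.randint(0, w - cw + 1, (1,)).item()
    crop = x[..., top:top + ch, left:left + cw]
    x_local = torch.clamp(F.interpolate(crop, size=(h, w), mode="bilinear",
                                        align_corners=False) + delta, 0, 1)

    # Targeted classification loss on both inputs, plus a feature similarity
    # loss that pulls the global and local adversarial features together.
    cls_loss = F.cross_entropy(model(x_global), target) + \
               F.cross_entropy(model(x_local), target)
    sim = F.cosine_similarity(features(x_global).flatten(1),
                              features(x_local).flatten(1), dim=1).mean()

    loss = cls_loss - lam * sim        # minimize CE, maximize similarity
    loss.backward()

    # Standard L_inf sign step with projection onto the epsilon ball.
    return (delta - alpha * delta.grad.sign()).clamp(-eps, eps).detach()
```

In practice this step would be wrapped in the usual iterative attack loop and combined with other transferability techniques; those details are omitted here.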
Figure 1 gives an overview of the proposed Self-
Universality (SU) attack. SU firstly applies random crop-
ping on benign images to obtain local cropped patches.
Then it resizes local patches to the same size with benign
images. Consequently, global and local inputs with shared
perturbations are input to the white-box model. Finally, ad-
versarial perturbations are updated by minimizing the clas-
sification loss ( e.g., Cross Entropy) between inputs and the
target class and maximizing the feature similarity loss ( e.g.,
Cosine Similarity) of adversarial intermediate features be-
tween local and global inputs. Benefiting from satisfying
the prediction of the target class between global and lo-
cal inputs and approximating adversarial intermediate fea-
tures between the two, the proposed SU attack can generate
perturbations with self-universality, thereby improving the
cross-model targeted transferability. We briefly summarize
our primary contributions as follows:
• Through experiments, we find that highly universal ad-
versarial perturbations tend to be more transferable for
targeted attacks, which brings new insight into the de-
sign of transfer-based targeted attack methods.
• Based on the finding, we propose a novel Self-
Universality (SU) attack method that enhances the uni-
versality of adversarial perturbations for better targeted
transferability without the requirement for extra data.
• We conduct comprehensive experiments to demonstrate that the proposed SU attack can significantly im-
prove the cross-model targeted transferability of adver-
sarial images. Notably, SU can be easily combined
with other existing methods.
|
Wang_Rethinking_the_Correlation_in_Few-Shot_Segmentation_A_Buoys_View_CVPR_2023 | Abstract
Few-shot segmentation (FSS) aims to segment novel ob-
jects in a given query image with only a few annotated
support images. However, most previous best-performing
methods, whether prototypical learning methods or affinity
learning methods, neglect to alleviate false matches caused
by their own pixel-level correlation. In this work, we rethink
how to mitigate the false matches from the perspective of
representative reference features (referred to as buoys), and
propose a novel adaptive buoys correlation (ABC) network
to rectify direct pairwise pixel-level correlation, including a
buoys mining module and an adaptive correlation module.
The proposed ABC enjoys several merits. First, to learn
the buoys well without any correspondence supervision, we
customize the buoys mining module according to the three
characteristics of representativeness, task awareness and re-
silience. Second, the proposed adaptive correlation module
is responsible for further endowing buoy-correlation-based
pixel matching with an adaptive ability. Extensive experimen-
tal results with two different backbones on two challenging
benchmarks demonstrate that our ABC, as a general plu-
gin, achieves consistent improvements over several leading
methods on both 1-shot and 5-shot settings.
| 1. Introduction
Semantic segmentation has achieved conspicuous
achievements attributed to the recent advances in deep neural
network [20]. However, its data-driven nature makes it heav-
ily dependent on massive pixel-level training data, which is
labor-intensive and time-consuming to collect. To imitate
the human learning habits which can recognize new classes
with only a glance, few-shot segmentation [25] (FSS) has
attracted increasing interest in recent years, which aims at
segmenting novel objects in the given query image with a
few annotated support images.
In previous literature, superior prototypical learning meth-
ods [15, 28, 37] and affinity learning methods [11, 26, 30, 40]
*Equal contribution
†Corresponding author
Figure 1. The motivation of our proposed method. False matches
tend to occur in the pixel-level correlation due to large intra-class
variations. We introduce a series of representative features (buoys)
as references and calculate the buoys-level correlation to suppress
false matches.
are almost all equipped with pixel-level correlation. In spe-
cific, for prototypical learning methods, pixel-level correla-
tion is implicitly endowed with the expectation to generate
the foreground prior mask [28] for guiding the query pixel
classification. For affinity learning methods, pixel-level cor-
relation directly serves to aggregate support information and
convey it to the query image [40].
Despite their promising results, these methods neglect
the fact that there may exist cluttered background and inher-
ent large intra-class variations between support and query
images. In this case, directly employing pairwise pixel cor-
relation may lead to considerable false matches. To make
matters worse, the negative impact is inevitably amplified
by inbuilt low-data regimes of FSS, leading to sub-optimal
results. As shown in Fig. 1 (a), due to the significant pose
difference of the object plane in the support-query image
pair,p2in the query image located in the plane hatch is erro-
neously closer to p3situated on the ground than counterpart
p1in the support image. Therefore, it is highly desirable
to rectify these false matches caused by the direct pairwise
pixel-level correlation.
In this paper, we aim to mitigate the false matches in
previous FSS methods from the perspective of representative
reference features (referred to as buoys ). Specifically, we
design a novel Adaptive Buoys Correlation ( ABC ) network
that can be applied as a generic plugin, including a buoy
mining module and an adaptive correlation module to rec-
tify direct pairwise pixel-level correlation for robust FSS.
The main idea is, for each pixel from the support or query
image, we can obtain the buoy-level correlation ( i.e., a like-
lihood vector) by comparing this pixel with a set of buoys.
In essence, the buoy-level correlation reflects the consensus
among representative buoys with a broader receptive field,
thus it encodes the relative semantic comparability of the
buoys that can be relied upon. Intuitively, each pair of true
pixel correlation ( e.g., the p1-p2pair in Fig. 1 (b)) derived
from the query and support images should be not only vi-
sually similar to each other ( i.e., high pairwise pixel-level
correlation), but also similar to any other buoys ( i.e., simi-
lar buoy-level correlation pair). Based on this correlation
consistency in ABC, false matches caused by similar vision
but dissimilar buoy correlations will be suppressed ( e.g.,
the point p3-p2pair in Fig. 1 (b)), ensuring that true pixel
correlations enjoy higher weights to safely extract support
information.
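A minimal PyTorch-style sketch of this rectification idea is given below; it is not the paper's implementation, and the feature shapes, the softmax temperature, and the multiplicative fusion of the two similarities are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def buoy_rectified_correlation(f_q, f_s, buoys, tau=0.1):
    """Rectify pixel-level correlation with buoy-level correlation (sketch).

    f_q: query pixel features (Nq, C); f_s: support pixel features (Ns, C);
    buoys: shared reference features (K, C).
    """
    f_q, f_s, buoys = [F.normalize(x, dim=-1) for x in (f_q, f_s, buoys)]

    # Direct pairwise pixel-level correlation (cosine similarity).
    pixel_corr = f_q @ f_s.t()                         # (Nq, Ns)

    # Buoy-level correlation: each pixel's distribution over the buoys.
    q_buoy = F.softmax(f_q @ buoys.t() / tau, dim=-1)  # (Nq, K)
    s_buoy = F.softmax(f_s @ buoys.t() / tau, dim=-1)  # (Ns, K)

    # Consistency of the two buoy distributions for every pixel pair: a true
    # match should agree both in appearance and in how it relates to the buoys.
    buoy_corr = q_buoy @ s_buoy.t()                    # (Nq, Ns)

    # One simple fusion is the product, which suppresses visually similar
    # but buoy-inconsistent (false) matches.
    return pixel_corr * buoy_corr
```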
However, it is non-trivial to learn the buoys well without
any correspondence supervision for training. In the buoys
mining module (BMM), we carefully design this module
customized for the following three characteristics. (1) Rep-
resentativeness. Intuitively, the buoys should have the ability
to represent the diverse semantic clues from both support and
query pixels with a broader semantic contrast descriptive.
In other words, the matching between support-query pixels
based on buoy-level correlation should preserve as much
critical information as possible in the correspondence based
on pixel-level correlation. In specific, we take advantage
of Singular Value Decomposition (SVD) in pursuit of con-
trollable information decay. Besides, a representation decay
loss is devised to prevent the degradation of buoys. (2) Task
awareness. Since tasks are randomly sampled during FSS
training, each individual task consists of unique categories
with large distribution differences. Therefore, to enable the
buoys to perceive the current task and generalize to novel
classes well, we employ the cross-aggregation mechanism to
flexibly adjust buoys to meet the expectations of any tasks,
even the tasks with unseen classes. (3) Resilience. Consider-
ing the large gap between support and query images caused
by large intra-class variations and cluttered background, it
is necessary for buoys to bridge this gap and become more
referable. In specific, we prepend the self-aggregation mech-
anism to amend buoys by reconciling the intrinsic resilience
between support and query images.
Moreover, we observe that not all buoys are profitable
when calculating pixel pair matching based on buoy-level
correlation, and comprehensive consideration of the relation-
ships between helpful buoys with potential intersections can
assist in the final matching score. In the adaptive corre-
lation module (ACM), we endow buoy-correlation-based
pixel matching with an adaptive ability, which can flexibly
assign less weight to irrelevant buoys conditioned on different pixel pairs and focus on the structural similarity of
related buoys. In specific, given the corresponding buoy
pair for each pixel pair as the initial marginal distribution of
the optimal transport(OT) algorithm, we can attain the opti-
mal transport plan which can be regarded as the structural
buoy contribution adaptive to the current pixel pair, and the
corresponding OT distance is adopted for scoring matches.
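As a rough picture of this scoring step, the sketch below runs a standard entropic-OT (Sinkhorn) routine between the buoy distributions of one pixel pair; the cost matrix (e.g., one minus buoy-to-buoy similarity), the entropic weight eps, and the iteration count are assumptions, not the paper's exact configuration.

```python
import torch

def sinkhorn_ot(mu, nu, cost, eps=0.05, iters=50):
    """Entropic optimal transport between two buoy distributions (sketch).

    mu, nu: marginals over K buoys, shape (K,), each summing to one;
    cost:   buoy-to-buoy cost matrix, shape (K, K).
    Returns the transport plan and the OT distance used as a match score.
    """
    Kmat = torch.exp(-cost / eps)            # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(iters):                   # Sinkhorn fixed-point updates
        u = mu / (Kmat @ (nu / (Kmat.t() @ u)))
    v = nu / (Kmat.t() @ u)
    plan = u[:, None] * Kmat * v[None, :]    # structural buoy contribution
    return plan, (plan * cost).sum()         # OT distance as the match score
```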
In this work, our contributions can be concluded as fol-
lows: (1) We propose an Adaptive Buoys Correlation (ABC)
network to rectify the widely used pairwise pixel correla-
tion in FSS. To the best of our knowledge, this is the first
work to mitigate the false matches in FSS methods, from
the perspective of representative reference features (buoys).
(2) We introduce two novel modules, namely Buoys Mining
Module (BMM) and Adaptive Correlation Module (ACM),
for representative buoys construction and adaptive matching
respectively. They can cooperate well to achieve effective
false match suppression. (3) Extensive experimental results
with two different backbones on two challenging bench-
marks demonstrate that our ABC, as a general plugin mod-
ule, achieves consistent improvements over several leading
methods on both 1-shot and 5-shot settings.
|
Wu_GANHead_Towards_Generative_Animatable_Neural_Head_Avatars_CVPR_2023 | Abstract
To bring digital avatars into people’s lives, it is highly
demanded to efficiently generate complete, realistic, and
animatable head avatars. This task is challenging, and it is
difficult for existing methods to satisfy all the requirements
at once. To achieve these goals, we propose GANHead
(Generative Animatable Neural Head Avatar), a novel gen-
erative head model that takes advantages of both the fine-
grained control over the explicit expression parameters and
the realistic rendering results of implicit representations.
Specifically, GANHead represents coarse geometry, fine-
gained details and texture via three networks in canonical
space to obtain the ability to generate complete and realistic
∗Corresponding author
Project page: https://wsj-sjtu.github.io/GANHead/
head avatars. To achieve flexible animation, we define the
deformation field by standard linear blend skinning (LBS),
with the learned continuous pose and expression bases and
LBS weights. This allows the avatars to be directly ani-
mated by FLAME [ 22] parameters and generalize well to
unseen poses and expressions. Compared to state-of-the-art
(SOTA) methods, GANHead achieves superior performance
on head avatar generation and raw scan fitting.
| 1. Introduction
How to efficiently generate photorealistic and animat-
able head avatars without manual effort is an open prob-
lem in computer vision and computer graphics, which has
numerous applications in VR/AR, games, movies, and the
metaverse. In these applications, it is desirable that the head
avatar models fulfill the following requirements: (1) Com-
plete ,i.e., the 3D model can cover the entire head including
the frontal face, the back of head, and the hair region; (2)
Realistic , where the avatar is expected to display vivid tex-
ture and detailed geometry; (3) Animatable ,i.e., the avatar
is supposed to be fully riggable over poses and expressions,
and can be controlled with low-dimensional parameters; (4)
Generative model can be more flexibly applied to various
downstream tasks, therefore, the head avatar model is pre-
ferred to be generative rather than discriminative for large-
scale content generation tasks.
We investigate the research on neural head avatars and
summarize previous works in Tab. 1. 3D morphable models
(3DMMs) built from registered meshes have been widely
employed to model head avatars. Principal component anal-
ysis (PCA) is applied to shape and texture, and novel sub-
jects can be generated by sampling the coefficients of PCA
bases. However, registering real-world raw scans to a tem-
plate mesh with fixed topology is non-trivial, and it is diffi-
cult to define a fixed topology for complex regions like hair.
As a result, most of these methods only model the facial
region [ 3±6,13,15,23,39], while a few cover the full head
without hair [ 2,11,22,33,36]. Moreover, the oversimplifi-
cation of PCA makes the models lack of realism.
In parallel with explicit meshes, implicit representations
have been utilized to approximate complex surfaces. Some
discriminative models [ 14,19,24,31,49] successfully model
the complete head geometry with realistic texture. How-
ever, these methods can only be applied to the reconstruc-
tion task, incapable of generating new samples. Mean-
while, 3D-aware GANs based on implicit representations
[7,8,29,48] can generate multi-view-consistent frontal face
images. Nevertheless, the heads are still incomplete. In
addition, it is difficult to animate the neural head avatars
generated by 3D-aware GANs. Recently, several implicit
generative models [ 18,41,47,50] achieve realistic and ani-
matable head avatars. However, these models either cannot
generate complete head with satisfactory geometry [ 18,50],
or can only be animated implicitly via the learned latent
codes [ 47], which is inconvenient and limits the general-
ization ability to unseen poses and expressions.
It is natural to ask a question: can we build a model
that can generate diverse realistic head avatars, and mean-
while be compatible with the animation parameters of the
common parametric face model (such as FLAME [ 22])?
In this work, we propose a generative animatable neural
head avatar model, namely, GANHead, that simultaneously
fulfills these requirements. Specifically, GANHead repre-
sents the 3D head implicitly with neural occupancy func-
tion learned by MLPs, where coarse geometry, fine-gained
details and texture are respectively modeled via three net-
works. Supervised with unregistered ground truth 3D head
Scheme, Methods: Complete, Realistic, Animatable, Generative
Explicit 3DMMs [2,3,15,40]: ✗, ✗, ✓, ✓
3D-aware GANs [7,8,29,48]: ✗, ✓, ✗, ✓
Personalized Avatars [16,49]: ✓, ✓, ✓, ✗
Personalized Avatars [14,31,44]: ✗, ✓, △, ✗
Implicit Head Models [50]: ✗, ✓, △, ✓
Implicit Head Models [18]: ✗, ✓, ✓, ✓
Implicit Head Models [47]: ✓, ✓, △, ✓
Ours: ✓, ✓, ✓, ✓
Table 1. A summary of current head avatar methods. △ denotes
that the head avatar can only be animated implicitly via the learned
latent codes, and cannot generalize well to unseen expressions.
scans, all these networks are defined in canonical space via
auto-decoder structures that are conditioned by shape, detail
and color latent codes, respectively. This framework allows
GANHead to achieve complete and realistic generation re-
sults, while yielding desirable generative capacity.
The only remaining question is how to control the im-
plicit representation with animation parameters? To answer
this question, we extend the multi-subject forward skinning
method designed for human bodies [ 9] to human faces, en-
abling our framework to achieve flexible animation explic-
itly controlled by FLAME [ 22] pose and expression param-
eters. Inspired by IMAvatar [ 49], the deformation field in
GANHead is defined by standard vertex based linear blend
skinning (LBS) with the learned pose-dependent corrective
bases, the linear blend skinning weights, and the learned
expression bases to capture non-rigid deformations. In this
way, GANHead can be learned from textured scans, and no
registration or canonical shapes are needed.
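The standard LBS deformation referenced here can be sketched as follows; the tensor shapes, the naming, and the exact composition of expression and pose correctives are illustrative assumptions rather than GANHead's precise formulation.

```python
import torch

def lbs_deform(x_c, lbs_w, expr_bases, expr_params, pose_corr, joint_tf):
    """Deform canonical points with FLAME-style linear blend skinning (sketch).

    x_c:         canonical points,                  (N, 3)
    lbs_w:       per-point skinning weights,        (N, J)
    expr_bases:  per-point expression bases,        (N, 3, E)
    expr_params: expression coefficients,           (E,)
    pose_corr:   pose-dependent corrective offsets, (N, 3)
    joint_tf:    rigid transform of each joint,     (J, 4, 4)
    """
    # Non-rigid part: add expression and pose-corrective displacements.
    x = x_c + (expr_bases @ expr_params) + pose_corr            # (N, 3)

    # Rigid part: blend the per-joint transforms with the skinning weights
    # and apply them to the (homogeneous) corrected points.
    tf = torch.einsum("nj,jab->nab", lbs_w, joint_tf)           # (N, 4, 4)
    x_h = torch.cat([x, torch.ones_like(x[:, :1])], dim=-1)     # (N, 4)
    return torch.einsum("nab,nb->na", tf, x_h)[:, :3]           # deformed points
```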
Once GANHead is trained, we can sample shape, de-
tail and color latent codes to generate diverse textured
head avatars, which can then be animated flexibly by
FLAME parameters with nice geometry consistency and
pose/expression generalization capability. We compare our
method with the state-of-the-art (SOTA) complete head
generative models, and demonstrate the superiority of our
method.
In summary, our main contributions are:
• We propose a generative animatable head model that
can generate complete head avatars with realistic tex-
ture and detailed geometry.
• The generated avatars can be directly animated by
FLAME [ 22] parameters, robust to unseen poses and
expressions.
• The proposed model achieves promising results in
head avatar generation and raw scan fitting compared
with SOTA methods.
|
Worchel_Differentiable_Shadow_Mapping_for_Efficient_Inverse_Graphics_CVPR_2023 | Abstract
We show how shadows can be efficiently generated in
differentiable rendering of triangle meshes. Our central ob-
servation is that pre-filtered shadow mapping, a technique
for approximating shadows based on rendering from the
perspective of a light, can be combined with existing dif-
ferentiable rasterizers to yield differentiable visibility infor-
mation. We demonstrate at several inverse graphics prob-
lems that differentiable shadow maps are orders of mag-
nitude faster than differentiable light transport simulation
with similar accuracy – while differentiable rasterization
without shadows often fails to converge.
| 1. Introduction
Differentiable renderers have become an essential tool
for solving inverse problems in computer vision. They cur-
rently come in two flavors: (1) forward rasterization us-
inglocal shading models [9, 10, 39] and (2) path tracing
and/or Monte Carlo methods for global light transport sim-
ulation [22, 36, 53, 76]. While local methods are orders of
magnitude faster, they lack effects of global light interaction
such as shadows, caustics, or indirect illumination.
Modern methods in real-time graphics can generate sur-
prisingly realistic images by using efficient approximations
of global effects. The single most important aspect for in-
creasing the realism of local shading is the consideration of
shadows (see Figure 1): for each pixel to be shaded, check
if the path to a light source is unobstructed before evaluating
a local shading model. Doing this accurately is costly and
many approximate techniques have been developed. Our
central observation is that one of the oldest and most widely
used, shadow maps [71] (see Section 3), can be adapted to
work in differentiable rendering frameworks.
In Section 4 we explain how (certain approximations
of) shadow mapping can be differentiated, exploiting ex-
isting differentiable rasterizers. Our main idea is simi-
lar to shadow maps: exploit efficient rasterization from
the light’s point of view. For differentiable shadows
this means: Existing differentiable rasterizers handle dis-
Figure 1. Adding shadows (middle) to local shading (left) is a
significant step towards global light transport simulation (right).
Figure 2. Complex scene (330k triangles) rendered in real time
with our differentiable shadow mapping. It shows self-shadowing
of objects, shadowing between objects, and colored surfaces. Fi-
nite differences and our automatic derivatives w.r.t. movement of
the light (note the non-zero derivatives at shadow boundaries).
this means: existing differentiable rasterizers handle dis-
continuities of primary visibility along primitive borders;
we use this machinery to handle dis-
continuities of secondary visibility along
shadow borders. The resulting images
contain shadows and are differentiable
(see Figure 2). For many inverse graph-
ics problems the level of realism they
provide will suffice, while being gener-
ated significantly faster than with global
methods. This is important for machine
learning tasks, where the renderings (and
their derivatives w.r.t. scene parameters)
are computed repeatedly. We provide de-
tails of the implementation and how parameters affect opti-
mization based on target images in Section 5.
Given the importance of shadows for realistic image syn-
thesis, it is unsurprising that many inverse problems heavily
depend on them. We demonstrate the trade-off and possi-
bilities of differentiable shadow mappings in several appli-
cations, ranging from pose estimation, over different types
of geometry reconstruction, to interactive editing by manip-
ulating shadows (Section 6).
The main idea of this work, using existing differentiable
rendering frameworks that readily resolve the problem of
visibility discontinuities for dealing with discontinuities of
secondary rays, may be useful for various other scenarios
beyond shadows. We elaborate on this and other immediate
consequences of our approach in Section 7.
|
Wang_BEV-LaneDet_An_Efficient_3D_Lane_Detection_Based_on_Virtual_Camera_CVPR_2023 | Abstract
3D lane detection which plays a crucial role in vehicle
routing, has recently been a rapidly developing topic in au-
tonomous driving. Previous works struggle with practical-
ity due to their complicated spatial transformations and in-
flexible representations of 3D lanes. Faced with the issues,our work proposes an efficient and robust monocular 3D
lane detection called BEV-LaneDet with three main contri-butions. First, we introduce the Virtual Camera that unifies
the in/extrinsic parameters of cameras mounted on differ-ent vehicles to guarantee the consistency of the spatial re-
lationship among cameras. It can effectively promote thelearning procedure due to the unified visual space. We sec-
ondly propose a simple but efficient 3D lane representation
called Key-Points Representation. This module is more suit-
able to represent the complicated and diverse 3D lane struc-tures. At last, we present a light-weight and chip-friendlyspatial transformation module named Spatial Transforma-
tion Pyramid to transform multiscale front-view features
into BEV features. Experimental results demonstrate that
our work outperforms the state-of-the-art approaches interms of F-Score, being 10.6% higher on the OpenLane
dataset and 4.0% higher on the Apollo 3D synthetic dataset,with a speed of 185 FPS. Code is released at https:
//github.com/gigo-team/bev_lane_det .
| 1. Introduction
As one of the fundamental guarantees for autonomous
driving, lane detection has recently received much attentionfrom researchers. Robust lane detection in real-time is oneof the foundations for advanced autonomous driving, which
can provide substantial amounts of useful information for
*Corresponding author
This work was supported by the National Key Research and Devel-
opment Project of New Generation Artificial Intelligence of China underGrant 2018AAA0102504.Autonomous Driving Systems (ADS), vehicle self-control,
localization, and map construction.
3D Head
Backbone
STPBEV output
Virtual
CameraZ
STP
Figure 1. End-to-end framework illustrated. The original image
in bottom-right is transformed to an input image by the Virtual
Camera module. The input image is then encoded into front-view
features by the backbone. Multiscale features from the backbone
are put into the Spatial Transformation Pyramid (STP) to obtain
BEV features. Given the BEV features, the 3D Head called Key-
Points Representation generates BEV output and Z, the height of
BEV lanes. At last, with the BEV output and Z, we can obtain 3D
lanes.
2D lane detection methods have demonstrated remark-
able performances [ 4,18,20,22]. Moreover, their outputs
are usually projected to the flat ground plane by Inverse Per-spective Transformation (IPM) with the camera in/extrinsic
parameters, and then curve fitting is performed to obtain the
BEV lanes. However, the pipeline might cause other prob-lems in the actual driving process [ 1,20] for challenging
situations like uphill and downhill.
In order to overcome these problems, more recent meth-
ods [ 3,6,8,9,14] have started to focus on the more com-
plicated 3D lane perception domain. There are two signif-
icant challenges in 3D lane detection: an efficient spatialtransformation module to obtain BEV features and a robust
representation for 3D lane structures. The acquisition of
BEV features is heavily dependent on camera in/extrinsic
parameters, and previous methods have chosen to incor-porate camera in/extrinsic parameters into the network toobtain BEV features. Moreover, unlike obstacles on theroad, lane structures are slender and diverse. These meth-ods [ 3,8,9] carefully design the 3D anchor representation
for lane structures with strong priors. However, they lacksufficient flexibility in some specific scenarios, as shown inFigure 5. Moreover, the anchor-free method [ 6] proposes a
3D lane representation based on the hypothesis that a linesegment in each predefined tile is straight. This representa-tion is complicated and inaccurate.
Towards the issues, we introduce BEV-LaneDet, an ef-
ficient and real-time pipeline that achieves 3D lane detec-tion from a single image, as shown in Figure 1. Different
from incorporating camera in/extrinsic parameters into the
network to get BEV features, we establish a Virtual Cam-
era, which is applied to images directly. The module unifies
the in/extrinsic parameters of front-facing cameras in differ-
ent vehicles by the homography method [ 2] based on BEV .
This module guarantees the consistency of the spatial rela-
tionship of front-facing cameras in different vehicles and re-
duces variance in data distribution. Therefore, it can effec-
tively promote the learning procedure due to the unified vi-sual space. We also propose a Key-Points Representation as
our 3D lane representation. We demonstrate that it is a sim-ple but effective module to represent 3D lanes and is more
expandable for complicated lane structures in some special
scenarios. Moreover, the cost of computation and the chip’s
friendliness are also crucial factors in autonomous driving.Therefore, a light-weight and easy-to-deploy spatial trans-
formation module based on MLP is our preference. Mean-while, inspired by FPN [ 17], we present the Spatial Trans-
formation Pyramid , which transforms multiscale front-view
features to BEV and provides robust BEV features for 3D
lane detection. In our experiments, we perform extensive
studies to confirm that our BEV-LaneDet significantly out-
performs the state-of-the-art PersFormer [ 3] in terms of F-
Score, being 10.6% higher on the OpenLane real-world test
set [ 3] and 4.0% higher on the Apollo simulation test set [ 9]
with a speed of 185 FPS.
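To illustrate the Virtual Camera idea discussed above, the sketch below warps an image into a standardized virtual camera through the ground-plane homography; the intrinsics/extrinsics conventions and the function names are assumptions made for the example, not the released implementation.

```python
import cv2
import numpy as np

def ground_homography(K, R, t, z=0.0):
    """Homography mapping ground-plane points (Z = z) into image pixels (sketch)."""
    # Pixel ~ K * [r1, r2, r3 * z + t] * [X, Y, 1]^T for a plane of fixed height z.
    return K @ np.column_stack([R[:, 0], R[:, 1], R[:, 2] * z + t])

def warp_to_virtual_camera(img, K_real, R_real, t_real, K_virt, R_virt, t_virt):
    """Re-render an image as if captured by a standard 'virtual' camera.

    The mapping goes image -> ground plane (inverse of the real camera's
    ground homography) -> virtual camera image, so images from differently
    mounted cameras share one consistent view before entering the network.
    """
    H_real = ground_homography(K_real, R_real, t_real)
    H_virt = ground_homography(K_virt, R_virt, t_virt)
    H = H_virt @ np.linalg.inv(H_real)
    return cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
```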
In summary, our main contributions are three-fold:
1)Virtual Camera , a novel preprocessing module to unify
the in/extrinsic parameters of cameras, ensuring data distri-
bution consistency. 2)Key-Points Representation , a simple
but effective representation of 3D lane structures. 3)Spa-
tial Transformation Pyramid , a light-weight and easy-to-
deploy architecture based on MLP to realize transformation
from multiscale front-view features to BEV. Experiments demonstrate that our BEV-LaneDet achieves the state-of-
the-art performance compared to other 3D lane detection
algorithms. |
Xiong_CAPE_Camera_View_Position_Embedding_for_Multi-View_3D_Object_Detection_CVPR_2023 | Abstract
In this paper, we address the problem of detecting 3D ob-
jects from multi-view images. Current query-based methods
rely on global 3D position embeddings (PE) to learn the ge-
ometric correspondence between images and 3D space. We
claim that directly interacting 2D image features with global
3D PE could increase the difficulty of learning view trans-
formation due to the variation of camera extrinsics. Thus
we propose a novel method based on CAmera view Position
Embedding, called CAPE. We form the 3D position embed-
dings under the local camera-view coordinate system instead
of the global coordinate system, such that 3D position em-
bedding is free of encoding camera extrinsic parameters.
Furthermore, we extend our CAPE to temporal modeling by
exploiting the object queries of previous frames and encod-
ing the ego motion for boosting 3D object detection. CAPE
achieves the state-of-the-art performance ( 61.0%NDS and
52.5%mAP) among all LiDAR-free methods on nuScenes
dataset. Codes and models are available.1
| 1. Introduction
3D perception from multi-view cameras is a promising
solution for autonomous driving due to its low cost and rich
semantic knowledge. Given multiple sensors equipped on
autonomous vehicles, how to perform end-to-end 3D percep-
tion integrating all features into a unified space is of critical
importance. In contrast to traditional perspective-view per-
ception that relies on post-processing to fuse the predictions
from each monocular view [44, 45] into the global 3D space,
perception in the bird’s-eye-view (BEV) is straightforward
and thus arises increasing attention due to its unified repre-
sentation for 3D location and scale, and easy adaptation for
downstream tasks such as motion planning.
The camera-based BEV perception is to predict 3D ge-
ometric outputs given the 2D features and thus the vital
*Equal contribution. †Corresponding author. This work was done when
Kaixin Xiong was an intern at Baidu Inc.
1Codes of Paddle3D and PyTorch Implementation.
Figure 1. Comparison of the network structure between PETRv2
and our proposed CAPE. (a) In PETRv2, position embedding of
queries and keys are in the global system. (b) In CAPE, position
embeddings of queries and keys are within the local system of
each view. Bilateral cross-attention is adopted to compute attention
weights in the local and global systems independently.
challenge is to learn the view transformation relationship
between 2D and 3D space. According to whether the explicit
dense BEV representation is constructed, existing BEV ap-
proaches could be divided into two categories: the explicit
BEV representation methods and the implicit BEV repre-
sentation methods. The former constructs an explicit BEV
feature map by lifting the 2D perspective-view features to 3D
space [12, 19, 34]. The latter mainly follow DETR-based [3]
approaches in an end-to-end manner. Without projection or
lift operation, those methods [24, 25, 52] implicitly encode
the 3D global information into 3D position embedding (3D
PE) to obtain 3D position-aware multi-view features, which
is shown in Figure 1(a).
Though learning the transformation from 2D images to
3D global space is straightforward, we reveal that the in-
teraction in the global space for the query embeddings and
3D position-aware multi-view features hinders performance.
The reasons are two-fold. For one thing, defining each cam-
(a)ImagetoGlobal3D(b)ImagetoLocal3D
(a)Imagetoglobal3D(b)Imagetolocal3D图五
优化后优化前Figure 2. View transformation comparison. In previous methods,
view transformation is learned from the image to the global 3D
coordinate system directly. In our method, the view transformation
is learned from image to local (camera) coordinate system.
era coordinate system as the 3D local space, we find that
the view transformation couples the 2D image-to-local trans-
formation and the local-to-global transformation together.
Thus the network is forced to differentiate variant camera
extrinsics in the high-dimensional embedding space for 3D
predictions in the global system, while the local-to-global
relationship is a simple rigid transformation. For another,
we believe the view-invariant transformation paradigm from
2D image to 3D local space is easier to learn, compared to
directly transforming into 3D global space. For example,
though two vehicles in two views have similar appearances
in image features, the network is forced to learn different
view transformations, as depicted in Figure 2 (a).
To ease the difficulty in view transformation from 2D
image to global space, we propose a simple yet effective
approach based on local view position embedding, called
CAPE, which performs 3D position embedding in the lo-
cal system of each camera instead of the 3D global space.
As depicted in Figure 2 (b), our approach learns the view
transformation from 2D image to local 3D space, which
eliminates the variances of view transformation caused by
different camera extrinsics.
Specially, as for key 3D PE, we transform camera frus-
tum into 3D coordinates in the camera system using camera
intrinsics only, then encoded by a simple MLP layer. As for
query 3D PE, we convert the 3D reference points defined in
the global space into the local camera system with camera
extrinsics only, then encoded by a simple MLP layer. In-
spired by [25, 29], we obtain the 3D PE with the guidance
of image features and decoder embeddings, for keys and
queries, respectively. Given that 3D PE is in the local space
whereas the output queries are defined in the global coordi-
nate system, we adopt the bilateral attention mechanism to
avoid the mixture of embeddings in different representation
spaces, as shown in Figure 1(b).
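A minimal sketch of building position embeddings in each camera's local frame, rather than the global frame, is shown below; the MLP sizes, tensor shapes, and interface are illustrative assumptions and not CAPE's exact modules.

```python
import torch
import torch.nn as nn

class CameraViewPE(nn.Module):
    """Encode 3D positions in each camera's local frame (sketch)."""

    def __init__(self, dim=256):
        super().__init__()
        self.key_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.query_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def key_pe(self, pixels, depths, K_inv):
        # Lift image pixels to camera-frame 3D points with intrinsics only:
        # X_cam = d * K^-1 [u, v, 1]^T. pixels: (N, 2), depths: (N,), K_inv: (3, 3)
        homo = torch.cat([pixels, torch.ones_like(pixels[:, :1])], dim=-1)
        pts_cam = depths[:, None] * (homo @ K_inv.T)
        return self.key_mlp(pts_cam)

    def query_pe(self, ref_points_global, T_global_to_cam):
        # Move global 3D reference points into the camera frame with
        # extrinsics only, then encode. ref points: (M, 3), T: (4, 4)
        homo = torch.cat([ref_points_global,
                          torch.ones_like(ref_points_global[:, :1])], dim=-1)
        pts_cam = (homo @ T_global_to_cam.T)[:, :3]
        return self.query_mlp(pts_cam)
```

Because neither embedding depends on where a camera sits in the global frame, visually similar content in different views receives comparable positional cues, which is the point of the camera-view design.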
We further extend CAPE to integrate multi-frame tempo-
ral information to boost the 3D object detection performance,named CAPE-T. Different from previous methods that either
warp the explicit BEV features using ego-motion [11, 19]
or encode the ego-motion into the position embedding [25],
we adopt separated sets of object queries for each frame and
encode the ego-motion to fuse the queries.
We summarize our key contributions as follows:
•We propose a novel multi-view 3D detection method,
called CAPE, based on camera-view position embed-
ding, which eliminates the variances of view transfor-
mation caused by different camera extrinsics.
•We further generalize our CAPE to temporal modeling,
by exploiting the object queries of previous frames and
leveraging the ego-motion explicitly for boosting 3D
object detection and velocity estimation.
•Extensive experiments on the nuScenes dataset show
the effectiveness of our proposed approach and we
achieve the state-of-the-art among all LiDAR-free meth-
ods on the challenging nuScenes benchmark.
|
Xue_IMP_Iterative_Matching_and_Pose_Estimation_With_Adaptive_Pooling_CVPR_2023 | Abstract
Previous methods solve feature matching and pose esti-
mation using a two-stage process by first finding matches
and then estimating the pose. As they ignore the geomet-
ric relationships between the two tasks, they focus on ei-
ther improving the quality of matches or filtering poten-
tial outliers, leading to limited efficiency or accuracy. In
contrast, we propose an iterative matching and pose es-
timation framework (IMP) leveraging the geometric con-
nections between the two tasks: a few good matches are
enough for a roughly accurate pose estimation; a roughly
accurate pose can be used to guide the matching by pro-
viding geometric constraints. To this end, we implement
a geometry-aware recurrent attention-based module which
jointly outputs sparse matches and camera poses. Specif-
ically, for each iteration, we first implicitly embed geo-
metric information into the module via a pose-consistency
loss, allowing it to predict geometry-aware matches pro-
gressively. Second, we introduce an efficient IMP , called
EIMP , to dynamically discard keypoints without potential
matches, avoiding redundant updating and significantly re-
ducing the quadratic time complexity of attention computa-
tion in transformers. Experiments on YFCC100m, Scannet,
and Aachen Day-Night datasets demonstrate that the pro-
posed method outperforms previous approaches in terms
of accuracy and efficiency. Code is available at https:
//github.com/feixue94/imp-release
| 1. Introduction
Feature matching and relative pose estimation are two
fundamental tasks in computer vision and especially impor-
tant to visual localization and 3D reconstruction. Tradition-
ally, the two tasks are performed in two stages separately
by first finding correspondences between keypoints ex-
tracted from two images with nearest neighbor (NN) match-
ing and then estimating the relative pose from predicted
matches with robust estimators, e.g. RANSAC [6,7,20,32].
This pipeline has been the de-facto standard framework for
decades [5]. However, due to repetitive textures/structures,
Figure 1. Process of iterative matching and pose esti-
mation . For each image pair, we report the number in-
liers/rotation/translation errors (top-left) and retained keypoints in
left/right images (top-right) at iterations from 1 to 4. In the it-
erative process, our method finds more inliers spanning almost
the whole image, estimates increasingly precise pose and discards
keypoints without true matches gradually.
changing appearances and viewpoint variations, matches
given by NN often contain a large number of outliers, lead-
ing to poor pose accuracy [36, 37]. To mitigate this prob-
lem, some works [8, 10, 16, 27, 40, 48, 49, 52] filter potential
outliers of predicted matches with neural networks to im-
prove the pose accuracy. Although they report better results,
their performance is limited by the quality of initial matches
and require extra time for filtering at test time. Alterna-
tively, advanced matchers such as SuperGlue [36] enhance
the matching quality directly by using global information
from all keypoints via transformers [45] with a fixed num-
ber ( e.g. 9) of iterations. These methods have obtained re-
markable performance. Yet, their quadratic time complex-
ity for the attention computation degrades the efficiency in
real applications. Some following works [12,38,41] explore
more efficient variations, they run faster but are significantly
less accurate (see Table 1 and 3).
In this paper, we aim to introduce an efficient and accu-
rate framework for iterative matching and pose estimation.
Our approach is built upon the following observations: (1)
a few well distributed matches ( e.g. 5) could give a roughly
accurate pose ( e.g. essential matrix); (2) in turn, a roughly
accurate pose could provide strong geometric constraints
(e.g. epipolar line) to find more accurate matches at low
cost; (3) the pose also reveals which keypoints have po-
tential correspondences, preventing redundant operations.
Based on the geometric connections of the two tasks, we
propose an iterative matching and pose estimation frame-
work (IMP), to perform matching and pose estimation it-
eratively as opposed to in two separate stages. Specifically,
we progressively augment descriptors with self and cross at-
tention as [12,36,38,41], find matches and estimate the rela-
tive pose. As descriptors get gradually more discriminative,
more correct matches can be found, leading to increasingly
more precise pose, as shown in Fig. 1. However, due to the
noise [26] and degeneration ( e.g. co-planar keypoints) [13],
not all inliers could give a good pose [4, 18]. In addition to
the classification loss mainly used by prior methods [12,36],
we apply a pose-consistency loss [49] to the matching pro-
cess, enabling the model to find matches which are not only
accurate but also able to give a good pose.
Moreover, in order to avoid redundant operations on un-
informative keypoints, we employ a sampling strategy by
combining the matching and attention scores of keypoints
and the uncertainty of predicted poses to adaptively remove
useful keypoints, as shown in Fig. 1. Compared with prior
sampling approaches [19, 44] based mainly on attention
scores, our adaptive strategy overcomes the over-sampling
problem effectively. Our framework reduces the time cost
from two aspects. First, in contrast to adopting a fixed num-
ber of iterations for all cases [12, 36, 38], it runs fewer it-
erations for easy cases with few viewpoint or appearance
changes and more for challenging cases. Second, it re-
duces the cost of each iteration, significantly reducing the
quadratic time complexity of attention computation. We
also show that discarding potential outliers increases not
only efficiency but also accuracy (see Sec. 5). The efficient
version of IMP is called EIMP. Ours contributions are as
follows:
We propose to perform geometry-aware matching and
pose estimation iteratively, allowing the two tasks to
boost each other in an iterative manner.
We adopt a robust sampling strategy to adaptively dis-
card redundant keypoints in the iteration process, sig-
nificantly decreasing the time complexity.
We apply the pose uncertainty to the sampling strat-
egy, which further improves the accuracy matching
and pose estimation.
Our experiments on relative pose estimation and large-
scale localization tasks demonstrate that our method out-
performs previous competitors and is more efficient. We
organize the rest of the paper as follows. In Sec. 2, we dis-
cuss related works. In Sec. 3, we give a detailed descriptionof our method. We test the performance of our model in
Sec. 5 and conclude the paper in Sec. 6.
|
Xie_VideoTrack_Learning_To_Track_Objects_via_Video_Transformer_CVPR_2023 | Abstract
Existing Siamese tracking methods, which are built on
pair-wise matching between two single frames, heavily rely
on additional sophisticated mechanism to exploit tempo-
ral information among successive video frames, hindering
them from efficiency and industrial deployments. In this
work, we resort to sequence-level target matching that can
encode temporal contexts into the spatial features througha neat feedforward video model. Specifically, we adapt thestandard video transformer architecture to visual trackingby enabling spatiotemporal feature learning directly fromframe-level patch sequences. To better adapt to the track-ing task, we carefully blend the spatiotemporal informationin the video clips through sequential multi-branch triplet
blocks, which formulates a video transformer backbone.
Our experimental study compares different model variants,
such as tokenization strategies, hierarchical structures, and
video attention schemes. Then, we propose a disentan-
gled dual-template mechanism that decouples static and dy-
namic appearance clues over time, and reduces temporalredundancy in video frames. Extensive experiments show
that our method, named as VideoTrack, achieves state-of-
the-art results while running in real-time.
| 1. Introduction
Visual Object Tracking (VOT) is a fundamental problem
in computer vision that aims to track an object of interest ina video given its bounding box in the first frame [ 53]. In re-
cent years, mainstream approaches formulate visual track-
ing as a target matching problem, striking a good balance
between performance and simplicity.
The philosophy of target matching is to find the object by
looking for locations in the search area whose features havethe largest similarity with those in the target template. How-
*This work was done when Fei Xie was an intern at Microsoft Research
Asia.
Siamese tracking shown in (a), which requires sophisticated mech-anisms (a1)/(a2) to exploit temporal contexts, our neat video trans-
former tracking (VideoTrack) framework, as shown in (b), directly
lifts the pair-wise feature matching into spatiotemporal domain.
ever, target matching methods generally adopt per-frame
object matching manner, where the rich temporal informa-
tion in the video is largely overlooked. The representa-tive methods are Siamese trackers [ 4,20,26,27,57]: pair-
wise frames are fed into Siamese network to extract fea-tures and a matching network/operator is applied for targetmatching. Although recent pure transformer-based Siamese
trackers [ 10,16,55,61] unify feature extraction and match-
ing into a single step by leveraging the Vision Transformer(ViT) [ 14,31,49], these Siamese trackers still follow a pair-
wise matching philosophy that hinders their exploitation oftemporal context.
To explore temporal information, some works have pro-
posed sophisticated yet complex temporal modelling meth-
ods to improve the robustness of pair-wise Siamese track-
ers, where online template updating [ 60,64] and tempo-
ral context propagating among frame-wise features [ 47] are
two widely-adopted paradigms. Despite their great success,
extra hand-crafted hyper-parameters and complex network
modules are inevitably introduced to the Siamese pipeline,
which have a negative impact on the efficiency and are not
friendly to embedded devices. A natural question therefore
arises: can we exploit the temporal context while still main-
tain the tracking pipeline in a neat, end-to-end fashion?
To get rid of the customized updating strategies and re-
dundant temporal modelling modules, we directly expandthe pair-wise input frames into video-level (see Fig. 1), to
capture rich temporal contexts. Specifically, we resort tovideo transformers [ 1] to learn spatiotemporal features, and
establish the inter-frame temporal dependencies by simplefeedforward modelling. Compared to the popular pair-wiseSiamese matching [ 4,27], our video transformer tracking
pipeline (VideoTrack) lifts the 2D matching pipeline into
the spatiotemporal domain, allowing the direct processing
of video-level inputs. Moreover, we modify the video trans-former architecture to better adapt it to tracking task, based
on the following observations and prior knowledge:
Feature learning. A good feature representation is vital
for the downstream vision tasks. Equipped with dedicated
learning scheme in network layers, feature representations
can be effectively enhanced from the shallow to deep level.
Thus, we attempt to encode the temporal contexts at feature-
level by utilizing a video transformer backbone. To ensure
the generality and feasibility, we design our video trans-
former model with the following principles: 1) Scalabil-
ity: Transformer layer is the basic building unit that can
be stacked to construct the backbone network in different
model scales. 2) Compatibility : To avoid expensive pre-
training costs, the modified network should ideally be com-patible with the model paramaters of the image-based visionbackbone, e.g. ViT [ 14]. It not only can utilize the available
pretraining weights, but also prevents the possible perfor-mance degeneration during fine-tuning.
Appearance vs. motion clue. Video could be viewed as a
temporal evolution of a static appearance. Compared to thevideo recognition task which takes the complete frame as
input, the input frames for most trackers are locally cropped
from the online predicted target location, which weakens
the motion clue in video-clips. Thus, we focus more on uti-
lizing the appearance clues. To leverage the strong prior invideo sequences, we explicitly divide them into three cate-gories: initial frame containing strong appearance informa-tion, intermediate frames which contain the dynamic statesof the target and search frame containing the target to be
predicted. Thus, we formulate a three-branch architecture
for the VideoTrack model.
Temporal redundancy. Consecutive video frames are
highly redundant. It is vital to reduce the temporal re-
dundancy as well as effectively modelling temporal con-
texts. Thus, we evaluate three basic temporal modellingapproaches in terms of efficiency, i.e. joint space-time, tem-
poral window and message token attention. With careful
analysis, we propose a disentangled dual-template mecha-
nism (see Sec. 3.4for details) to integrate into the video
backbone which decouples the redundant video informationinto the static & dynamic templates.As shown in Fig. 2, we propose our VideoTrack frame-
work on top of ViT [ 14], formulated by interleaving a series
of building units, named as triplet-block. The triplet-blockhas three hierarchical attention layers that mix the informa-tion flow asymmetrically among three branches. Spatiotem-poral guidance from the historical frames is passed to thecurrent search frame, obtaining a compact feature represen-tation for the final target prediction.
In summary, the main contributions are as follows:
• In contrast to existing Siamese tracking methods and their
labor-intensive temporal modelling, we for the first timelift the 2D pair-wise matching to spatiotemporal domain,encoding temporal context at the feature-level via a neat
feedforward video model, i.e. video vision transformer.
• We make the first attempt to adapt video transformer to
visual tracking. A thorough ablation analysis of video
transformer tracking is conducted, including tokenisation
strategies, model variants and temporal modelling ap-
proaches. Comprehensive analysis may inspire follow-ers to solve VOT task from the perspective of video-level
modelling. Moreover, our tracker exhibits encouraging
results on multiple VOT benchmarks.
|
Vaze_GeneCIS_A_Benchmark_for_General_Conditional_Image_Similarity_CVPR_2023 | Abstract
We argue that there are many notions of ‘similarity’ and
that models, like humans, should be able to adapt to these
dynamically. This contrasts with most representation learn-
ing methods, supervised or self-supervised, which learn a
fixed embedding function and hence implicitly assume a sin-
gle notion of similarity. For instance, models trained on Im-
ageNet are biased towards object categories, while a user
might prefer the model to focus on colors, textures or spe-
cific elements in the scene. In this paper, we propose the
GeneCIS (‘genesis’) benchmark, which measures models’
ability to adapt to a range of similarity conditions. Ex-
tending prior work, our benchmark is designed for zero-
shot evaluation only, and hence considers an open-set of
similarity conditions. We find that baselines from powerful
CLIP models struggle on GeneCIS and that performance on
the benchmark is only weakly correlated with ImageNet ac-
curacy, suggesting that simply scaling existing methods is
not fruitful. We further propose a simple, scalable solution
based on automatically mining information from existing
image-caption datasets. We find our method offers a sub-
stantial boost over the baselines on GeneCIS, and further
improves zero-shot performance on related image retrieval
benchmarks. In fact, though evaluated zero-shot, our model
surpasses state-of-the-art supervised models on MIT-States.
We, the architects of the machine, must decide a-priori
what constitutes its ‘world’; what things are to be taken
as ‘similar’ or ‘equal’ — Karl Popper, 1963
| 1. Introduction
Humans understand many notions of similarity and
choose specific ones depending on the task at hand [ 21,58].
Consider the task of finding ‘similar’ images illustrated
in Figure 1. Which of the rightmost images should be con-
sidered ‘most similar’ to the reference? Given different con-
ditions , each image could be a valid answer. For instance,
we may be interested in a specific object in the scene, focus-
ing on either the ‘car’ or ‘bridge’. One could even indicate
*Work done during an internship at Meta AI Research.
Figure 1. Given different conditions (shown as blue text), differ-
ent images on the right can be considered most ‘similar’ to the
reference on the left. We present a general way to train and evalu-
ate models which can adapt to different notions of similarity.
a ‘negative’ similarity condition, specifying a change in the
image to identify the bottom image as most similar.
Learning such similarity functions is a central goal in
discriminative deep learning [ 11–13,34,63,68,75]. Dis-
criminative models, either supervised [ 30,75] or self-
supervised [ 9,10], learn embedding functions such that
‘similar’ images are closer in feature space than ‘dissimilar’
images. However, since there are infinitely many notions of
image similarity, how do we allow our models to choose?
Almost all current approaches assume a single notion of
similarity, either by explicitly training on a specific concept
[68,75] or through an implicit assumption in the underlying
data distribution [ 9,12]. Meanwhile, prior works tackling
the conditional problem have focused on constrained do-
mains such as fashion [ 69,73] or birds [ 46], with a restricted
set of similarity conditions. This is because developing and
evaluating models that can adapt to generic notions of sim-
ilarity is extremely challenging. Specifically, curating data
to train and evaluate such models is difficult, as collecting
annotations for all concepts of similarity is impossible.
In this work we study the problem of general conditional
image similarity, training on an open-set of similarity con-
ditions, and evaluating on diverse similarity notions in a
‘zero-shot’ manner. We first design a benchmark compris-
ing of four evaluation datasets for conditional image simi-
larity, setting up conditional retrieval tasks. We define these
tasks under a unified framework which spans practical use
cases, and propose the benchmark as a sparse but broad
coverage of the conditional similarity space. We propose
these datasets for zero-shot evaluation only , and suggest
that models which can perform well without fine-tuning can
flexibly adapt to general notions of similarity, as desired.
We name this benchmark GeneCIS (‘genesis’) for General
Conditional Image Similarity. On GeneCIS, we find that
baselines built from powerful CLIP backbones struggle and,
moreover, that performance on it is only weakly correlated
with the backbones’ ImageNet accuracy [ 17]. This is in
contrast to popular vision tasks such as segmentation [ 39]
and detection [ 45], underlining the benchmark’s utility.
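As a concrete illustration of the kind of zero-shot baseline referred to here, a conditional retrieval query can be scored by composing a reference-image embedding with a condition-text embedding and ranking the gallery by cosine similarity. This is a minimal sketch assuming CLIP-style joint embeddings; the additive composition rule and the feature dimension are illustrative assumptions, not necessarily the exact baselines evaluated in the paper.

```python
import torch
import torch.nn.functional as F

def conditional_retrieval(ref_feat, cond_feat, gallery_feats):
    """Rank gallery images for one (reference image, condition text) query.

    ref_feat:      (D,)   embedding of the reference image
    cond_feat:     (D,)   embedding of the condition text
    gallery_feats: (N, D) embeddings of the N gallery images
    All embeddings are assumed to live in one shared vision-language space
    (e.g. a CLIP-style model); adding them is just one simple way to inject
    the condition into the query.
    """
    query = F.normalize(ref_feat + cond_feat, dim=-1)   # compose conditional query
    gallery = F.normalize(gallery_feats, dim=-1)
    scores = gallery @ query                             # cosine similarities
    return scores.argsort(descending=True)               # ranked gallery indices

# toy usage with random features standing in for real model outputs
ranking = conditional_retrieval(torch.randn(512), torch.randn(512), torch.randn(10, 512))
print(ranking[:3])
```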
We also propose a solution to training general condi-
tional similarity models, based on parsing large-scale cap-
tion datasets [ 64,66]. Rather than requiring exhaustive sim-
ilarity annotations, we find that we can automatically mine
this information from already abundant image-caption data.
We show that training in this way offers substantial gains
over the baselines, approaching (and in some cases sur-
passing) carefully designed specific solutions for each of
the GeneCIS tasks. In addition, we demonstrate that our
method scales with increasing amounts of caption data, sug-
gesting promising directions for future work. Finally, on
related benchmarks from the ‘Composed Image Retrieval’
(CIR) field [ 44,74], we find our method provides gains over
zero-shot baselines. In fact, our model outperforms state-
of-the-art on the MIT-States benchmark [ 28], despite being
evaluated zero-shot and never seeing the training data.
Contributions. (i)We present a framework for considering
conditional image similarity, an important but understudied
problem; (ii)We propose the GeneCIS benchmark to test
models’ abilities to dynamically adapt to different notions
of similarity; (iii)We show that current vision-language
models like CLIP struggle on GeneCIS, and that perfor-
mance on it is only weakly correlated with ImageNet accu-
racy; (iv)We design a scalable solution to the conditional
similarity problem based on automatically parsing large-
scale image-caption data; (v)We show our models provide
substantial gains over zero-shot CLIP baselines; (vi)We
validate our models on related CIR benchmarks, surpassing
state-of-the-art on MIT-States despite zero-shot evaluation.
|
Wang_AltFreezing_for_More_General_Video_Face_Forgery_Detection_CVPR_2023 | Abstract
Existing face forgery detection models try to discriminate
fake images by detecting only spatial artifacts ( e.g., gener-
ative artifacts, blending) or mainly temporal artifacts ( e.g.,
flickering, discontinuity). They may experience significant
performance degradation when facing out-domain artifacts.
In this paper, we propose to capture both spatial and tempo-
ral artifacts in one model for face forgery detection. A simple
idea is to leverage a spatiotemporal model (3D ConvNet).
However, we find that it may easily rely on one type of arti-
fact and ignore the other. To address this issue, we present a
novel training strategy called AltFreezing for more general
face forgery detection. The AltFreezing aims to encourage
the model to detect both spatial and temporal artifacts. It
divides the weights of a spatiotemporal network into two
groups: spatial-related and temporal-related. Then the two
groups of weights are alternately frozen during the training
process so that the model can learn spatial and temporal
features to distinguish real or fake videos. Furthermore, we
introduce various video-level data augmentation methods to
improve the generalization capability of the forgery detec-
tion model. Extensive experiments show that our framework
outperforms existing methods in terms of generalization to
unseen manipulations and datasets.
| 1. Introduction
With the recent rapid development of face generation
and manipulation techniques [30, 31, 45 –49, 56], it has be-
come very easy to modify and manipulate the identities or
attributes given a face video. This brings many important
and impressive applications for movie-making, funny video
generation, and so on. However, these techniques can also be
abused for malicious purposes, creating serious crisis of trust
*Equal contribution.
†Corresponding authors.
Figure 1. Illustration of AltFreezing training strategy in a build-
ing block of the spatiotemporal network. The convolutional ker-
nels of the spatiotemporal network are divided into two groups:
temporal-based andspatial-based . Two groups of weights are
alternately frozen during training. With the help of the alternate
freezing ( AltFreezing ) strategy, our model can capture both spatial
and temporal artifacts to distinguish between fake and real videos.
and security in our society. Therefore, how to detect video
face forgeries has become a hot research topic recently.
To date, one successful line of research [10, 32, 34, 39,
42, 44, 50] tries to discriminate fake images by detecting
“spatial” artifacts in the generated images ( e.g., checkboard,
unnaturalness, and characteristic artifacts underlying the
generative model). While these methods achieve impressive
results in searching spatial-related artifacts, they ignore the
temporal coherence of a video and fail to capture “temporal”
artifacts like flicking and discontinuity in the video face
forgeries. Some recent works [25,43,54] notice this issue and
try to address it by leveraging temporal clues. Although they
achieve encouraging results in detecting unnatural artifacts
at the temporal level, the resulting models are not sufficiently
capable of finding spatial-related artifacts.
In this paper, we attempt to capture both spatial and tem-
poral artifacts for general video face forgery detection. Gen-
erally, a well-trained spatiotemporal network (3D ConvNet)
has the capability of searching both spatial and temporal
artifacts. However, we find that naïve training may cause
it to easily rely on spatial artifacts while ignoring temporal
artifacts to make a decision, causing a poor generalization
capability. This is because spatial artifacts are usually more
obvious than temporal incoherence; naïvely optimizing a
3D convolutional network makes it easily rely on spatial
artifacts.
So the question is how to enable the spatiotemporal net-
work to capture both spatial and temporal artifacts. To this
end, we propose a novel training strategy called AltFreez-
ing. As shown in Fig. 1, the key idea is to alternately freeze
spatial- and temporal-related weights during training. Specif-
ically, a spatiotemporal network [9] is built upon 3D res-
blocks, which consist of spatial convolution with kernel size
as 1×Kh×Kw and temporal convolution with kernel size
as Kt×1×1. These spatial and temporal convolutional ker-
nels are responsible for capturing spatial- and temporal-level
features, respectively. Our AltFreezing strategy encourages
the two groups of weights to be updated alternately so that
both spatial and temporal artifacts can be conquered.
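A minimal sketch of how such alternate freezing could be wired up in PyTorch is given below. The kernel-shape test used to split spatial and temporal weights and the fixed alternation period are illustrative assumptions; the paper's exact grouping and schedule may differ.

```python
import torch.nn as nn

def split_spatiotemporal_params(model: nn.Module):
    """Split Conv3d weights into temporal (Kt x 1 x 1) and spatial (1 x Kh x Kw) groups.
    The kernel-shape test is an illustrative assumption about the 3D backbone."""
    temporal, spatial = [], []
    for m in model.modules():
        if isinstance(m, nn.Conv3d):
            kt, kh, kw = m.kernel_size
            group = temporal if (kt > 1 and kh == 1 and kw == 1) else spatial
            group.extend(m.parameters())
    return temporal, spatial

def set_alt_freezing(model: nn.Module, iteration: int, period: int = 1000):
    """Alternately freeze one group: even periods update temporal weights only,
    odd periods update spatial weights only (the period length is a placeholder)."""
    temporal, spatial = split_spatiotemporal_params(model)
    train_temporal = (iteration // period) % 2 == 0
    for p in temporal:
        p.requires_grad = train_temporal
    for p in spatial:
        p.requires_grad = not train_temporal
```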
Furthermore, we propose a set of video-level fake video
augmentation methods for generating fake videos for train-
ing. These methods could be divided into two groups. The
first is fake clips that only involve temporal artifacts wherein
we just randomly drop and repeat frames for real clips. The
second is clips with only spatial artifacts that are obtained
by blending a region from one real clip to another real clip.
These augmentation methods are the first to take the tempo-
ral dimension into consideration and generate spatial-only
and temporal-only fake videos. With these augmentations,
the spatiotemporal model is further encouraged to capture
both spatial and temporal artifacts.
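The two groups of fake-video augmentations can be sketched as follows. The drop probability and the fixed blended rectangle are placeholders for the paper's actual region selection and schedules; only the spirit of "temporal-only" and "spatial-only" fakes is illustrated.

```python
import torch

def temporal_only_fake(clip: torch.Tensor, p_drop: float = 0.1) -> torch.Tensor:
    """clip: (T, C, H, W) real frames.  Randomly drop a frame and repeat the
    previous one, so appearance stays real but motion becomes incoherent."""
    frames = []
    for t in range(clip.shape[0]):
        if frames and torch.rand(()) < p_drop:
            frames.append(frames[-1])          # repeated frame -> temporal flicker
        else:
            frames.append(clip[t])
    return torch.stack(frames)

def spatial_only_fake(clip_a: torch.Tensor, clip_b: torch.Tensor) -> torch.Tensor:
    """Blend a fixed region from clip_b into clip_a on every frame, so each frame
    carries a blending artifact while the motion itself stays consistent."""
    fake = clip_a.clone()
    _, _, h, w = clip_a.shape
    y0, y1, x0, x1 = h // 4, 3 * h // 4, w // 4, 3 * w // 4   # placeholder region
    fake[:, :, y0:y1, x0:x1] = clip_b[:, :, y0:y1, x0:x1]
    return fake
```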
Equipped with the above-mentioned two techniques, we
achieve state-of-the-art performance in various challenging
face forgery detection scenarios, including generalization
capability to unseen forgeries, and robustness to various
perturbations. We also provide a comprehensive analysis
of our method to verify the effectiveness of our proposed
framework.
Our main contributions are three-fold as follows.
•We propose to explore both spatial and temporal arti-
facts for video face forgery detection. To achieve this, a
novel training strategy called AltFreezing is proposed.
•We introduce video-level fake data augmentation meth-
ods to encourage the model to capture a more general
representation of different types of forgeries.
•Extensive experiments on five benchmark datasets in-
cluding both cross-manipulation and cross-dataset eval-
uations demonstrate that the proposed method sets new
state-of-the-art performance.
|
Wang_Robust_Multiview_Point_Cloud_Registration_With_Reliable_Pose_Graph_Initialization_CVPR_2023 | Abstract
In this paper, we present a new method for the multi-
view registration of point cloud. Previous multiview regis-
tration methods rely on exhaustive pairwise registration to
construct a densely-connected pose graph and apply Itera-
tively Reweighted Least Square (IRLS) on the pose graph to
compute the scan poses. However, constructing a densely-
connected graph is time-consuming and contains lots of
outlier edges, which makes the subsequent IRLS struggle to
find correct poses. To address the above problems, we first
propose to use a neural network to estimate the overlap be-
tween scan pairs, which enables us to construct a sparse
but reliable pose graph. Then, we design a novel history
reweighting function in the IRLS scheme, which has strong
robustness to outlier edges on the graph. In comparison
with existing multiview registration methods, our method
achieves 11% higher registration recall on the 3DMatch
dataset and ∼13% lower registration errors on the Scan-
Net dataset while reducing ∼70% required pairwise regis-
trations. Comprehensive ablation studies are conducted to
demonstrate the effectiveness of our designs. The source
code is available at https://github.com/WHU-
USI3DV/SGHR .
| 1. Introduction
Point cloud registration is a prerequisite for many tasks
such as 3D reconstruction [17, 25, 32] and 3D segmenta-
tion [27, 35]. Most recent registration methods [1, 7, 22, 28,
39, 45, 48, 56] mainly focus on pairwise registration of two
partial point clouds (scans), which can only reconstruct a
part of the scene. In order to get a completed scene recon-
*Both authors contribute equally to this research.
†Corresponding authors: [dongzhenwhu, bshyang]@whu.edu.cn
Figure 1. Overview. (1) Given N unaligned partial scans, our tar-
get is to register all these scans into (4) a completed point cloud.
Our method has two contributions. (2) We learn a global feature
vector to initialize a sparse pose graph which contains much less
outliers and reduces the required number of pairwise registrations.
(3) We propose a novel IRLS scheme. In our IRLS scheme, we
initialize weights from both global features and pairwise registra-
tions. Then, we design a history reweighting function to iteratively
refine poses, which improves the robustness to outliers.
struction, all partial point clouds should be simultaneously
aligned, which is called multiview registration . Due to its
complexity, multiview point cloud registration receives less
attention recently and only few recent studies propose mul-
tiview registration methods [18, 21, 30, 53, 54].
Given Nunaligned partial point clouds, multiview reg-
istration aims to find a globally-consistent pose for every
partial point cloud. A commonly-adopted pipeline of mul-
tiview registration consists of two phases [54]. First, a
pairwise registration algorithm [28, 45, 48] is applied to ex-
haustively estimate the relative poses of all N(N-1)/2 scan pairs,
which forms a fully-connected pose graph. The edges of the
graph stand for the relative poses of scan pairs while nodes
represent scans. Since the dense pose graph may include in-
accurate or even incorrect relative poses (outliers) between
two irrelevant scans, in the second phase, these pairwise
poses are jointly optimized by enforcing the cycle consis-
tency [30] to reject outlier edges and improve accuracy. For
the second phase, most recent methods, including hand-
crafted methods [5, 13, 29] or learning-based [21, 30, 54]
methods, follow a scheme of Iterative Reweighting Least
Square (IRLS). In the IRLS, initial weights are assigned
to edges to indicate these edges are reliable or not. Then,
based on the weights, a synchronization algorithm is ap-
plied to compute a new relative pose on every edge. After
that, the weights on edges are updated according to the dif-
ference between the old relative poses and the new ones.
IRLS iteratively synchronize poses from edge weights and
update weights with synchronized poses.
In an ideal case, an IRLS scheme will gradually lower
the weights of the outlier edges and only consider the in-
lier edges for pose synchronization. However, the initial
densely-connected graph contains lots of outliers, which of-
ten prevents the iterative reweighting mechanism of IRLS
from finding correct edges. To improve the robustness to
outliers, many researches focus on applying advanced hand-
crafted reweighting functions [11, 29] or designing graph
network to learn reweighting functions [30, 54]. How-
ever, the handcrafted reweighting functions usually require
a good initialization to converge to the correct poses while
learning-based reweighting methods may not generalize to
unseen settings. Designing a robust IRLS algorithm still
remains an open problem.
In this paper, we show that multiview registration can be
improved from two aspects, as shown in Fig. 1. First, we
learn a good initialization of the input pose graph which
avoids exhaustive pairwise registrations and reduces the
outlier ratio. Second, we propose a novel history reweight-
ing function which enables a stable convergence to correct
poses in the IRLS scheme.
In the pose graph construction, we learn a global feature
on each point cloud, and the correlation of two global features
indicates the overlap ratio between two point clouds. Such
global features enable us to generate a sparse pose graph
with fewer but more reliable edges instead of a densely-
connected graph. After that, we only need to apply the
pairwise registration algorithm and IRLS on these sparse
edges, which greatly reduces the computation complexity
of pairwise registration from O(N^2) to O(N). Mean-
while, these reliable edges contain much less outliers than
the fully-connected graph, which provides the possibility to
find more accurate and consistent global poses in IRLS.
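A minimal sketch of this sparse graph construction is shown below. Using plain cosine similarity between the global descriptors as the overlap score and a fixed top-k edge budget are illustrative assumptions; the point is that only O(N) scan pairs need pairwise registration.

```python
import torch

def build_sparse_pose_graph(global_feats: torch.Tensor, k: int = 10):
    """global_feats: (N, D) one learned global descriptor per scan (assumes k < N).
    Feature correlation serves as a proxy for pairwise overlap, and only the top-k
    most overlapping partners of each scan become graph edges."""
    f = torch.nn.functional.normalize(global_feats, dim=-1)
    overlap = f @ f.t()                                   # (N, N) predicted overlap scores
    overlap.fill_diagonal_(-1.0)                          # no self-edges
    topk = overlap.topk(k, dim=-1).indices                # k candidate partners per scan
    edges = set()
    for i in range(f.shape[0]):
        for j in topk[i].tolist():
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)                                  # scan pairs to register pairwise
```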
Though the initial graph contains much less outliers, ex-
Figure 2. An example on the 3DMatch dataset. (a) The input
scans under the ground truth poses. (b) The constructed sparse
pose graph with two incorrect relative poses (#0-#2 and #0-#4),
where #0 and #4 looks very similar to each other so that the pose
graph incorrectly include this scan pair. (c) and (d) show the nor-
malized weights on the graph edges on different iterations of the
vanilla IRLS and our method respectively. Our method is able
to find the outlier edges and gradually reduce their weights while
vanilla IRLS is biased towards the outlier edge (#0-#4) after few
iterations. (e) and (f) are the multiview registration results of the
vanilla IRLS and our method respectively.
isting IRLS algorithms are still sensitive to these outliers
and can be totally biased towards these outliers in the first
few iterations. An example is shown in Fig. 2: the initial
graph only contains two outlier edges. However, the outlier
scan pair “#0-#4” looks very similar and thus is initialized
with a large weight. Such an incorrect large weight inter-
feres with the subsequent pose synchronization and brings sys-
tematic errors to the synchronized poses. The vanilla IRLS
trusts all synchronized poses and is easily dominated by
these erroneous poses, which leads to incorrect convergence
as shown in Fig. 1(c). To address this problem, we propose
a simple yet effective reweighting function called the his-
tory reweighting function. In the history reweighting function,
edge weights at a specific iteration depend not only on the
synchronized poses at the current iteration but also consid-
ers the historical synchronized poses in previous iterations,
which acts like a regularizer to prevent the IRLS from be-
ing dominated by outliers at the early unstable iterations as
shown in the Fig. 2 (d). Then, the edge weights in our graph
gradually stabilize in the subsequent iterative refinements,
leading to the convergence to correct poses.
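The overall loop can be sketched as follows. The synchronization and residual computations are passed in as callbacks, and the Gaussian-shaped weight on the history-averaged residual is a placeholder; the paper's actual reweighting function differs in form, but the structural point is the same: the weight at iteration t depends on the accumulated history of residuals rather than only on the current iteration.

```python
import numpy as np

def irls_with_history(edges, rel_poses, synchronize, residual, num_iters=20, sigma=0.5):
    """Schematic IRLS loop with a history-based reweighting rule.

    edges       : list of (i, j) scan pairs in the sparse pose graph
    rel_poses   : dict edge -> measured relative pose from pairwise registration
    synchronize : callback (edges, rel_poses, weights) -> absolute poses
    residual    : callback (edge, rel_poses, abs_poses) -> scalar consistency error
    """
    weights = {e: 1.0 for e in edges}
    history = {e: [] for e in edges}
    abs_poses = None
    for _ in range(num_iters):
        abs_poses = synchronize(edges, rel_poses, weights)           # pose synchronization
        for e in edges:
            history[e].append(residual(e, rel_poses, abs_poses))
            r_hist = float(np.mean(history[e]))                      # residual averaged over iterations
            weights[e] = np.exp(-(r_hist ** 2) / (2.0 * sigma ** 2)) # smooth robust weight
    return abs_poses, weights
```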
We evaluate our method on three widely-used bench-
marks: the 3DMatch/3DLoMatch dataset [28,58], the Scan-
Net dataset [16], and the ETH dataset [43]. With the help
of the proposed sparse graph construction and IRLS with
history reweighting, our method surpasses the current mul-
tiview registration baselines by 11.0% and 6.2% in reg-
istration recall on 3DMatch and 3DLoMatch, reduces the
mean rotation and translation errors on ScanNet by 12.8%
and 13.8%. Meanwhile, our method shows strong gener-
alization ability. Only trained on the indoor dataset, our
method achieves a 99.8% registration recall on the outdoor
ETH dataset. Moreover, all the above state-of-the-art per-
formances of our method only require 20%∼40% pairwise
registrations of existing multiview point cloud registration
methods, which demonstrates our computation efficiency.
|
Xu_Gaussian_Label_Distribution_Learning_for_Spherical_Image_Object_Detection_CVPR_2023 | Abstract
Spherical image object detection emerges in many appli-
cations from virtual reality to robotics and automatic driv-
ing, while many existing detectors use ln-norms loss for re-
gression of spherical bounding boxes. There are two intrin-
sic flaws for ln-norms loss, i.e., independent optimization of
parameters and inconsistency between metric (dominated
by IoU) and loss. These problems are common in planar
image detection but more significant in spherical image de-
tection. Solution for these problems has been extensively
discussed in planar image detection by using IoU loss and
related variants. However, these solutions cannot be mi-
grated to spherical image object detection due to the undif-
ferentiable of the Spherical IoU (SphIoU). In this paper, we
design a simple but effective regression loss based on Gaus-
sian Label Distribution Learning (GLDL) for spherical im-
age object detection. Besides, we observe that the scale of
the object in a spherical image varies greatly. The huge
differences among objects from different categories make
the sample selection strategy based on SphIoU challeng-
ing. Therefore, we propose GLDL-ATSS as a better training
sample selection strategy for objects of the spherical image,
which can alleviate the drawback of IoU threshold-based
strategy of scale-sample imbalance. Extensive results on
various two datasets with different baseline detectors show
the effectiveness of our approach.
| 1. Introduction
In the past few years, with the numerous development
of panoramic cameras with omnidirectional vision, the ap-
plications of spherical images and videos are also becom-
ing more extensive, such as virtual & augmented real-
ity [9,17,24], robotics [7,8,18], automatic driving [1,28,31],
etc. As these spherical data increase, the demand for spher-
*This work was done when Hang Xu and Qiang Zhao were at ICT.
†Corresponding author.
(a) Planar image: top pair ||.||1 = 8.42, IoU = 0.36; bottom pair ||.||1 = 8.42, IoU = 0.36. (b) Spherical image: top pair ||.||1 = 6.78, IoU = 0.38; bottom pair ||.||1 = 6.78, IoU = 0.24.
Figure 1. Comparison between planar image and spherical image.
(a) Moving centers of two bounding boxes in the planar image
along the y-axis does not change the distance between the two
centers, so IoU and L1 are unchanged. (b) Moving centers of two
bounding boxes to the equator along the longitude changes the
distance between two centers, which causes IoU decrease sharply
whereas the L1 value is unchanged.
ical vision analysis tasks increases, especially the object
detection task of spherical image. However, compared
with the large literature on planar image object detection
[2, 12, 16, 34, 39], research in spherical image object detec-
tion is relatively in its earlier stage, with many open prob-
lems to solve.
In spherical image object detection, a bounding box is
represented by a Bounding Field of View (BFoV) [25].
Many existing detection benchmarks [4, 5, 26, 27, 29] use
ln-norms loss for the regression of BFoVs. However, the
ln-norms loss has some intrinsic flaws. First, parameters
of the bounding box are optimized independently in the ln-
norms loss, making the detection accuracy sensitive to
the fitting of any of the parameters. Second, Intersection
Figure 2. Overview of our main contributions. Gaussian distributions of spherical bounding boxes are constructed, and the sample selection
strategy (GLDL-ATSS) and regression loss (GLDL loss) are designed in an alignment manner on the basis of K-L divergence. Note that
the GLDL-ATSS and GLDL loss are not involved in the inference phase. Therefore, the inference time remains unchanged.
over Union (IoU) has been the standard metric for object
detection, so ln-norms as regression loss cause the negative
impact of the inconsistency between metric and loss. These
problems are common in planar image detection but more
significant in spherical image detection. Fig. 1(b) shows the
inconsistency between IoU and L1 Loss in the spherical im-
age. Specifically, moving centers of bounding boxes to the
equator along the longitude changes the distance between
two centers, which causes IoU decrease sharply while the
L1 value is unchanged. In contrast, as shown in Fig. 1(a),
moving centers of bounding boxes in the planar image along
the y-axis does not change the distance between the two
centers, so IoU and L1 are unchanged. Solutions for these
problems of ln-norms loss have become recently popular in
planar image detection by using IoU-induced loss, such as
IoU loss [32], GIoU loss [23] and DIoU loss [37]. How-
ever, when calculating the intersection area of two spher-
ical boxes, the number of intersection points needs to be
obtained. When two spherical boxes are completely coin-
cident or one edge is coincident, the number of intersection
points will not be fixed and duplicate points will appear.
The current SphIoU uses the DFS algorithm to remove these
duplicate intersection points. To the best of our knowledge,
the DFS algorithm is undifferentiable. Therefore, Spherical
IoU (SphIoU) [5,27] is undifferentiable and these solutions
based on IoU loss cannot be migrated to spherical image ob-
ject detection. The more recent work [30] finds the key to
maintaining the consistency between metric and regression
loss lies in the trend-level consistency between regression
loss and IoU loss rather than value-level consistency, which
greatly decreases the difficulty of designing alternatives.
In this paper, we design a simple but effective regres-
sion loss based on Gaussian Label Distribution Learning
(GLDL) for spherical image object detection. Specifically,
in the training phase, we first convert tangent planes of the
predicted spherical bounding box and ground truth box into the Gaussian distribution. Then, we devise a dynamic sam-
ple selection strategy (GLDL-ATSS) to select positive sam-
ples, which can alleviate the drawback of IoU threshold-
based strategy of scale-sample imbalance. Finally, we de-
sign a regression loss function based on GLDL for spherical
object detection task. We observe that GLDL loss achieves
a trend-level alignment with SphIoU loss. In the inference
phase, we directly obtain the output for the spherical bound-
ing box from the trained model of the parameter weights, so
the inference time of the network remains unchanged. The
entire framework of the method in this paper is shown in
Fig. 2. The highlights of this paper are as follows:
• We explore a new regression loss function based on
Gaussian Label Distribution Learning (GLDL) for
spherical object detection task. It achieves a trend-
level alignment with SphIoU loss and thus naturally
improves the model.
• We align the measurement between sample selection
and loss regression based on the GLDL, and then
construct new dynamic sample selection strategies
(GLDL-ATSS) accordingly. GLDL-ATSS can allevi-
ate the drawback of IoU threshold-based strategy (i.e.,
scale-sample imbalance).
• Extensive experimental results on two datasets and
popular spherical image detectors show the effective-
ness of our approach.
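As a concrete illustration of the Gaussian-based regression described above, the sketch below converts a box parameterization into a 2D Gaussian and measures the Kullback-Leibler divergence between the predicted and ground-truth distributions. The axis-aligned covariance construction and the (x, y, w, h) parameterization are assumptions for illustration; the paper derives the Gaussians from the tangent planes of the spherical boxes (BFoVs).

```python
import torch

def kl_gaussian_2d(mu_p, cov_p, mu_q, cov_q):
    """KL(N_p || N_q) between two 2D Gaussians built from the predicted and
    ground-truth boxes; only the statistical distance itself is sketched here."""
    k = mu_p.shape[-1]
    cov_q_inv = torch.linalg.inv(cov_q)
    diff = (mu_q - mu_p).unsqueeze(-1)                                       # (2, 1)
    term_trace = torch.diagonal(cov_q_inv @ cov_p, dim1=-2, dim2=-1).sum(-1)
    term_mahal = (diff.transpose(-1, -2) @ cov_q_inv @ diff).squeeze(-1).squeeze(-1)
    term_logdet = torch.logdet(cov_q) - torch.logdet(cov_p)
    return 0.5 * (term_trace + term_mahal - k + term_logdet)

def box_to_gaussian(x, y, w, h):
    """Toy mapping from a box (center, size) to (mean, covariance); axis-aligned
    diagonal covariance is an illustrative assumption."""
    mu = torch.tensor([x, y])
    cov = torch.diag(torch.tensor([(w / 2.0) ** 2, (h / 2.0) ** 2]))
    return mu, cov

loss = kl_gaussian_2d(*box_to_gaussian(0.1, 0.2, 1.0, 0.8),
                      *box_to_gaussian(0.0, 0.0, 1.0, 1.0))
print(loss)
```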
|
Xue_Stare_at_What_You_See_Masked_Image_Modeling_Without_Reconstruction_CVPR_2023 | Abstract
Masked Autoencoders (MAE) have been prevailing
paradigms for large-scale vision representation pre-
training. By reconstructing masked image patches from
a small portion of visible image regions, MAE forces the
model to infer semantic correlation within an image. Re-
cently, some approaches apply semantic-rich teacher mod-
els to extract image features as the reconstruction target,
leading to better performance. However, unlike the low-
level features such as pixel values, we argue the features
extracted by powerful teacher models already encode rich
semantic correlation across regions in an intact image. This
raises one question: is reconstruction necessary in Masked
Image Modeling (MIM) with a teacher model? In this paper,
we propose an efficient MIM paradigm named MaskAlign.
MaskAlign simply learns the consistency of visible patch
features extracted by the student model and intact image
features extracted by the teacher model. To further ad-
vance the performance and tackle the problem of input in-
consistency between the student and teacher model, we pro-
pose a Dynamic Alignment (DA) module to apply learn-
able alignment. Our experimental results demonstrate that
masked modeling does not lose effectiveness even with-
out reconstruction on masked regions. Combined with
Dynamic Alignment, MaskAlign can achieve state-of-the-
art performance with much higher efficiency. Code and
models will be available at https://github.com/
OpenPerceptionX/maskalign .
| 1. Introduction
In recent years, Vision Transformers are showing
tremendous potential in computer vision area [8, 40, 41].
Following the big success of masked modeling in the nat-
ural language processing [24], Masked Image Modeling
*This work was performed when Hongwei Xue was visiting Shanghai
AI Laboratory as a research intern.
†Corresponding authors.
Figure 1. Comparison with existing paradigms of masked im-
age modeling. (a) Inpainting-style : BEiT [3], MaskFeat [43],
MVP [44], BEiT V2 [33], etc. They take the whole image with
some mask token replacement as the input of Encoder. Then a
Linear head is applied to predict masked feature. (b) Decoder-
style : MAE [13], CAE [5], MCMAE [11], etc. They drop most
tokens and take the rest as the input of Encoder. Then a multi-layer
Transformer is applied to decode masked features from visible to-
kens. (c) Ours. : Our paradigm take some visible tokens as the
input of Encoder and align visible tokens with target features only.
(MIM) has demonstrated a great ability of self-supervised
learning [3, 13], while alleviating the data-hungry issue
of Transformer architectures. The visual representation
learned through MIM shows promising performance on
various downstream vision tasks, outperforming the con-
trastive learning paradigms [4, 6].
Existing Masked Image Modeling (MIM) methods aim
to hallucinate the intact image from a small portion of vis-
ible image regions. As depicted in Fig. 1, existing MIM
methods are mainly divided into two types: (a) inpainting-
style [3, 43, 46] and (b) decoder-style [5, 11, 13]. These two
types both require the model to reconstruct masked regions.
The inpainting-style models replace image regions with
learnable vectors then fill them by the interaction within the
encoder. The decoder-style models drop image regions then
decode features from masked regions’ positions based on
the visible information. Some very recent works introduce
semantic-rich teacher models like CLIP [35] into the two
paradigms by using features extracted by teacher models as
the reconstruction target [17, 33, 34, 44]. In light of the se-
mantic knowledge learned by teacher models, these works
further improve the representation after masked image mod-
eling, leading to better performance.
Reconstruction on masked regions implicitly forces the
model’s encoder to understand the semantic correlations
within an image. However, the reconstruction manner
brings much computation on masked tokens within or out-
side the encoder in inpainting-style or decoder-style, re-
spectively. This redundant computation decreases the train-
ing efficiency of the encoder thus increasing the pre-training
cost. Unlike low-level and isolated features such as normal-
ized pixel values of patches, Histogram of Oriented Gra-
dients (HOG), etc., the feature map extracted by powerful
teacher models already contains rich semantic correlations,
learned during the teacher model training stage. This dif-
ference raises one question: is reconstruction the only way
in Masked Image Modeling (MIM) with teacher models?
To answer this question, we propose a much more efficient
MIM paradigm named MaskAlign without any reconstruc-
tion on masked tokens.
In contrast to applying reconstruction on masked to-
kens, MaskAlign simply aligns the visible features ex-
tracted by the student model and intact image features ex-
tracted by the teacher model. As a consequence, MaskAlign
forces the student model to learn not only good representa-
tion of the teacher model by feature alignment, but also the
ability to hallucinate by masked modeling: feature consis-
tency between the intact image and mask view requires the
student model to infer semantics from much less informa-
tion than teacher model. We adopt multi-level features of
the teacher model as supervision to borrow richer seman-
tics. However, the input of the student model contains much
less information than the teacher model’s, leading to mis-
alignment of each layer’s features. To tackle this problem,
we enhance the student’s features with a Dynamic Align-
ment (DA) module. DA dynamically aggregates different
levels of student features and aligns with multi-level fea-
tures of the teacher model. This approach can also easily
transfer to asymmetric student-teacher structures.
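A minimal sketch of the resulting objective is given below: the student is run only on the visible tokens, the frozen teacher on the intact image, and the two are aligned at the visible positions. The smooth-L1 distance on normalized features is a placeholder loss, and the multi-level weighting performed by Dynamic Alignment is omitted.

```python
import torch
import torch.nn.functional as F

def maskalign_loss(student_visible_feats, teacher_full_feats, visible_idx):
    """Align student features of visible patches with frozen-teacher features of
    the intact image at the same patch positions; no masked patch is reconstructed.

    student_visible_feats: (B, V, D) features of the V visible tokens
    teacher_full_feats:    (B, N, D) teacher features of all N tokens
    visible_idx:           (B, V)    indices of the visible tokens in [0, N)
    """
    target = torch.gather(
        teacher_full_feats, 1,
        visible_idx.unsqueeze(-1).expand(-1, -1, teacher_full_feats.shape[-1]))
    student = F.normalize(student_visible_feats, dim=-1)
    target = F.normalize(target, dim=-1)
    return F.smooth_l1_loss(student, target)

# toy shapes: 2 images, 196 patches, 25% visible, 768-d features
loss = maskalign_loss(torch.randn(2, 49, 768), torch.randn(2, 196, 768),
                      torch.randint(0, 196, (2, 49)))
```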
From our experimental results, MaskAlign with a wide
range of mask ratio outperforms the mask ratio of 0%,
where it degenerates into Feature Distillation [45]. This
verifies that masked modeling is still necessary for our
paradigm. Meanwhile, our experiments validate the ef-
fectiveness of Dynamic Alignment by comparing different
alignment strategies and numbers of feature levels. The
combination of masked modeling and Dynamic Alignment
makes our model achieve state-of-the-art results with much
higher efficiency. For example, our model outperforms BEiT v2 [33] by 0.4% on ImageNet Finetuning Accuracy
(from 85.0% to 85.4%) with 1/3 pre-training time only.
To sum up, our work has three-fold contributions:
1. We categorize and rethink existing Masked Image
Modeling (MIM) paradigms and propose a more ef-
ficient MIM approach called MaskAlign. Even with-
out any reconstruction on masked tokens, MaskAlign
achieves new state-of-the-art performance with much
higher efficiency.
2. We propose a Dynamic Alignment (DA) module to
tackle the problem of input inconsistency between the
student and teacher model, with negligible additional
parameters and computation.
3. We conduct extensive experiments to verify the effec-
tiveness of MaskAlign and Dynamic Alignment. Be-
sides, our model shows a good ability of generalization
on downstream tasks and larger size models.
|
Wang_All_in_One_Exploring_Unified_Video-Language_Pre-Training_CVPR_2023 | Abstract
Mainstream Video-Language Pre-training (VLP) mod-
els [10, 26, 64] consist of three parts, a video encoder, a
text encoder, and a video-text fusion Transformer. They
pursue better performance via utilizing heavier unimodal
encoders or multimodal fusion Transformers, resulting in
increased parameters with lower efficiency in downstream
tasks. In this work, we for the first time introduce an end-
to-end VLP model, namely all-in-one Transformer, that em-
beds raw video and textual signals into joint representations
using a unified backbone architecture. We argue that the
unique temporal information of video data turns out to be
a key barrier hindering the design of a modality-agnostic
Transformer. To overcome the challenge, we introduce a
novel and effective token rolling operation to encode tem-
poral representations from video clips in a non-parametric
manner. The careful design enables the representation
learning of both video-text multimodal inputs and unimodal
inputs using a unified model. Our pre-trained all-in-one
Transformer is transferred to various downstream video-
text tasks after fine-tuning, including text-video retrieval,
video-question answering, multiple choice and video cap-
tioning. State-of-the-art performances with the minimal
model FLOPs on ten datasets demonstrate the superior-
ity of our method compared to the competitive counter-
parts. The code and pretrained models are available at
https://github.com/showlab/all-in-one .
| 1. Introduction
Science advances rather steadily for most of the time,
but sometimes has a disruptive episode, where “an older
paradigm is replaced in whole or in part by an incompat-
ible new one.” [24] In this regard, Video-Language Pre-
training (VLP) models have recently experienced steady
progress, where joint representations are generally pro-
duced with a multimodal fusion network after extracting
*Corresponding Author.
Figure 1. Compare to mainstream video-language pre-training
methods. (a). Conventional methods [3, 10, 26, 64] use deep fea-
tures from separate encoders before fusion. The fusion layer can
be light [3] or heavy [10,26,64]. (b). Ours All-in-one Transformer
learns video and text joint representations end-to-end from their
raw inputs. We also support fast retrieval by feeding unimodal in-
puts during inference. (c). Comparison of FLOPs and retrieval
performance on MSRVTT [53]. Our All-in-one brings excellent
results with modest computational cost.
the visual and language features through unimodal encoders
[10,26,50,64]. We are here to break it and replace them with
“an incompatible new one” that has NO unimodal encoders.
The pre-train and then fine-tune scheme, has become
a standard paradigm to learn transferable video-language
representations for a wide range of downstream video-text
tasks [6, 15, 52, 52, 53, 61]. Mainstream methods attempt to
boost the pre-training in two ways: i. adopting more expen-
sive video/text encoders to obtain more powerful unimodal
features [3, 10] ii. designing heavier fusion networks to en-
hance the association between modalities [61, 64].
Instead of following these trends, we fundamentally re-
think design decisions and develop the simplest and most
lightweight architecture that learns video-language repre-
sentations from their raw inputs in an end-to-end manner.
Our model does not need any unimodal encoders ( e.g., ob-
ject detector in [64] or ResNet visual encoder in [26]) or
complex fusion layers, but embeds visual and text signals in
a unified manner, termed as All-in-one Transformer in our
paper. Our design is inspired by recent studies [1, 21, 37]
that perform multimodal pre-training under the presump-
tion that Transformer can process visual data in the same
way as it processes text. However, our work is not the
straightforward application of them. It is not trivial how to
embed videos for our unified Transformer due to the unique
challenge of modeling temporal information without adding
much computational cost.
Existing works model temporal information by de-
signing temporal attention layers [3] or using temporal-
aware visual encoders ( e.g., 3D convnets in [64] or Video
Swin [36] in [10]). We cannot simply use them in our
unified All-in-one Transformer because they are modality-
dependent and computationally too expensive. To address
this issue, we design a novel, effective, and efficient method
to model temporal information. Our model only needs three
frames per video clip, which is much lower than other mod-
els (e.g., 16 [26] or 32 [1]) but can achieve the comparable
performance to them. Nevertheless, we are still not satis-
fied with the computational cost in the self-attention layer.
To further reduce the computational cost, we propose the
temporal token rolling operation , which is a cyclic atten-
tion between small proportions of the visual tokens in each
frame (Fig. 3-right). This is much more efficient than a
naive self-attention approach on flattened tokens (Fig. 3-
bottom). Furthermore, our modality-agnostic design en-
ables us to use our pre-trained model as a powerful uni-
modal feature extractor by feeding only video or text in-
puts. This can significantly reduce the computational cost
for retrieval task because we can simply compute the co-
sine similarity of texts and videos soon after the pretraining,
eliminating the need for training additional fusion module
of projecting the disjoint text and visual features into a com-
mon space (Fig. 1-b). Taken together, our All-in-one archi-
tecture achieves much less FLOPs and better text-to-video
performance than previous work (Fig. 1-c), despite the fact
that we use the same pre-training objectives [10, 26].
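A minimal sketch of the temporal token rolling idea is given below; rolling the first quarter of each frame's tokens by a single temporal step is an illustrative choice, not necessarily the exact proportion or shift used in the paper. The operation adds no parameters and negligible cost, matching the efficiency argument above.

```python
import torch

def temporal_token_rolling(tokens: torch.Tensor, ratio: float = 0.25) -> torch.Tensor:
    """tokens: (B, T, N, D) patch tokens of T sparsely sampled frames.
    A small proportion of each frame's tokens is shifted cyclically along the
    temporal axis, so per-frame attention also sees tokens from neighbouring
    frames without any extra parameters."""
    n_roll = int(tokens.shape[2] * ratio)
    rolled = tokens.clone()
    # cyclic shift of the first n_roll tokens of every frame by one time step
    rolled[:, :, :n_roll] = torch.roll(tokens[:, :, :n_roll], shifts=1, dims=1)
    return rolled

out = temporal_token_rolling(torch.randn(2, 3, 196, 768))   # 3 frames per clip
print(out.shape)
```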
Contributions. (1) We introduce the simplest, most
lightweight, and most efficient video-language model,
namely All-in-one Transformer, which is the first to capture
video-language representations from the raw visual and tex-
tual signals end-to-end in a unified backbone architecture.
(2) We elucidate and tackle the difficulties of applying a
unified and shared backbone for multimodal video and text
data, that is, how to properly process the unique temporal
information of videos. A novel temporal token rolling op-
eration is proposed to capture the temporal representations
of sparsely sampled frames without any extra parameters or
increasing time complexity. (3) We propose a successful
practice to overcome the slow retrieval of one-stream models and
explore how to cotrain the image and video data together in
better ways. (4) Comprehensive experiments on five down-
stream video-text tasks of eleven datasets fully demonstrate
the superiority of our pre-trained All-in-one Transformer on both effectiveness and efficiency compared to recent main-
stream methods [3, 10, 26].
|
Wang_Neural_Koopman_Pooling_Control-Inspired_Temporal_Dynamics_Encoding_for_Skeleton-Based_Action_CVPR_2023 | Abstract
Skeleton-based human action recognition is becoming
increasingly important in a variety of fields. Most exist-
ing works train a CNN or GCN based backbone to extract
spatial-temporal features, and use temporal average/max
pooling to aggregate the information. However, these pool-
ing methods fail to capture high-order dynamics informa-
tion. To address the problem, we propose a plug-and-
play module called Koopman pooling, which is a param-
eterized high-order pooling technique based on Koopman
theory. The Koopman operator linearizes a non-linear dy-
namics system, thus providing a way to represent the com-
plex system through the dynamics matrix, which can be
used for classification. We also propose an eigenvalue nor-
malization method to encourage the learned dynamics to
be non-decaying and stable. Besides, we also show that
our Koopman pooling framework can be easily extended to
one-shot action recognition when combined with Dynamic
Mode Decomposition. The proposed method is evaluated
on three benchmark datasets, namely NTU RGB+D 60, 120
and NW-UCLA. Our experiments clearly demonstrate that
Koopman pooling significantly improves the performance
under both full-dataset and one-shot settings.
| 1. Introduction
Skeleton-based human action recognition is a crucial
task in many applications, ranging from video surveillance
to autonomous driving and human-robot interaction. With
the prevalence of deep learning, the rising of LSTM, CNN
and GCN has significantly improved the performance of ac-
tion recognition. Most existing methods [9, 13,28,46,70,73]
use a CNN or GCN based backbone to extract complex
spatial-temporal features, and use temporal average/max
pooling to aggregate the information. However, vanilla tem-
poral average/max pooling contains only first-order infor-
mation and abandons higher-order statistical information.
*Corresponding author
Figure 1. Most existing works use temporal average pooling to
aggregate the information along the temporal dimension (left), and
only first-order information is considered. Our proposed Koopman
pooling (right) instead focuses on the true latent dynamics of the
sequence in linear Koopman space, and learns a set of class-wise
Koopman dynamics matrices to represent the dynamics of each
class. The classification is achieved by dynamics matching.
To this end, recent works focus on second-order pool-
ing to capture second-order information of the feature se-
quences. Specifically, bilinear and covariance pooling is
widely used to extract second-order statistical information
from the feature sequence. Early works [14] use sim-
ple bilinear pooling to model the interaction of features
and aggregate temporal information. In recent years, re-
searchers proposed to use covariance pooling [22] to cap-
ture second-order statistical information, as covariance ma-
trix can model the interaction between features while main-
taining good geometrical structure [31]. However, the
skeleton sequence as well as the extracted feature sequence
has complex underlying dynamics in nature. Existing meth-
ods such as covariance pooling only exploit the feature in-
teraction between frames or channels, but they fail to dis-
cover the true dynamics of the sequence. Instead, we aim
to directly focus on the temporal dynamics of the sequence
and conduct sequence recognition based on class-specific
dynamics.
A critical motivating idea of this work is the applica-
tion of Koopman theory [26]. The original Koopman the-
ory aims to re-formulate a non-linear dynamical system
to be a linear one. To this end, a Koopman operator is
required to lift the original features into some possibly
infinite-dimensional Hilbert space, wherein the evolution
of the dynamics becomes linear. Such a treatment is fa-
vored in numerous applications, including time-series anal-
ysis, since an arsenal of spectral-analysis tools can facilitate
the in-depth investigation of the model (such as the tempo-
ral stability property). In practice, identifying the optimal
Koopman operator of a specific dynamic system remains a
challenge. Besides conventional dynamic mode decompo-
sition (DMD) [6], recent years also witnessed the utilization
of black-box neural network for learning the Koopman op-
erator in differentiable fashion [1, 23,38,42,61].
To our best knowledge, the proposed Koopman pooling
in this paper is the first work to leverage the power of Koop-
man theory to formulate a new high-order pooling method.
Unlike existing methods like covariance pooling which use
covariance matrix to model the temporal correlation of fea-
tures implicitly, we instead view the temporal evolution of
feature vectors as a dynamical system, and use the dynam-
ics itself to model the temporal correlation explicitly. As
shown in figure 1, the original trajectories are mapped to a
new embedding space where the temporal evolution is lin-
ear. The transition matrix of this linear system can therefore
be viewed as the signature of this sequence, which contains
rich high-order information. For the classification task,
our model learns the class-wise Koopman matrices which
represent class-specific dynamics, and conducts dynamics
matching to obtain the classification score. Based on our
observation of the learned dynamics, this paper highlights
the critical importance of the stability of learned dynamics
when tackling recognition tasks, and proposes an eigenvalue
normalization technique to push the learned linear dynam-
ics to be stable and non-decaying. We also combine Koop-
man pooling and dynamic mode decomposition(DMD) to
formulate a new framework for one-shot action recognition,
which uses dynamics to match the sequence instead of the
common practice of computing distance in the embedding
space or conducting metric learning.
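For illustration, the sketch below fits a linear dynamics matrix to an embedded sequence in DMD style and classifies a sequence by how well each class-wise dynamics matrix predicts its one-step transitions. The plain least-squares fit and the mean-squared prediction error used for matching are simplifications of the paper's learned Koopman embedding and matching score.

```python
import torch

def fit_koopman_matrix(z: torch.Tensor) -> torch.Tensor:
    """z: (T, D) embedded feature sequence.  Fit a linear dynamics matrix K with
    z[t+1] ~= K z[t] by least squares, i.e. a plain DMD-style estimate."""
    past, future = z[:-1], z[1:]                       # (T-1, D) each
    X = torch.linalg.lstsq(past, future).solution      # past @ X ~= future
    return X.t()                                       # K such that z[t+1] ~= K @ z[t]

def classify_by_dynamics(z: torch.Tensor, class_K: torch.Tensor) -> int:
    """class_K: (C, D, D) class-wise dynamics matrices.  Score each class by how
    well its dynamics reproduces the sequence's own one-step transitions."""
    errors = []
    for K in class_K:
        pred = z[:-1] @ K.t()
        errors.append(torch.mean((pred - z[1:]) ** 2))
    return int(torch.stack(errors).argmin())

# toy usage in a 16-step, 64-d embedding space with 5 classes
z = torch.randn(16, 64)
print(fit_koopman_matrix(z).shape, classify_by_dynamics(z, torch.randn(5, 64, 64)))
```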
To verify the effectiveness of the proposed method, we
conduct extensive experiments on 3 skeleton-based action
recognition datasets, namely NTU RGB+D, NTU RGB+D
120, and NW-UCLA. The results demonstrate that Koop-
man pooling significantly improves the performance under
both full-dataset and one-shot settings.
To be summarized, the main contributions of this paper
are as follows:
•We proposed Koopman pooling, which is the first work
in the literature to design a plug-and-play high-order
pooling method based on Koopman theory that allows
eigenvalue manipulation and one-shot recognition.
•We emphasize the critical importance of learning a sta-
ble and non-decaying system for recognition tasks and
accordingly design an eigenvalue normalization tech-
nique based on control theory.
•Our comprehensive experiments based on various
backbones [9, 28,68] on the commonly used bench-
mark datasets NTU RGB+D 60/120 and NW-UCLA
show that Koopman pooling significantly improves the
performance under both full-dataset and one-shot set-
ting.
|
Wang_Neural_Fields_Meet_Explicit_Geometric_Representations_for_Inverse_Rendering_of_CVPR_2023 | Abstract
Reconstruction and intrinsic decomposition of scenes
from captured imagery would enable many applications
such as relighting and virtual object insertion. Recent NeRF
based methods achieve impressive fidelity of 3D reconstruc-
tion, but bake the lighting and shadows into the radiance
field, while mesh-based methods that facilitate intrinsic de-
composition through differentiable rendering have not yet
scaled to the complexity and scale of outdoor scenes. We
present a novel inverse rendering framework for large urban
scenes capable of jointly reconstructing the scene geometry,
spatially-varying materials, and HDR lighting from a set
of posed RGB images with optional depth. Specifically, we
use a neural field to account for the primary rays, and use
an explicit mesh (reconstructed from the underlying neural
field) for modeling secondary rays that produce higher-order
lighting effects such as cast shadows. By faithfully disentan-
gling complex geometry and materials from lighting effects,
our method enables photorealistic relighting with specular
and shadow effects on several outdoor datasets. Moreover, it
supports physics-based scene manipulations such as virtual
object insertion with ray-traced shadow casting.
| 1. Introduction
Reconstructing high fidelity 3D scenes from captured im-
agery is an important utility of scalable 3D content creation.
However, for the reconstructed environments to serve as “dig-
ital twins” for downstream applications such as augmented
reality and gaming, we require that these environments are
compatible with modern graphics pipeline and can be ren-
dered with user-specified lighting. This means that we not
only need to reconstruct 3D geometry and texture but also
recover the intrinsic properties of the scene such as material
properties and lighting information. This is an ill-posed,
challenging problem oftentimes referred to as inverse render-
ing [1].
Neural radiance fields (NeRFs) [34] have recently
emerged as a powerful neural reconstruction approach that
enables photo-realistic novel-view synthesis. NeRFs can
be reconstructed from a set of posed camera images in a
matter of minutes [14, 35, 43] and have been shown to scale
to room-level scenes and beyond [45, 48, 55], making them
an attractive representation for augmented/virtual reality and
generation of digital twins. However, in NeRF, the intrinsic
properties of the scene are not separated from the effect of in-
cident light. As a result, novel views can only be synthesised
under fixed lighting conditions present in the input images,
i.e. a NeRF cannot be relighted [42].
While NeRF can be extended into a full inverse rendering
formulation [3], this requires computing the volume render-
ing integral when tracing multiple ray bounces. This quickly
becomes intractable due to the underlying volumetric rep-
resentation. Specifically, in order to estimate the secondary
rays, the volumetric density field of NeRF would have to
be queried along the path from each surface point to all the
light sources, scaling with O(nm) per point, where n de-
notes the number of samples along each ray and m is the
number of light sources or Monte Carlo (MC) samples in the
case of global illumination. To restrict the incurred computa-
tional cost, prior works have mostly focused on the single
object setting and often assume a single (known) illumina-
tion source [42]. Additionally, they forgo the volumetric
rendering of secondary rays and instead approximate the
direct/indirect lighting through a visibility MLP [42, 64].
In contrast to NeRF, the explicit mesh-based representa-
tion allows for very efficient rendering. With a known mesh
topology, the estimation of both primary and secondary rays
is carried out using ray-mesh intersection (O(m)) queries
that can be efficiently computed using highly-optimized li-
braries such as OptiX [39]. However, inverse rendering
methods based on explicit mesh representations either as-
sume a fixed mesh topology [13], or recover the surface
mesh via an SDF defined on a volumetric grid [36] and are
thus bounded by the grid resolution. Insofar, these methods
have been shown to produce high-quality results only for the
smaller, object-centric scenes.
In this work, we combine the advantages of the neural
field (NeRF) and explicit (mesh) representations and pro-
pose FEGR1, a new hybrid-rendering pipeline for inverse
rendering of large urban scenes. Specifically, we represent
the intrinsic properties of the scene using a neural field and
estimate the primary rays (G-buffer) with volumetric render-
ing. To model the secondary rays that produce higher-order
lighting effects such as specular highlights and cast shad-
ows, we convert the neural field to an explicit representa-
tion and perform physics-based rendering. The underlying
neural field enables us to represent high-resolution details,
while ray tracing secondary rays using the explicit mesh
reduces the computational complexity. The proposed hybrid-
rendering is fully differentiable and can be embedded into an
optimization scheme that allows us to estimate 3D spatially-
varying material properties, geometry, and HDR lighting of
the scene from a set of posed camera images2. By modeling
the HDR properties of the scene, our representation is also
well suited for AR applications such as virtual object insertion
that require spatially-varying lighting to cast shadows
in a physically correct way.
1Abbreviation FEGR is derived from neural Fields meet Explicit Geometric Representations and is pronounced as "figure".
2We can also integrate depth information, if available, to further constrain the solution space.
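A schematic of this hybrid split is sketched below: surface attributes for the primary rays come from a volumetric (neural-field) renderer, while shadow tests for the secondary rays are ray-mesh intersection queries. The callbacks, the simple diffuse-style shading, and the explicit light list are placeholders standing in for the paper's neural field, extracted mesh, materials, and HDR lighting.

```python
import numpy as np

def hybrid_shade(pixel_rays, volume_render, mesh_occluded, brdf, light_dirs, light_radiance):
    """Schematic of the hybrid split (not the paper's exact renderer).

    volume_render(rays)        -> per-ray surface position, normal, material,
                                  obtained by volumetric rendering of the neural field
    mesh_occluded(origin, dir) -> bool shadow test via a ray-mesh intersection query
                                  against the extracted explicit mesh
    brdf(material, n, l)       -> reflected fraction of the incoming radiance
    """
    positions, normals, materials = volume_render(pixel_rays)          # primary rays
    colors = np.zeros((len(pixel_rays), 3))
    for i, (x, n, m) in enumerate(zip(positions, normals, materials)):
        for l_dir, l_rad in zip(light_dirs, light_radiance):           # secondary rays
            if mesh_occluded(x, l_dir):                                # cast shadow
                continue
            colors[i] += brdf(m, n, l_dir) * max(np.dot(n, l_dir), 0.0) * l_rad
    return colors
```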
We summarize our contributions as follows:
• We propose a novel neural field representation that decomposes the scene into geometry, spatially varying materials, and HDR lighting.
• To achieve efficient ray-tracing within a neural scene representation, we introduce a hybrid renderer that renders primary rays through volumetric rendering, and models the secondary rays using physics-based rendering (see the sketch after this list). This enables high-quality inverse rendering of large urban scenes.
• We model the HDR lighting and material properties of the scene, making our representation well suited for downstream applications such as relighting and virtual object insertion with cast shadows.
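To make the split between the two ray types concrete, here is a minimal sketch of the hybrid renderer as we read the description above; every object and method name (neural_field.volume_render, scene_mesh.intersects, material.shade) is a hypothetical placeholder, not the paper's actual interface.

```python
def render_pixel(neural_field, scene_mesh, camera_ray, lights):
    """Hybrid rendering sketch: the primary ray is resolved by volume-rendering
    the neural field (G-buffer: position, normal, material); each secondary
    shadow ray is a single ray-mesh intersection query against the extracted mesh."""
    x, normal, material = neural_field.volume_render(camera_ray)  # O(n) samples along the ray
    radiance = 0.0
    for light in lights:                                          # O(m) secondary queries
        shadow_ray = (x, light.direction_from(x))
        if not scene_mesh.intersects(shadow_ray):                 # e.g. accelerated with OptiX
            radiance += material.shade(normal, light)
    return radiance
```

The point of the sketch is only the asymptotic difference: tracing secondary rays through the density field would cost O(n·m) per surface point, whereas ray-mesh intersections cost O(m).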
FEGR significantly outperforms state-of-the-art in terms
of novel-view synthesis under varying lighting conditions on
the NeRF-OSR dataset [40]. We also show qualitative results
on a single-illumination capture of an urban environment,
collected by an autonomous vehicle. Moreover, we show
the intrinsic rendering results, and showcase virtual object
insertion as an application. Finally, we conduct a user study,
in which the results of our method are significantly preferred
to those of the baselines.
|
Xiao_Endpoints_Weight_Fusion_for_Class_Incremental_Semantic_Segmentation_CVPR_2023 | Abstract
Class incremental semantic segmentation (CISS) fo-
cuses on alleviating catastrophic forgetting to improve dis-
crimination. Previous work mainly exploits regularization
(e.g., knowledge distillation) to maintain previous knowl-
edge in the current model. However, distillation alone of-
ten yields limited gain to the model since only the repre-
sentations of old and new models are restricted to be con-
sistent. In this paper, we propose a simple yet effective
method to obtain a model with a strong memory of old
knowledge, named Endpoints Weight Fusion (EWF). In our
method, the model containing old knowledge is fused with
the model retaining new knowledge in a dynamic fusion
manner, strengthening the memory of old classes in ever-
changing distributions. In addition, we analyze the rela-
tionship between our fusion strategy and a popular mov-
ing average technique EMA, which reveals why our method
is more suitable for class-incremental learning. To facili-
tate parameter fusion with closer distance in the parameter
space, we use distillation to enhance the optimization pro-
cess. Furthermore, we conduct experiments on two widely
used datasets, achieving state-of-the-art performance.
| 1. Introduction
As a fundamental task, semantic segmentation plays a
key role in visual applications [10, 25]. Previous fully-
supervised works aim to segment fixed classes defined in
the training set. However, the trained segmentation model
is expected to recognize more classes in realistic applica-
tions. One straightforward solution is to re-train the model
on the entire dataset by mixing old and new data. Never-
theless, this strategy will bring huge labeling and training
costs. From the transfer learning perspective [22, 30], an-
other plain solution is to adjust the previously learned model
on the newly added data. But the model will overfit to new
*The first two authors contribute equally.
†Corresponding author (xialei@nankai.edu.cn)
Figure 1. Illustration of different fusion strategies for incremental learning. Ensemble methods utilize multiple models to accumulate more knowledge. Compression methods reduce the model size and distill the knowledge into a small network. While Re-parameterization methods use equivalent operations for model fusion. Our Endpoints Weight Fusion (EWF) proposes model addition with a dynamic factor (αt) with no further training.
classes quickly, while forgetting previous old classes. This
phenomenon is also known as catastrophic forgetting [35].
To alleviate the problem of catastrophic forgetting with-
out extra labeling or training cost, class incremental se-
mantic segmentation (CISS) [3, 16, 50] aims at optimiz-
ing the trade-off between maintaining discrimination for
old classes and learning knowledge of new classes. Most
works [3, 16, 17, 37] designed regularization methods to
maintain a balance between memorizing old knowledge and
learning new one. We observe that existing works can still
suffer from catastrophic forgetting, resulting in a significant
performance drop in old classes. In the scenario of CISS,
not only the previous data is not accessible due to privacy
issues or data storage limitations, but regions of old classes
in the newly added dataset are labeled as background, which
further exacerbates the model over-fitting.
Besides, training a new model from the old one and fus-
ing them to obtain the final model is a common strategy
in continual learning. As shown in Fig. 1, we roughly di-
vide them into four categories with two stages of model ex-
pansion and fusion. Some methods [27, 33, 42, 47] propose
to expand the model in incremental steps and ensemble the
old and new outputs, which have large memory and infer-
ence costs. While some works apply compression [46, 47]
to compress the old and new model to a unified model with
fewer parameters. Nevertheless, these require further train-
ing on only new data, which can lead to a bias toward new
data. Subsequently, some works [50,53] explore knowledge
decoupling and perform linear parameter fusion with re-
parameterization . However, this is an intra-module fusion
strategy, which is restricted to certain operations. As the
last category, we propose Endpoints Weight Fusion (EWF)
in the form of parameter addition between the old and new
model with a dynamic factor, which requires no further
training and re-parameterization, and maintains a constant
model size as more tasks are encountered.
In this work, we adapt weight fusion to CISS and pro-
pose the EWF strategy, which aims at utilizing weight fu-
sion to find a new balance between old and new knowledge.
During incremental training, we choose a starting point and
an ending point model of the current task training trajectory.
The starting point represents the old knowledge, while the
ending point represents the new knowledge. After learn-
ing the current task, a dynamic weight fusion is proposed
for efficient knowledge integration. We aggregate them by
taking the weighted average of the corresponding parame-
ters of the two models. Nevertheless, the training procedure
without restraints on the model would increase the param-
eter distance between the start and end points, limiting the
performance improvement brought by the EWF strategy. To
overcome this shortcoming, we further enhance the EWF
strategy with a knowledge distillation scheme [16, 17, 50],
which can largely increase the similarity of the models at
the two points and boost the efficiency of EWF.
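A minimal sketch of the fusion step described above, assuming both endpoint models share the same architecture; which endpoint the dynamic factor αt weights, and its schedule, are not specified in this excerpt and are treated as assumptions here.

```python
import torch

@torch.no_grad()
def endpoints_weight_fusion(old_model, new_model, alpha_t):
    """Fuse the start-point (old) and end-point (new) models parameter-wise:
    theta_fused = alpha_t * theta_old + (1 - alpha_t) * theta_new.
    No further training is needed and the model size stays constant."""
    old_state, new_state = old_model.state_dict(), new_model.state_dict()
    fused_state = {}
    for name, new_param in new_state.items():
        if new_param.dtype.is_floating_point:
            fused_state[name] = alpha_t * old_state[name] + (1.0 - alpha_t) * new_param
        else:
            fused_state[name] = new_param  # keep integer buffers (e.g. BN counters) from the new model
    new_model.load_state_dict(fused_state)
    return new_model
```

After finishing task t, the fused model would serve as the starting point for task t+1, so the per-task training pipeline is unchanged.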
To summarize, the main contributions of this paper are:
• We propose an Endpoints Weight Fusion strategy,
which has no cost of further training and keeps the
model size the same. It can effectively find a new
balance between old and new categories and alleviate
catastrophic forgetting.
• Our method can be easily integrated with several state-
of-the-art methods. In several CISS scenarios of long
sequences, it can boost the baseline performance by more than 20%.
• We conduct experiments on various CISS scenarios,
which demonstrate that our method achieves the state-
of-the-art performance on both PASCAL VOC and
ADE20K.
|
Wang_Feature_Alignment_and_Uniformity_for_Test_Time_Adaptation_CVPR_2023 | Abstract
Test time adaptation (TTA) aims to adapt deep neural
networks when receiving out of distribution test domain
samples. In this setting, the model can only access online
unlabeled test samples and pre-trained models on the train-
ing domains. We first address TTA as a feature revision
problem due to the domain gap between source domains
and target domains. After that, we follow the two measure-
ments, alignment and uniformity, to discuss the test time feature revision. For test time feature uniformity, we propose a test time self-distillation strategy to guarantee the consistency of uniformity between representations of the current batch and all the previous batches. For test time feature alignment, we propose a memorized spatial local clustering strategy to align the representations among the neigh-
borhood samples for the upcoming batch. To deal with the
common noisy label problem, we propound the entropy and
consistency filters to select and drop the possible noisy la-
bels. To prove the scalability and efficacy of our method, we
conduct experiments on four domain generalization bench-
marks and four medical image segmentation tasks with var-
ious backbones. Experiment results show that our method
not only improves baseline stably but also outperforms ex-
isting state-of-the-art test time adaptation methods.
| 1. Introduction
Deep learning has achieved great success in computer
vision tasks when training and test data are sampled from
the same distribution [ 17,26,37]. However, in real-world
applications, performance degradation usually occurs when
training (source) data and test (target) data are collected
from different distributions, i.e.domain shift . In practice,
test samples may encounter different types of variations
or corruptions. Deep learning models will be sensitive to
these variations or corruptions, which can cause perfor-
mance degradation.
To tackle this challenging but practical problem, various works have been proposed to adapt the model at test time [5,
8,21,36,43,55,64,65,74]. Test time training (TTT) [ 36,55]
adapts model using self-supervised tasks, such as rotation
classification [ 15] during both training and test phases. This
paradigm relies on additional model modifications in both
training and test phases, which is not feasible and scalable
in the real world.
Similar to TTT, test time adaptation [ 64] (TTA) also
adapts the model i.e., updates the parameters in the test
phase. But TTA does not require any specific modifications
in training and requires only the pre-trained source model
and unlabeled target data during the test phase, which is
more practical and generalizable. In TTA, the model could
be adapted with only online unlabeled data. Hence, the
model trained on source data is incompatible with the target
data due to the possible domain gap between source data
and target data.
To deal with the above-mentioned problem, we address
TTA as a representation revision problem in this paper.
In the test phase of TTA, the accessed model has already
learned the feature representations specialized to source do-
mains and may generate inaccurate representations for the
target domain due to the large domain gap. It is necessary
to rectify the feature representations for the target domain.
To achieve better representations for the target domain, we
utilize the commonly used measurements for representation
quality, which can be summarized as feature alignment and uniformity [66, 71]. Alignment means that similar images should have similar representations, while unifor-
mity means that images of different classes should be dis-
tributed as uniform as possible in the latent space. Hence,
we propose to address the TTA problem from the above-
mentioned properties.
Most of the previous works on TTA can be interpreted from the proposed representation revision perspective. Some methods adapt the source model by conduct-
ing a feature alignment process, such as feature matching
[25,36] and predictions adjustment [ 5]. One of the repre-
sentative methods is LAME [ 5], which encourages neigh-
borhood samples in the feature space to have similar pre-
dictions using Laplacian adjusted maximum-likelihood es-
timation. Moreover, other methods aim to make target
feature more uniform in feature space, including entropy
minimization [ 43,64,74], prototype adjustment [ 21], infor-
mation maximization [ 34] and batch normalization statis-
tics alignment [ 33,42,51]. One representative method is
T3A [ 21], which adjusts prototypes (class-wise centroids)
to have a more uniform representation by building a sup-
port set. However, none of these methods addresses the TTA problem from both representation alignment and uniformity simultaneously. In this paper, we identify this limitation and
propose a novel method that rectifies feature representation
from both properties. We formulate the two properties in TTA as test time feature uniformity and test time feature alignment.
Test Time Feature Uniformity. Following the feature
uniformity perspective, we hope that representations of test
images from different classes should be distributed as uni-
form as possible. However, only limited test samples can be
accessed in an online manner in the TTA setting.
To better deal with all of the samples in the target do-
main, we propose to introduce historical temporal informa-
tion for every arriving test sample. A memory bank is built
to store feature representations and logits for all arriving
samples to maintain the useful information from the previ-
ous data. We then calculate the pseudo-prototypes for every
class by using the logits and features in the memory bank.
After that, to guarantee the uniformity for the current batch
of samples, the prediction distribution of prototype-based
classification and model prediction (outputs of linear clas-
sifier) should be similar, i.e., the feature distribution of the
current images of one class should be consistent with the
feature distribution of all the previous images of the same
class. This can reduce the bias of misclassified outlier sam-
ples to form a more uniform latent space.
Motivated by this, we minimize the distance between
the outputs of linear and prototype-based classifiers. This
pattern is similar to self-distillation [ 23,73] that transfers
knowledge between different layers of the same network
architecture. However, unlike typical self-distillation, our
method does not require any ground truth supervision. We
refer to this method as Test Time Self-Distillation (TSD).
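A rough sketch of this objective as we read it: class prototypes are built from the memory bank, and the linear classifier's distribution is pulled toward the prototype-based one. The tensor shapes, cosine-similarity prototypes, and the use of a KL divergence are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def prototype_logits(feats, bank_feats, bank_probs, tau=1.0):
    """Prototype-based classification: class centroids are soft-weighted means
    of memory-bank features; logits are cosine similarities to the centroids."""
    # bank_probs: [M, C] pseudo-label probabilities, bank_feats: [M, D]
    protos = bank_probs.t() @ bank_feats              # [C, D]
    protos = F.normalize(protos, dim=1)
    feats = F.normalize(feats, dim=1)                 # [B, D]
    return feats @ protos.t() / tau                   # [B, C]

def tsd_loss(linear_logits, feats, bank_feats, bank_probs):
    """Test-time self-distillation: pull the linear classifier's distribution
    toward the prototype-based distribution (no ground truth needed)."""
    with torch.no_grad():
        proto_p = F.softmax(prototype_logits(feats, bank_feats, bank_probs), dim=1)
    log_q = F.log_softmax(linear_logits, dim=1)
    return F.kl_div(log_q, proto_p, reduction="batchmean")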
Test Time Feature Alignment. Feature alignment en-
courages the images from the same class to have similar fea-
ture representations in the latent space. As for TTA, pseudo
labels generated by the source model may be noisy due to
the domain gap. Thus, instead of aligning all the positive
pairs, we propose a K-nearest feature alignment to encour-
age features from the same class to be close or features
from different classes to be far away from each other. This
can reduce the negative impact imposed by the noisy la-
bels and maintain the alignment of images with the same semantics. Specifically, we retrieve K-nearest features in
the memory bank for the upcoming images and add consis-
tency regularization between the representations and logits
of the images. We refer to this as Memorized Spatial Local
Clustering (MSLC). The ablation of hyperparameter K is shown in Table 5 and Fig. 3.
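A minimal sketch of the K-nearest consistency idea, assuming cosine similarity for retrieval and a soft cross-entropy toward the neighbors' averaged prediction; both choices are illustrative rather than the paper's exact regularizer.

```python
import torch
import torch.nn.functional as F

def mslc_loss(feats, logits, bank_feats, bank_logits, k=3):
    """Memorized Spatial Local Clustering: for each incoming sample, retrieve
    its K nearest features from the memory bank and encourage its prediction
    to agree with theirs."""
    feats_n = F.normalize(feats, dim=1)                           # [B, D]
    bank_n = F.normalize(bank_feats, dim=1)                       # [M, D]
    sim = feats_n @ bank_n.t()                                    # [B, M]
    _, idx = sim.topk(k, dim=1)                                   # [B, K]
    neighbor_p = F.softmax(bank_logits[idx], dim=2).mean(dim=1)   # [B, C] soft target
    log_q = F.log_softmax(logits, dim=1)
    return -(neighbor_p * log_q).sum(dim=1).mean()                # cross-entropy to soft target
```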
Entropy Filter and Consistency Filter. During the
adaption, we use stored pseudo features and logits to com-
pute the pseudo-prototypes. However, the noisy label prob-
lem cannot be completely alleviated despite the proposed
efforts. To further reduce the impact, we adopt both entropy
and consistency filters to filter noisy labels to boost perfor-
mance. As for the entropy filter, we filter noisy features with
high entropy when we compute prototypes because unreli-
able samples usually produce high entropy.
In addition, the predictions of prototype-based and lin-
ear classifiers of the network for reliable samples should be
consistent ideally. We use this property to filter unreliable
samples and back-propagate the gradient using only reliable
samples. We refer to this filter as the consistency filter. The
ablation study on the two proposed filters is presented in
Table 5.
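The two filters can be sketched as simple boolean masks; the entropy threshold here is a hypothetical hyperparameter, not a value taken from the paper.

```python
import torch
import torch.nn.functional as F

def reliable_mask(linear_logits, proto_logits, ent_thresh=0.5):
    """Entropy filter: drop high-entropy (unreliable) samples.
    Consistency filter: keep samples whose linear and prototype-based
    predictions agree; only these contribute to the gradient."""
    p = F.softmax(linear_logits, dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1)
    low_entropy = entropy < ent_thresh
    consistent = linear_logits.argmax(dim=1) == proto_logits.argmax(dim=1)
    return low_entropy & consistent
```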
Finally, we demonstrate the effectiveness of our pro-
posed approach on commonly used domain generalization
benchmarks, including PACS [ 30], VLCS [ 58], OfficeHome
[62] and DomainNet [ 47]. Furthermore, to prove the effi-
cacy of our method, we conduct more experiments on four
cross-domain medical image segmentation benchmarks, in-
cluding prostate segmentation [ 1,35], cardiac structure
segmentation [ 4,76,77] and optic cup/disc segmentation
[13,45,53]. Our method achieves the state-of-the-art per-
formance on the above benchmarks.
We summarize our contributions as follows:
• We propose a new perspective for test time adapta-
tion from the view of feature alignment and unifor-
mity. The proposed test time feature uniformity en-
courages the representations of the current batch of
samples along with the uniformity of all the previous
samples. The test time feature alignment manipulates
the representation of the test sample according to its
neighbors in the latent space to align the representa-
tions based on the pseudo label.
• Specifically, to meet the online setting and noisy la-
bel problem in TTA, we propose two complementary
strategies: unsupervised self-distillation for test time
feature uniformity and memorized spatial local cluster-
ing for test time feature alignment. We also propound
the entropy filter and consistency filter to further miti-
gate the effect of the noisy labels.
• The experiments demonstrate that our proposed
method outperforms existing test time adaptation ap-
proaches on both the domain generalization bench-
marks and medical image segmentation benchmarks.
|
Wasim_Vita-CLIP_Video_and_Text_Adaptive_CLIP_via_Multimodal_Prompting_CVPR_2023 | Abstract
Adopting contrastive image-text pretrained models like
CLIP towards video classification has gained attention due
to its cost-effectiveness and competitive performance. How-
ever, recent works in this area face a trade-off. Fine-
tuning the pretrained model to achieve strong supervised
performance results in low zero-shot generalization. Sim-
ilarly, freezing the backbone to retain zero-shot capabil-
ity causes significant drop in supervised accuracy. Be-
cause of this, recent works in literature typically train sep-
arate models for supervised and zero-shot action recog-
nition. In this work, we propose a multimodal prompt
learning scheme that works to balance the supervised
and zero-shot performance under a single unified train-
ing. Our prompting approach on the vision side caters for
three aspects: 1) Global video-level prompts to model the
data distribution; 2) Local frame-level prompts to provide
per-frame discriminative conditioning; and 3) a summary
prompt to extract a condensed video representation. Ad-
ditionally, we define a prompting scheme on the text side
to augment the textual context. Through this prompting
scheme, we can achieve state-of-the-art zero-shot perfor-
mance on Kinetics-600, HMDB51 and UCF101 while re-
maining competitive in the supervised setting. By keep-
ing the pretrained backbone frozen, we optimize a much
lower number of parameters and retain the existing gen-
eral representation which helps achieve the strong zero-
shot performance. Our codes/models will be released at
https://github.com/TalalWasim/Vita-CLIP.
| 1. Introduction
In the image classification domain, multimodal image-
text pretrained models such as CLIP [58], ALIGN [31] and
Florence [75] have shown the capability of learning gener-
alized representations. These models, trained on large-scale
language-image pairs in a contrastive manner, have remark-
able zero-shot capabilities and transfer well to a variety ofdownstream tasks. However, training a similar model for
the task of video recognition is not feasible both in terms
of gathering large-scale video-text pairs, which can suffer
from alignment problems [30], and is also exponentially
more computationally expensive due to multiple frames be-
ing processed per video. Therefore, there has been a recent
push in the research community to effectively adopt the pre-
trained image-text models for the task of video recognition,
while maintaining their zero-shot capabilities. In this re-
gard, existing methods can be divided into two categories.
Some take inspiration from recent prompt learning methods
[25, 32, 77, 81, 82] and propose a prompt learning scheme
either on the text [36] or vision [55, 70] side, along with
additional transformer layers for improved temporal learn-
ing. Others prefer an end-to-end CLIP finetuning scheme
for video tasks [51, 55, 70]. However, the problem with
these methods is that they either fail to effectively leverage
learning on both the text and vision sides [36, 55] or end
up losing the zero-shot generalization of CLIP by finetun-
ing the vision decoder [47] or the backbone [51, 55, 70]. In
summary, the existing approaches can steer the model either
towards good zero-shot generalization or better supervised
learning on video tasks. Since real-world tasks require both
supervised and zero-shot capabilities, our work investigates
the following question: Can we develop a unified model for
videos that performs well for both supervised learning and
zero-shot generalization tasks?
In pursuit of the aforementioned question, we propose a
multimodal prompting-based Video and text adaptive CLIP.
To effectively adapt the pretrained image-text CLIP model
to videos, we consider two important aspects. Firstly, one
needs to preserve the generalization capabilities of the orig-
inal pretrained CLIP backbone and secondly, it must be able
to effectively adapt to the video domain. In this regard, we
propose to keep the entire backbone frozen and learn addi-
tional lightweight modules to adapt the model for videos.
On this point, for the vision side, we aim to explicitly ex-
ploit the temporal information in videos which is lacking in
the frozen image model. Our approach models video infor-
mation at three levels: first via global video-level prompts
(a) Proposed Prompting Scheme. (b) Zero-shot accuracy (HMDB51, UCF101) vs supervised accuracy (Kinetics-400).
Figure 1. An overview of the proposed prompting scheme (left) alongside the trade-off which we attempt to balance between supervised and zero-shot performance (right). (a) Our prompting approach adds learnable parameters to learn visual and temporal information in videos at three levels: a summary prompt to learn a condensed representation of the video, video-level prompts to model global distribution shifts needed to adapt to video domain and frame-level prompts to enrich local discriminative information in each frame. On the text side, we learn prompts to adapt the language representations for videos. (b) The trade-off plots showing zero-shot vs. supervised performance comparison for ours and recent CLIP-based video approaches. Note that existing SoTA [55] trains two separate models for zero-shot and supervised settings while our method offers a unified model with the same training for both settings.
that learn the overall distribution characteristics of video
data e.g., motion and dynamics; secondly, inspired by [53],
local frame-level prompts which model per frame discrimi-
native information by directly conditioning on classification
tokens of all frames; and thirdly by a summary prompt that
distills the entire video sequence response in a single con-
cise summary vector.
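A rough sketch of how learnable prompts of the three kinds could be prepended to a frozen encoder's token sequence; the token counts, the conditioning of frame-level prompts through a single linear projection, and the layout are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VideoPromptedTokens(nn.Module):
    """Prepend a summary prompt and global video-level prompts to the frozen
    per-frame patch tokens; frame-level prompts are conditioned on the
    classification tokens of all frames."""
    def __init__(self, dim, num_video_prompts=8):
        super().__init__()
        self.video_prompts = nn.Parameter(torch.zeros(num_video_prompts, dim))
        self.summary_prompt = nn.Parameter(torch.zeros(1, dim))
        self.frame_proj = nn.Linear(dim, dim)  # maps frame CLS tokens to frame-level prompts

    def forward(self, patch_tokens, frame_cls):
        # patch_tokens: [B, T*N, D] frozen patch embeddings, frame_cls: [B, T, D]
        B = patch_tokens.size(0)
        frame_prompts = self.frame_proj(frame_cls)                            # [B, T, D]
        video_prompts = self.video_prompts.unsqueeze(0).expand(B, -1, -1)     # [B, P, D]
        summary = self.summary_prompt.unsqueeze(0).expand(B, -1, -1)          # [B, 1, D]
        return torch.cat([summary, video_prompts, frame_prompts, patch_tokens], dim=1)
```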
Additionally, to better model the textual context we pro-
pose to use a learnable context on the text encoder. The rea-
son why this is particularly important is that the textual in-
formation is quite limited in the available video datasets. In-
stead of having per-sample text descriptions, we are limited
to using class labels as text descriptions. Inspired by [82],
we propose a prompt learning method on the text side to
better model the textual context and to augment the video
class label descriptions. An overview of our method with
the trade-off it seeks to balance is presented in Fig. 1. The
main contributions of this work are as follows:
• We propose a multimodal prompting approach Vita-CLIP
for videos that learns video and text-specific context vec-
tors to efficiently adapt the image-text pretrained CLIP
model to video recognition tasks.
• On the vision side, we explicitly model the temporal in-
formation and the video data distribution. Our prompt
learning method aggregates the discriminative informa-
tion from each frame in a clip with every other frame,
while also providing per-layer learning capacity to better
capture the data distribution. On the language side, our
approach learns complementary semantic context to bet-
ter adapt the language representations.
• We evaluate our approach on supervised as well as gener-alization tasks and demonstrate a sound balance between
both aspects using a single unified model. Specifically,
on zero-shot tasks, we obtain 4.0%, 3.0% and 2.2% gains
over the recent SoTA X-CLIP [55] on HMDB-51, UCF-
101, and Kinetics-600 datasets respectively.
|
Wang_ALTO_Alternating_Latent_Topologies_for_Implicit_3D_Reconstruction_CVPR_2023 | Abstract
This work introduces alternating latent topologies
(ALTO) for high-fidelity reconstruction of implicit 3D sur-
faces from noisy point clouds. Previous work identifies that
the spatial arrangement of latent encodings is important to
recover detail. One school of thought is to encode a la-
tent vector for each point (point latents). Another school
of thought is to project point latents into a grid (grid la-
tents) which could be a voxel grid or triplane grid. Each
school of thought has tradeoffs. Grid latents are coarse
and lose high-frequency detail. In contrast, point latents
preserve detail. However, point latents are more difficult
to decode into a surface, and quality and runtime suffer.
In this paper, we propose ALTO to sequentially alternate
between geometric representations, before converging to
an easy-to-decode latent. We find that this preserves spa-
tial expressiveness and makes decoding lightweight. We
validate ALTO on implicit 3D recovery and observe not
only a performance improvement over the state-of-the-art,
but a runtime improvement of 3-10×. Project website at
https://visual.ee.ucla.edu/alto.htm/ .
*Equal contribution.
| 1. Introduction
Reconstructing surfaces from noisy point clouds is an
active problem in 3D computer vision. Today, conditional
neural fields offer a promising way to learn surfaces from
noisy point clouds. Alternatives like voxel regression or
mesh estimation are limited by cubic complexity and the
requirement of a mesh template, respectively. Recent work
has successfully used conditional neural fields to recon-
struct 3D surfaces as an occupancy function. A conditional
neural field takes as input a query coordinate and conditions
this on a latent representation, e.g., feature grids. The spa-
tial expressiveness of the latent representation impacts the
overall surface reconstruction quality.
To achieve spatial expression, a neural field is condi-
tioned on a latent space of features (latents) from the con-
ditional input. In 3D surface reconstruction the input point
cloud is transformed into latents arranged in some topolog-
ical structure. Point latents occur when each point in the
input point cloud is assigned a latent vector [4]. Triplane
latents are formed when point latents are projected into a
3-axis grid [41, 52]. The triplane latent is not as spatially
expressive as freeform points, but the lower spatial com-
plexity makes it easier to decode. Voxel latents are another
type of grid latent where latents are arranged in a feature
Figure 2. An overview of our method. Given input surface points, we obtain an implicit occupancy field with iterative alternation
between features in the forms of points and 2D or 3D grids (Sec. 3.2). Then we decode the occupancy values for query points with a
learned attention-based interpolation from neighboring grids (Sec. 3.3).
volume [52, 66].
To reconstruct detailed surfaces, recent state-of-the-art
methods try to preserve point latents as long as possi-
ble. Because point latents are spatially expressive, methods
based on point latents are considered state-of-the-art for de-
tailed surface reconstruction [4, 18]. However, using point
latents in this way has some tradeoffs. It is difficult to cor-
relate a query with the unstructured topology of a point-
based latent space, placing a burden on the decoder. Results
from POCO [4] are shown in Fig. 1, where runtime and high-
quality detail like thin lampposts remain out of reach.
In this paper, we seek to blend the upside of different
latent topologies, while minimizing downside. We present
an alternating latent topology (ALTO) method. In contrast
to previous work, our method does not stay with either
point [4] or grid latents [52], but instead alternates back and
forth between point and grid latents before converging to a
final grid for ease-of-decoding.
Our method is general. We can plug-in the ALTO com-
ponent to existing grid-based conditional models [10, 52]
to boost detail recovery. While we have shown that our
method can generate occupancy fields, we expect gain of
high-fidelity details for other neural fields, such as seman-
tic or affordance fields [32, 70], where similar conditional
techniques can be adopted.
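A schematic sketch of the alternation described above; the point-to-grid scatter, grid-to-point interpolation, and refinement blocks are passed in as placeholders for whichever operators ALTO actually uses.

```python
def alto_encode(point_feats, coords, num_iters,
                point_to_grid, grid_to_point, grid_block, point_block):
    """Alternate latent topologies: project point latents onto a grid, refine,
    gather back to the points, refine, and repeat; return the final grid,
    which is cheap to decode."""
    for _ in range(num_iters):
        grid = point_to_grid(point_feats, coords)    # scatter points -> voxel/triplane grid
        grid = grid_block(grid)                      # e.g. 2D/3D convolutional refinement
        point_feats = grid_to_point(grid, coords)    # interpolate grid -> points
        point_feats = point_block(point_feats)       # per-point refinement (e.g. MLP)
    return point_to_grid(point_feats, coords)        # converge to an easy-to-decode grid
```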
We summarize our contributions as follows:
• We introduce an iterative technique to blend the
strengths of different latent topologies for high-fidelity
conditional neural fields generation.
• We propose an attention-based decoder that replaces
naive linear interpolation of feature-grids or computa-
tionally expensive point-wise attention while keeping
compute burden in check.
• We demonstrate performance and runtime improve-
ments over the highest-quality previous method [4], as
well as performance improvements over all other base-
lines. |
Wang_Neural_Residual_Radiance_Fields_for_Streamably_Free-Viewpoint_Videos_CVPR_2023 | Abstract
The success of the Neural Radiance Fields (NeRFs) for
modeling and free-view rendering static objects has in-spired numerous attempts on dynamic scenes. Current tech-niques that utilize neural rendering for facilitating free-view videos (FVVs) are restricted to either offline render-ing or are capable of processing only brief sequences with
minimal motion. In this paper , we present a novel tech-
nique, Residual Radiance Field or ReRF , as a highly com-pact neural representation to achieve real-time FVV ren-
dering on long-duration dynamic scenes. ReRF explicitlymodels the residual information between adjacent times-
tamps in the spatial-temporal feature space, with a global
coordinate-based tiny MLP as the feature decoder . Specif-
ically, ReRF employs a compact motion grid along with aresidual feature grid to exploit inter-frame feature similar-
ities. We show such a strategy can handle large motions
without sacrificing quality. We further present a sequentialtraining scheme to maintain the smoothness and the spar-
sity of the motion/residual grids. Based on ReRF, we design
†The corresponding authors are Minye Wu (minye.wu@kuleuven.be) and Lan Xu (xulan1@shanghaitech.edu.cn).
a special FVV codec that achieves three orders of magni-
tudes compression rate and provides a companion ReRF
player to support online streaming of long-duration FVVs
of dynamic scenes. Extensive experiments demonstrate theeffectiveness of ReRF for compactly representing dynamicradiance fields, enabling an unprecedented free-viewpoint
viewing experience in speed and quality.
| 1. Introduction
Photo-realistic free-viewpoint videos (FVVs) of dy-
namic scenes, in particular, human performances, reduce
the gap between the performer and the viewer. But the goal
of producing and viewing FVVs as simple as clicking andviewing regular 2D videos on streaming platforms remains
far-reaching. The challenges range from data processing
and compression to streaming and rendering.
Geometry-based solutions reconstruct dynamic 3D
meshes or points [ 14,16], whereas image-based ones inter-
polate novel views on densely transmitted footages [ 6,83].
Both techniques rely on high-quality reconstructions thatare often vulnerable to occlusions and textureless regions.
Recent neural advances [ 44,61] bring an alternative route
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
76
that bypasses explicit geometric reconstruction. The sem-
inal work of the Neural Radiance Field (NeRF) [ 44] com-
pactly represents a static scene in a coordinate-based multi-layer perceptron (MLP) to conduct volume rendering at
photo-realism. The MLP can be viewed as an implicit fea-
ture decoder from a spatially continuous feature space to
the radiance output with RGB and density. However, us-ing even a moderately deep MLP can be too expensive forreal-time rendering. V arious extensions have hence focusedon “sculpting” the feature space using smart representationsto strike an intricate balance between computational speedand accuracy. Latest examples include explicit feature vol-umes [ 21,57,77], multi-scale hashing [ 45], codebook [ 59],
tri-planes [ 8], tensors [ 11,60], etc.
Although effective, by far nearly all methods are tailored
to handle static scenes. In contrast, streaming dynamic radi-ance fields require using a global coordinate-based MLP to
decode features from a spatial-temporally continuous fea-ture space into radiance outputs. A na ¨ıve per-frame solu-
tion would be to apply static methods [ 45,60] on a series of
independent spatial feature spaces. Such schemes discard
important temporal coherency, yielding low quality and in-
efficiency for long sequences. Recent methods attempt to
maintain a canonical feature space to reproduce features in
each live frame by temporally warping them back into thecanonical space. V arious schemes to compensate for tem-
poral motions have been proposed by employing implicitmatching [ 18,38,48,49,62] or data-driven priors such as
depth [ 73], Fourier features [ 67], optical flow [ 17,37], or
skeletal/facial motion priors [ 28,50,69,82]. However, heavy
reliance on the global canonical space makes them fragileto large motions or topology changes. The training over-
head also significantly increases according to the sequencelength. Recent work [ 34] sets out to explore feature redun-
dancy between adjacent frames but it falls short of main-taining a coherent spatial-temporal feature space.
In this paper, we present a novel neural modeling tech-
nique that we call the Residual Radiance Field or ReRF as a
highly compact representation of dynamic scenes, enablinghigh-quality FVV streaming and rendering (Fig. 1). ReRF
explicitly models the residual of the radiance field betweenadjacent timestamps in the spatial-temporal feature space.Specifically, we employ a global tiny MLP to approximateradiance output of the dynamic scene in a sequential man-ner. To maintain high efficiency in training and inference,ReRF models the feature space using an explicit grid rep-resentation analogous to [ 57]. However, ReRF only per-
forms the training on the first key frame to obtain an MLPdecoder for the whole sequence and at the same time it uses
the resulting grid volume as the initial feature volume. For
each subsequent frame, ReRF uses a compact motion grid
and a residual feature grid: the low-resolution motion gridrepresents the position offset from the current frame to theprevious whereas a sparse residual grid is used to compen-
sate for errors and newly observed regions. A major benefit
of such a design is that ReRF fully exploits feature similar-ities between adjacent frames where the complete featuregrid of the current frame can be simply obtained from thetwo while avoiding the use of a global canonical space. Inaddition, both motion and residual grids are amenable forcompression, especially for long-duration dynamic scenes.
We present a two-stage scheme to efficiently obtain the
ReRF from RGB videos via sequential training. In particu-lar, we introduce a novel motion pooling strategy to main-
tain the smoothness and compactness of the inter-frame mo-
tion grid along with sparsity regularizers to improve thecompactness of ReRF. To make ReRF practical for users,
we further design a ReRF-based codec that follows the
traditional keyframe-based strategy, achieving three ordersof magnitudes compression rate compared to per-frame-based neural representations [ 57]. Finally, we demonstrate
a companion ReRF player suitable for conducting onlinestreaming of long-duration FVVs of dynamic scenes. WithReRF, a user, for the first time, can pause, play, fast for-
ward/backward, and seek on dynamic radiance fields as
if viewing 2D videos, resulting in an unprecedented high-quality free-viewpoint viewing experience (see Fig. 2
).
To summarize, our contributions include:
• We introduce Residual Radiance Field (ReRF), a
novel neural representation, to support streamable
free-viewpoint viewing of dynamic radiance fields.
• We present tailored motion and residual grids to sup-
port sequential training and at the same time eliminate
the need for using a global canonical space notorious
for large motions. We further introduce a number oftraining strategies to achieve a high compression rate
while maintaining high rendering quality.
• We develop a ReRF-based codec and a companion
FVV player to stream dynamic radiance fields of long
sequences, with broad control functions.
|
Wang_YOLOv7_Trainable_Bag-of-Freebies_Sets_New_State-of-the-Art_for_Real-Time_Object_Detectors_CVPR_2023 | Abstract
Real-time object detection is one of the most important
research topics in computer vision. As new approaches re-
garding architecture optimization and training optimization
are continually being developed, we have found two re-
search topics that have spawned when dealing with these
latest state-of-the-art methods. To address the topics, we
propose a trainable bag-of-freebies oriented solution. We
combine the flexible and efficient training tools with the
proposed architecture and the compound scaling method.
YOLOv7 surpasses all known object detectors in both speed
and accuracy in the range from 5 FPS to 120 FPS and
has the highest accuracy 56.8% AP among all known real-
time object detectors with 30 FPS or higher on GPU V100.
Source code is released in https://github.com/
WongKinYiu/yolov7 .
| 1. Introduction
Real-time object detection is a very important topic in
computer vision, as it is often a necessary component in
computer vision systems. For example, multi-object track-
ing [90, 91], autonomous driving [17, 39], robotics [34, 55],
medical image analysis [33, 44], etc. The computing de-
vices that execute real-time object detection is usually some
mobile CPUs or GPUs, as well as various neural process-
ing units (NPUs). For example, the Apple neural engine
(Apple), the neural compute stick (Intel), Jetson AI edge
devices (Nvidia), the edge TPU (Google), the neural pro-
cessing engine (Qualcomm), the AI processing unit (Medi-
aTek), and the AI SoCs (Kneron), are all NPUs. Some of
edge devices focus on speeding up different operations such
as vanilla convolution, depth-wise convolution, or MLP op-
erations. In this paper, the real-time object detector we pro-
posed mainly hopes that it can support both mobile GPU
and GPU devices from the edge to the cloud.
In recent years, the real-time object detector is still devel-
oped for different edge devices. For example, the develop-
Figure 1. Comparison with other real-time object detectors, our
proposed methods achieve state-of-the-art performance.
ment of MCUNet [46,47] and NanoDet [51] focused on pro-
ducing low-power single-chip and improving the inference
speed on edge CPU. As for methods such as YOLOX [20]
and YOLOR [79], they focus on improving the inference
speed of various GPUs. More recently, the development of
real-time object detector has focused on the design of ef-
ficient architecture. As for real-time object detectors that
can be used on CPU [51, 81, 82, 86], their design is mostly
based on MobileNet [26, 27, 63], ShuffleNet [52, 89], or
GhostNet [24]. Another real-time object detectors are de-
veloped for GPU [20, 79, 94], they mostly use ResNet [25],
DarkNet [60], or DLA [85], and then use the CSPNet [77]
strategy to optimize the architecture. The development di-
rection of the proposed methods in this paper are different
from that of the current real-time object detectors. In ad-
dition to architecture optimization, our proposed methods
will focus on the optimization of the training process. Our
focus will be on some optimized modules and optimization
methods which may strengthen the training cost for improv-
ing the accuracy of object detection, but without increasing
the inference cost. We call these modules and optimization
methods trainable bag-of-freebies.
Recently, model re-parameterization [11, 12, 28] and dy-
namic label assignment [16, 19, 40] have become important
topics in network training and object detection. Mainly af-
ter the above new concepts are proposed, the training of
object detector evolves many new issues. In this paper, we
will present some of the new issues we have discovered and
devise effective methods to address them. For model re-
parameterization, we analyze the model re-parameterization
strategies applicable to layers in different networks with the
concept of gradient propagation path, and propose planned
re-parameterization model. In addition, when we discover
that with dynamic label assignment technology, the train-
ing of model with multiple output layers will generate new
issues. That is: “How to assign dynamic targets for the out-
puts of different branches?” For this problem, we propose
a new label assignment method called coarse-to-fine lead
guided label assignment.
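As a generic illustration of what model re-parameterization means, here is the standard Conv-BatchNorm folding example (assuming groups=1 and a 2D convolution); this is not YOLOv7's planned re-parameterization module, which is developed later in the paper.

```python
import torch

@torch.no_grad()
def fuse_conv_bn(conv, bn):
    """Fold a BatchNorm layer into the preceding convolution so that the
    deployed model uses a single equivalent convolution at inference time."""
    w = conv.weight.clone()
    b = conv.bias.clone() if conv.bias is not None else torch.zeros(w.size(0))
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std
    fused_w = w * scale.reshape(-1, 1, 1, 1)
    fused_b = (b - bn.running_mean) * scale + bn.bias
    fused = torch.nn.Conv2d(conv.in_channels, conv.out_channels,
                            conv.kernel_size, conv.stride,
                            conv.padding, bias=True)
    fused.weight.copy_(fused_w)
    fused.bias.copy_(fused_b)
    return fused
```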
The contributions of this paper are summarized as fol-
lows: (1) we design several trainable bag-of-freebies meth-
ods, so that real-time object detection can greatly improve
the detection accuracy without increasing the inference
cost; (2) for the evolution of object detection methods,
we found two new issues, namely how re-parameterization
module replaces original module, and how dynamic label
assignment strategy deals with assignment to different out-
put layers. In addition, we also propose methods to address
the difficulties arising from these issues; (3) we propose
“extend” and “compound scaling” methods for the real-time
object detector that can effectively utilize parameters and
computation; and (4) the method we proposed can effec-
tively reduce large amount of parameters and computation
of state-of-the-art real-time object detector, and has faster
inference speed and higher detection accuracy.
|
Wu_Referring_Multi-Object_Tracking_CVPR_2023 | Abstract
Existing referring understanding tasks tend to involve
the detection of a single text-referred object. In this paper,
we propose a new and general referring understanding task,
termed referring multi-object tracking (RMOT). Its core
idea is to employ a language expression as a semantic cue
to guide the prediction of multi-object tracking. To the best
of our knowledge, it is the first work to achieve an arbitrary
number of referent object predictions in videos. To push
forward RMOT, we construct one benchmark with scalable
expressions based on KITTI, named Refer-KITTI. Specifi-
cally, it provides 18 videos with 818 expressions, and each
expression in a video is annotated with an average of 10.7
objects. Further, we develop a transformer-based architec-
ture TransRMOT to tackle the new task in an online manner,
which achieves impressive detection performance and out-
performs other counterparts. The Refer-KITTI dataset and
the code are released at https://referringmot.github.io .
| 1. Introduction
Recently, referring understanding [ 5,17,33,55], inte-
grating natural language processing into scene perception,
has raised great attention in computer vision community.
It aims to localize regions of interest in images or videos
under the instruction of human language, which has many
applications, such as video editing and autonomous driving.
For referring understanding, several significant benchmarks
have been published. Flickr30k [ 53], ReferIt [ 15], and Re-
fCOCO/+/g [ 55] have greatly encouraged the development
of image-based referring tasks. More datasets ( e.g., Lin-
gual OTB99 [ 19], Cityscapes-Ref [ 42], Talk2Car [ 5], Refer-
DA VIS 17[17], and Refer-Youtube-VOS [ 38]) are further
∗Equal contribution. †Corresponding author: Jianbing Shen . This
work was supported in part by the FDCT grant SKL-IOTSC(UM)-2021-
2023, the Grant MYRG-CRG2022-00013-IOTSC-ICI, and the Start-up
Research Grant (SRG) of University of Macau (SRG2022-00023-IOTSC).
‡The work is done during the internship at MEGVII Technology.
(a) Query : the cars in the right
(b) Query : the cars which are turning
Figure 1. Representative examples from RMOT . The expression
query can refer to multiple objects of interest (a), and captures the
short-term status with accurate labels (b).
proposed to cover the application in videos.
Despite these advanced progress, previous benchmarks
have two typical limitations. First , each expression tends
to correspond to only one target. However, many objects
have the same semantics in an open world, i.e., one sin-
gle expression could refer to multiple objects. From this
side, existing datasets lack flexible simulation on the multi-
object scenarios, causing referring understanding tasks far
from satisfactory. Second , the given expression may only
describe part of frames for the video referring task, mak-
ing the correspondence inaccurate. For example, given the
expression ‘the car which is turning’, we have to predict
the overall trajectory even if the car has finished the turn-
ing action. Obviously, a single expression cannot cover all
short-term status of one target. Overall, existing datasets
fail to provide an accurate evaluation under the situations of
multiple referent targets and temporal status variances.
To address these problems, we propose a novel video un-
derstanding task guided by the language description, named
referring multi-object tracking (RMOT). Given a language
expression as a reference, it targets to ground all semanti-
cally matched objects in a video. Unlike previous tasks, our
proposed RMOT is much closer to the real environment,
as each expression can involve multiple objects. For in-
stance, the expression query ‘the cars in the right’ corre-
sponds to one object at the 20th frame but two objects at the
Example expressions overlaid in the figure: "The red cars", "The pedestrian", "The persons in the right", "The cars which are turning", "The black cars which are moving", "The cars which are slower than ours", "The parking cars", "The cars in the counter direction of ours", "The cars in left".
Figure 2. More examples of Refer-KITTI. It provides high-diversity scenes and high-quality annotations referred to by expressions.
40thframe (see Fig. 1(a)). The phenomenon indicates that
RMOT focuses more on finding the matched targets so that
the referent number can be flexibly changed. In addition,
the temporal status variances are also considered in RMOT.
As shown in Fig. 1(b), the given example shows the cars can
be detected only when they start the turning action, and the
tracking will be ended if they finish the activity.
To speed up the development of RMOT, we construct
a new benchmark, i.e., Refer-KITTI, concerning the traffic
scenes. It is developed from the public KITTI [ 9] dataset.
Compared to existing referring understanding datasets, it
has three distinguishing characteristics: i)High flexibility
with referent objects. The number of objects described by
each expression range from 0 to 105, with 10.7 on average.
ii)High temporal dynamics. The temporal status of targets
covers a longer time with more frames (varying in 0 ∼400
frames), and the temporal variance of targets is accurately
captured using our labeling tool. iii)Low labeling cost with
identification spread. We provide an effortless tool to anno-
tate a target tracklet using only two clicks.
Although RMOT has a more flexible referring setting,
it brings additional challenges: multi-object prediction and
cross-frame association. Towards this end, we propose an
end-to-end differentiable framework for RMOT. Our model
builds upon the recent DETR framework [ 3], enhanced by
powerful cross-modal reasoning and cross-frame conjunc-
tion. It has an encoder-decoder architecture. Specifically,
we design an early-fusion module in the encoder to densely
integrate visual and linguistic features, followed by a stack
of deformable attention layers for further refining the cross-
modal representations. In the decoder, query-based embed-
dings interact with the cross-modal features to predict ref-
erent boxes. To track multi-objects, similar to MOTR [ 57],
we decouple the object queries into track query for tracking
objects of previous frames and detect query for predicting
the bounding boxes of new-born objects.
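A high-level sketch of the decoder-side query handling just described, written for batch size 1; the prediction heads and the 0.5 keep-threshold are illustrative assumptions rather than TransRMOT's exact implementation.

```python
import torch

def rmot_decoder_step(decoder, box_head, ref_head,
                      cross_modal_memory, track_queries, detect_queries):
    """One frame of referent tracking: track queries carry identities of objects
    from previous frames, detect queries propose new-born objects; both attend
    to the text-fused visual memory produced by the encoder."""
    queries = torch.cat([track_queries, detect_queries], dim=0)  # [Nt+Nd, D]
    hidden = decoder(queries, cross_modal_memory)                # cross-attention decoding
    boxes = box_head(hidden)                                     # [Nt+Nd, 4]
    ref_scores = ref_head(hidden).sigmoid().squeeze(-1)          # referred-or-not per query
    keep = ref_scores > 0.5                                      # expression-matched objects
    return boxes[keep], hidden[keep]                             # kept embeddings become the
                                                                 # track queries at frame t+1
```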
In summary, our contributions are three-fold. First , we
propose a new task for referring multi-objects, called re-Dataset Video ImagesInstances
per-expressionTemporal ratio
per-expression
RefCOCO [ 55] - 26,711 1 1
RefCOCO+ [ 55] - 19,992 1 1
RefCOCOg [ 55] - 26,711 1 1
Talk2Car [ 5] ✓ 9,217 1 -
VID-Sentence [ 4]✓ 59,238 1 1
Refer-DA VIS 17[17]✓ 4,219 1 1
Refer-YV [ 38] ✓ 93,869 1 1
Refer-KITTI ✓ 6,650 10.7 0.49
Table 1. Comparison of Refer-KITTI with existing datasets.
Refer-YV is short for Refer-Youtube-VOS. The temporal ratio rep-
resents the average ratio of referent frames covering the entire
video sequence. ‘-’ means unavailable.
ferring multi-object tracking (RMOT). It tackles limitations
in the existing referring understanding tasks and provides
multi-object and temporally status-variant circumstances.
Second , we formulate a new benchmark, Refer-KITTI, to
help the community to explore this new field in depth. As
far as we know, it is the first dataset specializing in an ar-
bitrary number of object predictions. Third , we propose
an end-to-end framework built upon Transformer, termed
as TransRMOT. With powerful cross-modal learning, it pro-
vides impressive RMOT performance on Refer-KITTI com-
pared to hand-crafted RMOT methods.
|
Wen_BundleSDF_Neural_6-DoF_Tracking_and_3D_Reconstruction_of_Unknown_Objects_CVPR_2023 | Abstract
We present a near real-time (10Hz) method for 6-DoF
tracking of an unknown object from a monocular RGBD
video sequence, while simultaneously performing neural 3D
reconstruction of the object. Our method works for arbi-
trary rigid objects, even when visual texture is largely ab-
sent. The object is assumed to be segmented in the first
frame only. No additional information is required, and no
assumption is made about the interaction agent. Key to our
method is a Neural Object Field that is learned concur-
rently with a pose graph optimization process in order to
robustly accumulate information into a consistent 3D rep-
resentation capturing both geometry and appearance. A dy-
namic pool of posed memory frames is automatically main-
tained to facilitate communication between these threads.
Our approach handles challenging sequences with large
pose changes, partial and full occlusion, untextured sur-
faces, and specular highlights. We show results on HO3D,
YCBInEOAT, and BEHAVE datasets, demonstrating that
our method significantly outperforms existing approaches.
Project page: https://bundlesdf.github.io/
| 1. Introduction
Two fundamental (and closely related) problems in com-
puter vision are 6-DoF (“degree of freedom”) pose tracking
and 3D reconstruction of an unknown object from a monoc-
ular RGBD video. Solving these problems will unlock
a wide range of applications in areas such as augmented
reality [34], robotic manipulation [22, 70], learning-from-
demonstration [71], and sim-to-real transfer [1, 15].Prior efforts often consider these two problems sepa-
rately. For example, neural scene representations have
achieved great success in creating high quality 3D object
models from real data [3, 40, 44, 59, 68, 81]. These ap-
proaches, however, assume known camera poses and/or
ground-truth object masks. Furthermore, capturing a static
object by a dynamically moving camera prevents full 3D
reconstruction ( e.g., the bottom of the object is never seen
if resting on a table). On the other hand, instance-level
6-DoF object pose estimation and tracking methods of-
ten require a textured 3D model of the test object before-
hand [24, 28, 66, 72, 73] for pre-training and/or online tem-
plate matching. While category-level methods enable gen-
eralization to new object instances within the same cate-
gory [7,27,62,67,74], they struggle with out-of-distribution
object instances and unseen object categories.
To overcome these limitations, in this paper we pro-
pose to solve these two problems jointly. Our method as-
sumes that the object is rigid, and it requires a 2D object
mask in the first frame of the video. Apart from these
two requirements, the object can be moved freely through-
out the video, even undergoing severe occlusion. Our ap-
proach is similar in spirit to prior work in object-level
SLAM [35, 36, 50–52, 64, 85], but we relax many common
assumptions, allowing us to handle occlusion, specularity,
lack of visual texture and geometric cues, and abrupt object
motion. Key to our method is an online pose graph opti-
mization process, a concurrent Neural Object Field to re-
construct the 3D shape and appearance, and a memory pool
to facilitate communication between the two processes. The
robustness of our method is highlighted in Fig. 1.
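A schematic of the two concurrent processes and the memory pool as we read this description; every object and method name here is a placeholder, and in the actual system the field training runs in a parallel thread rather than inline.

```python
def track_and_reconstruct(frames, first_mask, tracker, object_field, memory_pool):
    """Causal loop sketch: each new RGBD frame is registered against posed
    memory frames (pose tracking + online pose graph optimization), while a
    concurrent thread keeps training the Neural Object Field on the pool."""
    memory_pool.add(frames[0], pose=None, mask=first_mask)  # first frame fixes the object frame
    for frame in frames[1:]:
        pose = tracker.register(frame, memory_pool)   # 6-DoF pose w.r.t. memory frames
        if memory_pool.is_keyframe(frame, pose):      # keep only informative viewpoints
            memory_pool.add(frame, pose)
        object_field.train_step(memory_pool)          # concurrent neural reconstruction
    return object_field.extract_mesh(), memory_pool.poses()
```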
Our contributions can be summarized as follows:
•A novel method for causal 6-DoF pose tracking and 3D
reconstruction of a novel unknown dynamic object. This
method leverages a novel co-design of concurrent track-
ing and neural reconstruction processes that run online in
near real-time while largely reducing tracking drift.
•We introduce a hybrid SDF representation to deal with
uncertain free space caused by the unique challenges in a
dynamic object-centric setting, such as noisy segmenta-
tion and external occlusions from interaction.
•Experiments on three public benchmarks demonstrate
state-of-the-art performance against leading methods.
|
Wei_Fine-Grained_Classification_With_Noisy_Labels_CVPR_2023 | Abstract
Learning with noisy labels (LNL) aims to ensure model
generalization given a label-corrupted training set. In this
work, we investigate a rarely studied scenario of LNL on
fine-grained datasets (LNL-FG), which is more practical
and challenging as large inter-class ambiguities among
fine-grained classes cause more noisy labels. We empiri-
cally show that existing methods that work well for LNL
fail to achieve satisfying performance for LNL-FG, aris-
ing the practical need of effective solutions for LNL-FG.
To this end, we propose a novel framework called stochas-
tic noise-tolerated supervised contrastive learning (SNSCL)
that confronts label noise by encouraging distinguishable
representation. Specifically, we design a noise-tolerated
supervised contrastive learning loss that incorporates a
weight-aware mechanism for noisy label correction and se-
lectively updating momentum queue lists. By this mecha-
nism, we mitigate the effects of noisy anchors and avoid
inserting noisy labels into the momentum-updated queue.
Besides, to avoid manually-defined augmentation strategies
in contrastive learning, we propose an efficient stochas-
tic module that samples feature embeddings from a gener-
ated distribution, which can also enhance the representa-
tion ability of deep models. SNSCL is general and compati-
ble with prevailing robust LNL strategies to improve their
performance for LNL-FG. Extensive experiments demon-
strate the effectiveness of SNSCL.
| 1. Introduction
Learning from noisy labels [12, 13, 18, 21, 26, 40, 55, 58]
poses great challenges for training deep models, whose per-
formance heavily relies on large-scaled labeled datasets [28,
47–49]. Annotating data with high confidence would be
resource-intensive, especially for some domains, such as
medical and remote sensing images [29, 36, 37, 41, 46].
∗denotes corresponding author
Figure 1. LNL-FG is more challenging than LNL on generic classification. (Figure: panels contrast random and dependent label noise for LNL on generic classification versus LNL-FG on fine-grained classification; highlighted points denote mislabeled samples.)
Thus, label noise would inevitably arise and then greatly
degrade the generalization performance of deep models.
Previous methods [1,6,7,9,18,23,38,53,54] in LNL al-
ways focus on generic classification ( e.g.CIFAR-10 & 100)
and artificially construct random label noise [21, 23, 42, 43]
and dependent label noise [9, 18, 38, 53, 55] to evaluate the
performance of their algorithms. In this work, we extend
LNL to fine-grained classification, which is a rarely studied
task. Firstly, this scenario is more realistic since annotators are more easily misled by the indistinguishable characteristics among fine-grained images and may assign an uncertain target. Fig. 1 illustrates the comparison between two types
of noise simulated on generic and fine-grained sets. Fur-
ther, we extensively investigate the performance of prevail-
ing LNL methods on our proposed LNL-FG task. The de-
tailed results are shown in Fig. 2. Although these robust al-
gorithms lead to statistically significant improvements over
vanilla softmax cross-entropy on LNL, these gains do not
transfer to LNL-FG task. Instead, some methods degrade
the generalization performance of deep models compared
to cross-entropy. Intuitively, due to large inter-class ambi-
guity among those classes in LNL-FG, the margin between
noisy samples and the decision boundary in the fine-grained
dataset is smaller than that in the generic dataset, leading
to severe overfitting of deep models to noisy labels. De-
spite this fact, the typical method for better representation,
Figure 2. Comparison results of previous methods on four fine-grained benchmarks with 20% random label noise. Methods with the same color and shape belong to the same strategy. The X-axis denotes their performance on typical LNL tasks, while the Y-axis denotes that on LNL-FG tasks (LNL-FG accuracy, %). It is obvious that not all robust methods outperform vanilla cross-entropy on the LNL-FG task. More analysis and results can be found in Appx. A. (Scatter-plot points: Vanilla, SYM, GCE, Conf. p, Label s, Co-teaching, JoCoR, MW-Net, MLC, DivideMix.)
i.e., DivideMix, consistently achieves better performance
on both LNL and LNL-FG tasks (See Fig. 2). From this
perspective, we consider that encouraging discriminative features not only counteracts overfitting to label noise but also facilitates learning the fine-grained task.
For this, contrastive learning (CL), as a powerful unsu-
pervised learning approach for generating discriminative features [4, 8, 11, 14, 31], has attracted our attention. CL methods
usually cast pretext similarity-measurement tasks derived from an unlabeled dataset as supervised objectives, and can thereby learn effective visual repre-
sentations in downstream tasks, especially for fine-grained
classification [3]. The following work, supervised con-
trastive learning (SCL) [15], leverages label information to
further enhance representation learning, which can avoid a
vast training batch and reduce the memory cost. However,
SCL cannot be directly applied to the noisy scenario, as it lacks a noise-tolerance mechanism.
To resolve the noise-sensitivity of SCL, we propose
a novel framework named stochastic noise-tolerated su-
pervised contrastive learning (SNSCL), which contains a
noise-tolerated contrastive loss and a stochastic module.
For the noise-tolerated contrastive loss, we attribute the noise sensitivity of SCL to two sources: noisy anchors and noisy query keys in the momentum queue. To mitigate the negative effects introduced by noisy anchors or query keys, we design a weighting mechanism that measures the reliability score of each sample and assigns a corresponding weight. Based on these weights, we modify the labels of noisy anchors in the current training batch and selectively update the momentum queue to reduce the probability of admitting noisy query keys. These operations are adaptive
and can achieve a progressive learning process. Besides, to
avoid manual adjustment of strong augmentation strategies
for SCL, we propose a stochastic module for more com-
plex feature transformation. In practice, this module gener-
ates the probabilistic distribution of feature embedding. By
sampling operation, SNSCL achieves better generalization
performance for LNL-FG.
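(As a rough, illustrative sketch of what a weight-aware supervised contrastive objective of this kind can look like, consider the loss below, in which every sample carries a reliability weight that scales its contribution both as an anchor and as a positive. The exact weighting scheme, label correction, and momentum-queue update of SNSCL are not reproduced here; the formulation and names are assumptions for illustration only.)

```python
import torch

def weighted_supcon_loss(feats, labels, weights, temperature=0.1):
    """Supervised contrastive loss with per-sample reliability weights.

    feats   : (N, D) L2-normalized embeddings.
    labels  : (N,)   (possibly noisy) class labels.
    weights : (N,)   reliability scores in [0, 1]; low weight down-plays a sample.
    """
    n = feats.shape[0]
    sim = feats @ feats.t() / temperature                        # pairwise similarities
    off_diag = ~torch.eye(n, dtype=torch.bool, device=feats.device)
    pos_mask = (labels[:, None] == labels[None, :]) & off_diag   # same-label pairs

    sim = sim.masked_fill(~off_diag, float('-inf'))              # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)   # log-softmax per anchor

    pos_weight = pos_mask.float() * weights[None, :]             # unreliable positives count less
    denom = pos_weight.sum(dim=1).clamp(min=1e-8)
    loss_per_anchor = -(pos_weight * log_prob).sum(dim=1) / denom

    # unreliable anchors also contribute less to the overall loss
    return (weights * loss_per_anchor).sum() / weights.sum().clamp(min=1e-8)
```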
Our contributions can be summarized as follows:
• We consider a rarely studied LNL task, dubbed LNL-FG, and conduct an empirical investigation to show that
some existing methods in LNL cannot achieve satisfy-
ing performance for LNL-FG.
• We design a novel framework dubbed stochastic noise-
tolerated supervised contrastive learning (SNSCL),
which alters the noisy labels for anchor samples and
selectively updates the momentum queue, avoiding the
effects of noisy labels on SCL.
• We design a stochastic module to avoid manually-
defined augmentation, improving the performance of
SNSCL on representation learning.
• Our proposed SNSCL is generally applicable to pre-
vailing LNL methods and significantly improves their
performance on LNL-FG.
Extensive experiments on four fine-grained datasets and
two real-world datasets consistently demonstrate the state-
of-the-art performance of SNSCL, and further analyses verify its effectiveness.
|
Wang_Co-SLAM_Joint_Coordinate_and_Sparse_Parametric_Encodings_for_Neural_Real-Time_CVPR_2023 | Abstract
We present Co-SLAM, a neural RGB-D SLAM system
based on a hybrid representation, that performs robust cam-
era tracking and high-fidelity surface reconstruction in real
time. Co-SLAM represents the scene as a multi-resolution
hash-grid to exploit its high convergence speed and abil-
ity to represent high-frequency local features. In addi-
tion, Co-SLAM incorporates one-blob encoding, to encour-
age surface coherence and completion in unobserved ar-
eas. This joint parametric-coordinate encoding enables
real-time and robust performance by bringing the best of
both worlds: fast convergence and surface hole filling.
Moreover, our ray sampling strategy allows Co-SLAM to
perform global bundle adjustment over all keyframes in-
stead of requiring keyframe selection to maintain a small
number of active keyframes as competing neural SLAM ap-
proaches do. Experimental results show that Co-SLAM runs
at10−17Hz and achieves state-of-the-art scene reconstruc-
tion results, and competitive tracking performance in vari-
ous datasets and benchmarks (ScanNet, TUM, Replica, Syn-
thetic RGBD). Project page: https://hengyiwang.
github.io/projects/CoSLAM
⋆Indicates equal contribution. | 1. Introduction
Real-time joint camera tracking and dense surface re-
construction from RGB-D sensors has been a core problem
in computer vision and robotics for decades. Traditional
SLAM solutions exist that can robustly track the position of
the camera while fusing depth and/or color measurements
into a single high-fidelity map. However, they rely on hand-
crafted loss terms and do not exploit data-driven priors.
Recent attention has turned to learning-based models
that can exploit the ability of neural network architectures
to learn smoothness and coherence priors directly from data.
Coordinate-based networks have probably become the most
popular representation, since they can be trained to predict
the geometric and appearance properties of any point in the
scene in a self-supervised way, directly from images. The
most notable example, Neural Radiance Fields (NeRF) [14],
encodes scene density and color in the weights of a neural
network. In combination with volume rendering, NeRF is
trained to re-synthesize the input images and has a remark-
able ability to generalize to nearby unseen views.
Coordinate-based networks embed input point coordi-
nates into a high dimensional space, using sinusoidal or
other frequency embeddings, allowing them to capture
high-frequency details that are essential for high-fidelity
geometry reconstruction [1]. Combined with the smooth-
Figure 2. Illustration of the effect of different encodings on completion (panels — COORDINATE: Frequency, OneBlob; PARAMETRIC: DenseGrid, HashGrid; JOINT: DenseGrid+OneBlob, HashGrid+OneBlob (Ours); REFERENCE: NICE-SLAM [42], GT). COORDINATE-based encodings achieve hole filling but require long training times. PARAMETRIC encodings allow fast training, but fail to complete unobserved regions. JOINT coordinate and parametric encoding (Ours) allows smooth scene completion and fast training. NICE-SLAM [42] uses a dense parametric encoding.
ness and coherence priors inherently encoded in the MLP
weights, they constitute a good choice for sequential track-
ing and mapping [26]. However, the weakness of MLP-
based approaches is the long training times required (some-
times hours) to learn a single scene. For that reason, re-
cent real-time capable SLAM systems built on coordinate
networks with frequency embeddings such as iMAP [26]
need to resort to strategies to sparsify ray sampling and re-
duce tracking iterations to maintain interactive operation.
This comes at the cost of loss of detail in the reconstruc-
tions which are oversmoothed (Fig. 5) and potential errors
in camera tracking.
Optimizable feature grids, also known as parametric
embeddings, have recently become a powerful alternative
scene representation to monolithic MLPs, given their ability
to represent high-fidelity local features and their extremely
fast convergence (orders of magnitude faster) [7, 10, 15, 32,
40]. Recent efforts focus on sparse alternatives to these
parametric embeddings such as octrees [28], tri-plane [2],
hash-grid [15] or sparse voxel grid [12, 13] to improve the
memory efficiency of dense grids. While these represen-
tations can be fast to train and are therefore well suited to
real-time operation, they fundamentally lack the smooth-
ness, and coherence priors inherent to MLPs and strug-
gle with hole-filling in areas without observation. NICE-
SLAM [42] is a recent example of a multi-resolution fea-
ture grid-based SLAM method. Although it does not suffer
from over-smoothness and captures local detail (as shown
in Fig. 2) it cannot perform hole-filling which might in turn
lead to drift in camera pose estimation.
Our first contribution is to design a joint coordinate and
sparse grid encoding for input points that brings together
the benefits of both worlds to the real-time SLAM frame-work. On the one hand, the smoothness and coherence pri-
ors provided by coordinate encodings (we use one-blob [16]
encoding), and on the other hand the optimization speed
and local details of sparse feature encodings (we use hash
grid [15]), resulting in more robust camera tracking and
high-fidelity maps with better completion and hole filling.
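(To make the coordinate half of this joint encoding concrete, the snippet below sketches one-blob encoding: each coordinate in [0, 1] is encoded by evaluating a Gaussian bump centered at its value on a fixed set of bins, and the result would be concatenated with features interpolated from a multi-resolution hash grid, which is not shown here and is typically provided by a library such as tiny-cuda-nn. The bin count and kernel width are illustrative assumptions.)

```python
import torch

def one_blob_encoding(x, n_bins=16):
    """One-blob encoding of coordinates normalized to [0, 1].

    x : (N, 3) points inside the unit cube.
    Returns (N, 3 * n_bins): for every coordinate, a Gaussian bump centered at
    its value, evaluated at n_bins equally spaced bin centers.
    """
    centers = (torch.arange(n_bins, device=x.device) + 0.5) / n_bins   # (n_bins,)
    sigma = 1.0 / n_bins                                               # kernel width ~ bin size
    diff = x.unsqueeze(-1) - centers                                   # (N, 3, n_bins)
    return torch.exp(-0.5 * (diff / sigma) ** 2).flatten(start_dim=1)

# The scene MLP would then consume torch.cat([one_blob_encoding(x), hash_grid(x)], -1):
# the one-blob part supplies smoothness/coherence, the grid part supplies local detail.
```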
Our second contribution relates to the bundle adjustment
(BA) step in the joint optimization of the map and cam-
era poses. So far, all neural SLAM systems [26, 42] per-
form BA using rays sampled from a very small subset of
selected keyframes. Restricting the optimization to a very
small number of viewpoints results in decreased robustness
in camera tracking and increased computation due to the
need for a keyframe-selection strategy. Instead, Co-SLAM
performs global BA, sampling rays from all past keyframes,
which results in an important boost in robustness and per-
formance in pose estimation. In addition, we show that
our BA optimization requires a fraction of the iterations of
NICE-SLAM [42] to achieve similar errors. In practice, Co-
SLAM achieves SOTA performance in camera tracking and
3D reconstruction while maintaining real time performance.
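(The sketch below illustrates, under assumed names and data layout, what "sampling rays from all past keyframes" can mean operationally: every keyframe contributes sampled pixels to one flat buffer, and each bundle-adjustment iteration draws uniformly from that whole buffer rather than from a handful of selected keyframes. This is a schematic, not the paper's implementation.)

```python
import torch

class GlobalRayBuffer:
    """Flat store of sampled pixels from every keyframe, for global bundle adjustment."""

    def __init__(self):
        self.entries = []   # each item: (M_k, 7) tensor = kf_id(1), u(1), v(1), rgb(3), depth(1)

    def add_keyframe(self, kf_id, uv, rgb, depth):
        kf_col = torch.full((uv.shape[0], 1), float(kf_id))
        self.entries.append(torch.cat([kf_col, uv.float(), rgb, depth], dim=-1))

    def sample(self, n_rays):
        """Draw n_rays uniformly from ALL stored keyframes, not a selected subset."""
        all_px = torch.cat(self.entries, dim=0)
        idx = torch.randint(0, all_px.shape[0], (n_rays,))
        return all_px[idx]

# Each BA step turns the sampled (kf_id, u, v) into rays using the *current* pose
# estimate of that keyframe, renders them with the scene representation, and applies
# photometric/depth losses to jointly update the map and the keyframe poses.
```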
Co-SLAM runs at 15-17Hz on Replica and Syn-
thetic RGB-D datasets [1], and 12-13Hz on ScanNet [5]
and TUM [25] scenes — faster than NICE-SLAM (0.1-
1Hz) [42] and iMAP [26]. We perform extensive
evaluations on various datasets (Replica [24], Synthetic
RGBD [1], ScanNet [6], TUM [25]) where we outperform
NICE-SLAM [42] and iMAP [26] in reconstruction and
achieve better or at least on-par tracking accuracy.
|
Wei_Super-Resolution_Neural_Operator_CVPR_2023 | Abstract
We propose Super-resolution Neural Operator (SRNO),
a deep operator learning framework that can resolve high-
resolution (HR) images at arbitrary scales from the low-
resolution (LR) counterparts. Treating the LR-HR image
pairs as continuous functions approximated with different
grid sizes, SRNO learns the mapping between the corre-
sponding function spaces. From the perspective of ap-
proximation theory, SRNO first embeds the LR input into
a higher-dimensional latent representation space, trying to
capture sufficient basis functions, and then iteratively ap-
proximates the implicit image function with a kernel inte-
gral mechanism, followed by a final dimensionality reduc-
tion step to generate the RGB representation at the target
coordinates. The key characteristics distinguishing SRNO
from prior continuous SR works are: 1) the kernel integral
in each layer is efficiently implemented via the Galerkin-
type attention, which possesses non-local properties in the
spatial domain and therefore benefits the grid-free contin-
uum; and 2) the multilayer attention architecture allows for
the dynamic latent basis update, which is crucial for SR
problems to “hallucinate” high-frequency information from
the LR image. Experiments show that SRNO outperforms
existing continuous SR methods in terms of both accuracy
and running time. Our code is at https://github.com/
2y7c3/Super-Resolution-Neural-Operator .
| 1. Introduction
Single image super-resolution (SR) addresses the in-
verse problem of reconstructing high-resolution (HR) im-
ages from their low-resolution (LR) counterparts. In a data-
driven way, deep neural networks (DNNs) learn the inver-
sion map from many LR-HR sample pairs and have demon-
strated appealing performances [4, 20,21,24,29,40,41].
Nevertheless, most DNNs are developed in the configura-
tion of single scaling factors, which cannot be used in sce-
*Equal contributions. This work is supported by the National Natural
Science Foundation of China (61871055).
†Corresponding author.
Figure 1. Overview of the Super-Resolution Neural Operator (SRNO), illustrated on Set5, DIV2K, and Urban100 examples. SRNO is composed of three parts, $\mathcal{L}$ (Lifting), $\mathcal{K}$ (kernel integrals), and $\mathcal{P}$ (Projection), which act consecutively to learn mappings between approximation spaces $\mathcal{H}(\Omega_{h_c})$ and $\mathcal{H}(\Omega_{h_f})$ associated with grid sizes $h_c$ and $h_f$, respectively. The key component, $\mathcal{K}$, uses test functions in the latent Hilbert space $\mathcal{V}(\Omega_{h_f})$ to seek instance-specific basis functions.
narios requiring arbitrary SR factors [37, 38]. Recently, im-
plicit neural functions (INF) [5, 18] have been proposed to
represent images in arbitrary resolution, and paving a feasi-
ble way for continuous SR. These networks, as opposed to
storing discrete signals in grid-based formats, represent sig-
nals with evaluations of continuous functions at specified
coordinates, where the functions are generally parameter-
ized by a multi-layer perceptron (MLP). To share knowl-
edge across instances instead of fitting individual functions
for each signal, encoder-based methods [5, 15,26] are pro-
posed to retrieve latent codes for each signal, and then a de-
coding MLP is shared by all the instances to generate the
required output, where both the coordinates and the cor-
responding latent codes are taken as input. However, the
point-wise behavior of MLP in the spatial dimensions re-
sults in limited performance when decoding various objects,
particularly for high-frequency components [30, 32].
Neural operator is a newly proposed neural network ar-
chitecture in the field of computational physics [17, 19,23]
for numerically efficient solvers of partial differential equa-
tions (PDE). Stemming from operator theory, neural op-
erators learn mappings between infinite-dimensional func-
tion spaces, which is inherently capable of continuous func-
tion evaluations and has shown promising potentials in var-
ious applications [9, 13,27]. Typically, neural operator con-
sists of three components: 1) lifting, 2) iterative kernel in-
tegral, and 3) projection. The kernel integrals operate in the
spatial domain, and thus can explicitly capture the global
relationship constraining the underlying solution function
of the PDE. The attention mechanism in transformers [36]
is a special case of kernel integral where linear transforms
are first exerted to the feature maps prior to the inner prod-
uct operations [17]. Tremendous successes of transformers
in various tasks [6, 22,34] have shown the importance of
capturing global correlations, and this is also true for SR to
improve performance [8].
In this paper, we propose the super-resolution neural op-
erator (SRNO), a deep operator learning framework that
can resolve HR images from their LR counterparts at ar-
bitrary scales. As shown in Fig.1, SRNO learns the map-
ping between the corresponding function spaces by treating
the LR-HR image pairs as continuous functions approxi-
mated with different grid sizes. The key characteristics dis-
tinguishing SRNO from prior continuous SR works are: 1)
the kernel integral in each layer is efficiently implemented
via the Galerkin-type attention, which possesses non-local
properties in the spatial dimensions and has been proven
to be comparable to a Petrov-Galerkin projection [3]; and
2) the multilayer attention architecture allows for the dy-
namic latent basis update, which is crucial for SR problems
to “hallucinate” high-frequency information from the LR
image. When employing the same encoders to capture features,
our method outperforms previous continuous SR methods
in terms of both reconstruction accuracy and running time.
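(For readers unfamiliar with the operator, the following is a generic sketch of Galerkin-type attention as introduced in [3]: the softmax is dropped, layer normalization is applied to the keys and values, and the key–value product is formed first, so the cost is linear in the number of spatial positions. This illustrates the attention form only, not the exact SRNO block.)

```python
import torch
import torch.nn as nn

class GalerkinAttention(nn.Module):
    """Softmax-free (Galerkin-type) attention: linear in the number of positions."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.norm_k = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, x):                       # x: (B, N, dim), N = number of positions
        q = self.to_q(x)
        k = self.norm_k(self.to_k(x))           # normalization on keys and values
        v = self.norm_v(self.to_v(x))           # replaces the usual softmax on q @ k^T
        context = k.transpose(1, 2) @ v         # (B, dim, dim): global "basis" correlations
        return (q @ context) / x.shape[1]       # (B, N, dim)
```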
In summary, our main contributions are as follows:
• We propose the methodology of super-resolution neu-
ral operator that maps between finite-dimensional
function spaces, allowing for continuous and zero-shot
super-resolution irrespective of the discretization used on
the input and output spaces.
• We develop an architecture for SRNO that first ex-
plores the common latent basis for the whole training
set and subsequently refines an instance-specific basis
by the Galerkin-type attention mechanism.
• Numerically, we show that the proposed SRNO outper-
forms existing continuous SR methods with less run-
ning time, and even generates better results on the res-
olutions for which the fixed scale SR networks were
trained. |
Wu_PointConvFormer_Revenge_of_the_Point-Based_Convolution_CVPR_2023 | Abstract
We introduce PointConvFormer, a novel building block
for point cloud based deep network architectures. Inspired
by generalization theory, PointConvFormer combines ideas
from point convolution, where filter weights are only based
on relative position, and Transformers which utilize feature-
based attention. In PointConvFormer, attention computed
from feature difference between points in the neighborhood
is used to modify the convolutional weights at each point.
Hence, we preserved the invariances from point convolu-
tion, whereas attention helps to select relevant points in the
neighborhood for convolution. PointConvFormer is suitable
for multiple tasks that require details at the point level, such
as segmentation and scene flow estimation tasks. We exper-
iment on both tasks with multiple datasets including Scan-
Net, SemanticKitti, FlyingThings3D and KITTI. Our re-
sults show that PointConvFormer offers a better accuracy-
speed tradeoff than classic convolutions, regular transform-
ers, and voxelized sparse convolution approaches. Visual-
izations show that PointConvFormer performs similarly to
convolution on flat areas, whereas the neighborhood selec-
tion effect is stronger on object boundaries, showing that it
has got the best of both worlds. The code will be available.
| 1. Introduction
Depth sensors for indoor and outdoor 3D scanning have
significantly improved in terms of both performance and
affordability. Hence, their common data format, the 3D
point cloud, has drawn significant attention from academia
and industry. Understanding the 3D real world from point
clouds can be applied to many application domains, e.g.
robotics, autonomous driving, CAD, and AR/VR. However,
unlike image pixels arranged in regular grids, 3D points are
unstructured, which makes applying grid based Convolu-
tional Neural Networks (CNNs) difficult.
Various approaches have been proposed in response to
this challenge. [3, 5, 31, 35, 37, 65] introduced interesting
ways to project 3D point clouds back to 2D image space
*this work was done entirely at Apple Inc., Wenxuan Wu was an intern
at Apple Inc. when he participated in the work
Figure 1. Performance vs. running time on ScanNet. PointConvFormer achieves a state-of-the-art 74.5% mIoU while being efficient, with faster speed and far fewer learnable parameters. Larger dots indicate more learnable parameters. All results are reported on a single TITAN RTX GPU. The underlying data:
Input grid size | Method | Runtime (ms) | # Params (M) | mIoU (%)
10cm | MinkowskiNet | 52.9 | 37.9 | 60.7
10cm | PointConv | 23.4 | 5.4 | 62.6
10cm | FastPointTransformer | 140.0 | 37.9 | 66.5
10cm | PointConvFormer | 41.9 | 5.5 | 71.4
10cm | PointConvFormer-Lite | 29.4 | 1.6 | 70.6
5cm | MinkowskiNet | 73.5 | 37.9 | 67.0
5cm | PointConv | 36.7 | 5.4 | 68.5
5cm | FastPointTransformer | 147.7 | 37.9 | 70.0
5cm | PointConvFormer | 59.6 | 5.5 | 73.7
5cm | PointConvFormer-Lite | 50.5 | 1.9 | 73.3
2cm | MinkowskiNet | 115.6 | 37.9 | 72.2
2cm | PointConv | 88.8 | 9.3 | 70.3
2cm | FastPointTransformer | 312.0 | 37.9 | 72.5
2cm | Stratified Transformer | 1689.3 | 18.8 | 74.3
2cm | PointConvFormer | 145.5 | 9.4 | 74.5
and apply 2D convolution. Another line of research di-
rectly voxelizes the 3D space and apply 3D discrete con-
volution, but it induces massive computation and memory
overhead [47,62]. Sparse 3D convolution operations [9,20]
save a significant amount of computation by computing
convolution only on occupied voxels.
Some approaches directly operate on point clouds [41,
57,58,64,69,81]. [57,58] are pioneers which aggregate in-
formation on point clouds using max-pooling layers. Others
proposed to reorder the input points with a learned transfor-
mation [42], a flexible point kernel [69], and a convolutional
operation that directly work on point clouds [75, 81] which
utilizes a multi-layer perceptron (MLP) to learn convolu-
tion weights implicitly as a nonlinear transformation from
the relative positions of the local neighbourhood.
The approach to directly work on points is appealing to
us because it allows direct manipulation of the point co-
ordinates, thus being able to encode rotation/scale invari-
ance/equivariance directly into the convolution weights [41,
42, 58, 94]. These invariances serve as priors to make
the models more generalizable. Besides, point-based ap-
proaches require less parameters than voxel-based ones,
which need to keep e.g. 3×3×3convolution kernels
on all input and output channels. Finally, point-based ap-
proaches can utilize k-nearest neighbors (kNN) to find the
local neighborhood, thus can adapt to variable sampling
densities in different 3D locations.
However, so far the methods with the best accuracy-
speed tradeoff have still been the sparse voxel-based ap-
proaches or a fusion between sparse voxel and point-based
models. Note that no matter the voxel-based or point-based
representation, the information from the input is exactly the
same, so it is unclear why fusion is needed. Besides, fu-
sion adds significantly to model complexity and memory
usage. This leads us to question the component that is in-
deed different between these representations: the general-
ization w.r.t. the irregular local neighbourhood. The shape
of the kNN neighbourhood in point-based approaches varies
in different parts of the point cloud. Irrelevant points from
other objects, noise and the background might be included
in the neighborhood, especially around object boundaries,
which can be detrimental to the performance and the ro-
bustness of point-based models.
To improve the robustness of models with kNN neigh-
borhoods, we refer back to the generalization theory of
CNNs, which indicates that points with significant fea-
ture correlation should be included in the same neighbor-
hood [40]. A key idea in this paper is that feature corre-
lation can be a way to filter out irrelevant neighbors in a
kNN neighborhood, which makes the subsequent convolu-
tion more generalizable. We introduce PointConvFormer,
which computes attention weights based on feature differ-
ences and uses that to reweight the points in the neighbor-
hood in a point-based convolutional model, which indirectly
“improves” the neighborhood for generalization.
The idea of using feature-based attention is not new, but
there are important differences between PointConvFormer
and other vision transformers [16,54,96]. PointConvFormer
combines features in the neighborhood with point-wise con-
volution, whereas Transformer attention models usually
adopt softmax attention in this step. In our formulation,
the positional information is outside the attention, hence
viewpoint-invariance can be introduced into the convoulu-
tional weights. We believe that invariance helps generaliz-
ing across neighborhood (size/rotation) differences between
training/testing sets, especially with a kNN neighborhood.
We evaluate PointConvFormer on two point cloud tasks,
semantic segmentation and scene flow estimation. For
semantic segmentation, experiment results on the indoor
ScanNet [11] and the outdoor SemanticKitti [2] demon-
strate superior performances over classic convolution and
transformers with a much more compact network. The per-
formance gap is the most significant at low resolutions, e.g.
on ScanNet with a 10cm resolution we achieved more than
10% improvement over MinkowskiNet with only 15% of
its parameters (Fig. 1). We also apply PointConvFormer
as the backbone of PointPWC-Net [82] for scene flow esti-mation, and observe significant improvements on FlyingTh-
ings3D [48] and KITTI scene flow 2015 [52] datasets as
well. These results show that PointConvFormer could po-
tentially compete with sparse convolution as the backbone
choice for dense prediction tasks on 3D point clouds.
|
Wu_NewsNet_A_Novel_Dataset_for_Hierarchical_Temporal_Segmentation_CVPR_2023 | Abstract
Temporal video segmentation is the go-to automatic
video analysis, which decomposes a long-form video into
smaller components for follow-up understanding
tasks. Recent works have studied several levels of granu-
larity to segment a video, such as shot, event, and scene.
Those segmentations can help compare the semantics in
the corresponding scales, but lack a wider view of larger
temporal spans, especially when the video is complex and
structured. Therefore, we present two abstractive levels of
temporal segmentations and study their hierarchy to the ex-
isting fine-grained levels. Accordingly, we collect NewsNet,
the largest news video dataset consisting of 1,000 videos
†Equal Contribution
Corresponding Authors: Bing Li (bing.li@kaust.edu.sa) and Ruizhi
Qiao (ruizhiqiao@tencent.com).
in over 900 hours, associated with several tasks for hierar-
chical temporal video segmentation. Each news video is
a collection of stories on different topics, represented as
aligned audio, visual, and textual data, along with exten-
sive frame-wise annotations in four granularities. We assert
that the study on NewsNet can advance the understanding
of complex structured video and benefit more areas such
as short-video creation, personalized advertisement, digital
instruction, and education. Our dataset and code is pub-
licly available at https://github.com/NewsNet-
Benchmark/NewsNet .
| 1. Introduction
Temporal video segmentation is a critical problem in
video understanding, which is essential for many video ap-
plications such as video classification [10,15,37,58,59,62],
Table 1. Data comparison between the NewsNet and other related datasets. The NewsNet provides various multimodal data and hierarchical
temporal segmentation annotations. Doc: documentary, Ads: advertisement. ( Please refer to our project page for more details. )
Dataset | # Video | Duration (hours) | Modality | # Annotation(s) per Video: Topic, Story, Scene, Event | Source
A VS [75] 197 - Visual - - - 14.2 Ads
BBC [6] 11 9 Visual - - 49.7 - Doc
OVSD [48] 21 10 Visual - - 28.9 - Generic
Kinetics-GEBD [51] 54,691 152 Audio + Visual - - - 4.9 Action
MovieNet [23] † 1,100 2174 Text + Audio + Visual - - 66.0 849.1 Movie
RAI [7] 10 - Visual - - - 98.7 News
TI-News [35] 477 244 Audio + Visual - 55.6 - 530.4 News
NewsNet (Ours) 1,000 946 Text + Audio + Visual 8.5 51.6 87.9 654.4 News
† The number of annotations for Scene and Shot is counted from MovieScene [47], which is a subset of MovieNet.
captioning [3, 34, 63, 71] and retrieval [5, 16, 31, 52]. Tem-
poral video segmentation aims to group successive video
frames into short segments along the temporal dimension.
With the explosive growth of long-form videos, it is desir-
able that temporal video segmentation can convert a video
into more meaningful segments for more efficient access to
the video. However, it is challenging to develop effective
temporal segmentation tools for long-form videos, since
this requires a comprehensive understanding of video struc-
ture, while long-form videos contain complex content.
Towards temporal video segmentation, existing works
explore shot, event, and scene segmentation tasks, respec-
tively. Shot segmentation [38, 55, 57] divides a video into
shots, where a shot consists of consecutive and visually
continuous frames captured by a camera without interrup-
tion [23]. Yet, shot segmentation only considers low-level
visual cues ( i.e. visual similarity), lacking semantic under-
standing. Instead, event segmentation [25, 51, 56] divides
a video by detecting the moments of changes such as ac-
tion/subject changes. To better capture the underlying struc-
ture of a video, recent works [12, 47, 65] introduce video
scene segmentation which segments a video into scene seg-
ments, each comprising successive shots semantically re-
lated to the same scene. Scene segmentation enables a
coarser and higher-level representation than shot segmen-
tation. However, compared to the rich content of mas-
sive long-form videos, scene/event is fine-grained and often
lacks a high-level summarization of video content, which is
insufficient for capturing the complex semantic structure of
many videos and briefly representing video content.
In this work, we first explore how to comprehensively
represent the complex structure of a long-form video for
temporal video segmentation. Humans can hierarchically
divide a video into segments of different granularities ac-
cording to multi-level semantic information ( e.g.scene and
topic), from the perspective of cognitive science. Natu-
ral language processing researchers have widely exploredtopic-level understanding [29, 41] for summarizing docu-
ments [8, 42], while little effort has been devoted to long-
form videos. Inspired by these observations, besides scene
and event, we propose to introduce two higher-level seman-
tics ( i.e. story and topic) into temporal video segmentation,
to provide a brief and semantic structure representation.
As a result, such hierarchical and multi-level understand-
ing brings about scalable video structure representation for
temporal video segmentation on long-form videos. That
is, a long-form video can be split into finer segments with
lower-level semantics ( e.g. scene), but also can be summa-
rized into coarser ones yet with higher-level semantics ( e.g.
topic) by recursively grouping finer segments, which com-
prehensively represents video structures from coarse to fine.
However, the community lacks high-quality datasets to
conduct this research. In particular, as shown in Table 1,
most datasets only provide temporal structure annotations
with regard to events or scenes. TI-News [35] and Movi-
eScene [47] provide two levels of annotations, but these
datasets lack topic-level ones.
To effectively break this limitation, we build a novel
large-scale dataset for hierarchical temporal segmentation,
named NewsNet. The unique properties of our NewsNet
introduce many advantages. First, it is among the largest
datasets in the news domain. We collect over 900 hours of
videos from 20 mainstream news platforms. It has a highly
diverse distribution of data. Second, we carefully anno-
tated it frame-by-frame with 4 hierarchical levels to ensure
its quality can meet our needs. Third, it is multimodal, in-
cluding textual, visual, and audio information. Due to the
nature of the news, the alignment across modalities is ac-
curate, which makes multimodal joint training of models
feasible. Finally, the videos in NewsNet provide a complete
understanding of public events. Compared with other video
datasets [4,23,26], it introduces more objective open-world
knowledge ( e.g., news introduction) while including sub-
jective factual commentary ( e.g., host comments on news
events), making it more amenable to real-life application.
Based on NewsNet, we empirically highlight two
promising directions for long-span temporal segmentation:
1) Infusing Multi-Modality knowledge can significantly im-
prove the performance of long-form temporal segmentation;
2) Although story- and topic-level segmentation is challeng-
ing, it can benefit from hierarchical modeling with the
event- and scene-level segmentation tasks.
The main contributions of this paper are as follows:
• We propose a novel large-scale dataset NewsNet for
long-form video structure understanding. This dataset
is derived from 900+ hours of video and annotated
with 4 hierarchical levels of semantics.
• NewsNet provides dense annotations and multi-modal
information, promoting diverse benchmarks: sep-
arate/hierarchical temporal video segmentation in
scene/story/topic levels, as well as other common tasks
like classification, video localization/grounding, and
highlight detection.
• We formulate a new benchmark, i.e.,hierarchical mod-
eling in the temporal segmentation task, which needs a
single model to predict segments of multiple hierarchi-
cal levels. Based on the empirical study, we bring in-
sights into how hierarchical modeling potentially ben-
efits the temporal video segmentation task, which was
almost never discussed.
|
Williams_Black-Box_Sparse_Adversarial_Attack_via_Multi-Objective_Optimisation_CVPR_2023 | Abstract
Deep neural networks (DNNs) are susceptible to adver-
sarial images, raising concerns about their reliability in
safety-critical tasks. Sparse adversarial attacks, which limit
the number of modified pixels, have shown to be highly
effective in causing DNNs to misclassify. However, exist-
ing methods often struggle to simultaneously minimize the
number of modified pixels and the size of the modifica-
tions, often requiring a large number of queries and as-
suming unrestricted access to the targeted DNN. In con-
trast, other methods that limit the number of modified pixels
often permit unbounded modifications, making them easily
detectable. To address these limitations, we propose a novel
multi-objective sparse attack algorithm that efficiently min-
imizes the number of modified pixels and their size during
the attack process. Our algorithm draws inspiration from
evolutionary computation and incorporates a mechanism
for prioritizing objectives that aligns with an attacker’s
goals. Our approach outperforms existing sparse attacks
on CIFAR-10 and ImageNet trained DNN classifiers while
requiring only a small query budget, attaining competitive
attack success rates while perturbing fewer pixels. Over-
all, our proposed attack algorithm provides a solution to
the limitations of current sparse attack methods by jointly
minimizing the number of modified pixels and their size.
Our results demonstrate the effectiveness of our approach
in restricted scenarios, highlighting its potential to enhance
DNN security.
| 1. Introduction
Although deep neural networks (DNNs) have made im-
pressive strides in computer vision tasks [17, 21, 22, 24, 28,
37, 38, 48], recent work has shown that small optimized
perturbations to input images can cause DNNs to misclas-
sify [1, 18, 26, 31, 34, 42]. As adversarial images have been
found to exist in the physical world [26, 27, 43], particu-
Proposed Method
SSIM = 0 :9977Sparse-RS
SSIM = 0 :8667
SSIM = 0 :9906
SSIM = 0 :9632Figure 1. This illustration shows adversarial images and their cor-
responding perturbations generated by two different algorithms,
the proposed method and Sparse-RS [10], both attacking an adver-
sarially trained CIFAR-10 [19] (top) and ImageNet [35] (bottom).
While both images are adversarial, the perturbation generated by
the Sparse-RS algorithm visibly distorts the image, whereas the
proposed method’s adversarial image remains more similar to the
original. This similarity is demonstrated by calculating the struc-
tural similarity (SSIM) between the adversarial images and the
original. The effectiveness of the Sparse-RS algorithm is there-
fore questionable due to the significant distortion it causes.
lar concern has been expressed on their impact on security-
critical applications [1]. To address this issue, previous
works have emphasized the importance of generating strong
adversarial images [3]. As a result, significant effort has
been devoted to developing effective attack methods that
can construct perturbations capable of causing DNN classi-
fiers to misclassify images while preserving their semantic
content.
Most adversarial attack methods in the literature formu-
late the attack as an optimization problem where a loss
function is minimized to achieve the desired misclassifica-
tion of an image. While many attack methods in the lit-
erature constrain the adversarial perturbation by its l2or
l∞norm [2, 4, 6–8, 18, 23, 26, 29, 36, 42, 45, 46] and allow
all pixels of an image to be perturbed, there is also a need
to develop sparse attack methods that constrain adversarial
perturbations by their l0norm [10,11,15,16,30,41,44,47].
Such adversarial images have been found to also exist in
the physical world and have shown to be as effective as the
more traditional l2orl∞-constrained adversarial images.
Numerous sparse attack methods have been proposed to
address both the white-box [11, 15, 16, 44, 47] and black-
box scenarios [10, 30, 41]. In the white-box scenario, the
attacker has full access to a DNN’s information, while the
black-box scenario assumes the attack only has access to
the outputted class probabilities. In this work, we focus
on the black-box scenario. While existing attack meth-
ods have shown success in generating adversarial images,
they often struggle to handle the trade-off between optimiz-
ing the loss function and minimizing the perturbations lp
norms, where p= 0,1,2or∞. Several methods only con-
strain the number of perturbed pixels [10, 11, 41], allow-
ing the size of the perturbation to be unbounded. Despite
their efficiency, the unbounded nature of the generated per-
turbations result in obvious distortions of the original im-
age, as shown in Fig. 1. On the other hand, other meth-
ods allow the l0norm to be minimized along with the loss
function [15,16,47], while constraining the perturbation by
itsl2orl∞norm. These methods either generate an ad-
versarial perturbation and then reduce its l0norm [47] or
add an l0norm penalty term to the optimized loss func-
tion [15, 16]. Despite their good performance, these meth-
ods only address the white-box scenario and assume access
to a large number of DNN queries, limiting their applicabil-
ity to query-limited scenarios [23]. Therefore, by not prop-
erly handling this trade-off, existing methods are limited in
their applicability and effectiveness in real-world scenarios.
Within the evolutionary computation field a classic ap-
proach to handling conflicting objectives is the use of a
domination relation [13] which characterises the trade-off
between objectives and is used to compare solutions within
a population-based evolutionary algorithm. While the orig-
inal approach assigns equal weight to each objective, the
domination mechanism can be adapted to reflect the at-
tacker’s preferences. In this work, our goal is to generate
sparse adversarial perturbations with low l0andl2norms in
an efficient manner. Our contributions can be summarised
as follows:• To address the challenge of generating sparse adver-
sarial perturbations, we formulate the problem as a bi-
objective optimization problem. By constraining the
perturbation to a set of discrete values, we show that
minimizing of the l2norm also minimizes the l0norm.
• We propose a new dominance relation to compare so-
lutions that gives first priority to minimizing the loss
function and then to minimizing its l2norm.
• To generate adversarial perturbations, we propose a
population-based heuristic attack method that utilizes
two distributions to generate new solutions. These dis-
tributions mimic the crossover and mutation operators
commonly used in evolutionary computation. By sam-
pling from these distributions, our approach explores
the search space in an efficient and effective manner.
• To evaluate the effectiveness of our approach, we con-
duct attacks on DNN classifiers trained on the CIFAR-
10 and ImageNet datasets, using a low-query budget
and considering both targeted and non-targeted attack
scenarios. Our empirical results demonstrate that our
proposed method outperforms state-of-the-art white-
box sparse attacks, as well as the black-box Sparse-RS
attack method [10], in terms of success rate and num-
ber of perturbed pixels.
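(A brief worked note on the first contribution above, added here for clarity under the assumption that the discrete value set contains a single non-zero magnitude: if every entry of the perturbation $\delta$ is restricted to $\{-\epsilon, 0, +\epsilon\}$, then $\|\delta\|_2^2 = \sum_i \delta_i^2 = \epsilon^2 \|\delta\|_0$, so a perturbation with smaller $\ell_2$ norm necessarily modifies fewer pixels, and minimizing the $\ell_2$ norm under this constraint is equivalent to minimizing the $\ell_0$ norm. For a discrete set with several magnitudes the equality relaxes to two-sided bounds; the paper's exact argument should be taken from its main text.)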
|
Xiao_Masked_Images_Are_Counterfactual_Samples_for_Robust_Fine-Tuning_CVPR_2023 | Abstract
Deep learning models are challenged by the distribu-
tion shift between the training data and test data. Re-
cently, the large models pre-trained on diverse data have
demonstrated unprecedented robustness to various distri-
bution shifts. However, fine-tuning these models can lead
to a trade-off between in-distribution (ID) performance and
out-of-distribution (OOD) robustness. Existing methods for
tackling this trade-off do not explicitly address the OOD ro-
bustness problem. In this paper, based on causal analysis
of the aforementioned problems, we propose a novel fine-
tuning method, which uses masked images as counterfactual
samples that help improve the robustness of the fine-tuning
model. Specifically, we mask either the semantics-related or
semantics-unrelated patches of the images based on class
activation map to break the spurious correlation, and refill
the masked patches with patches from other images. The
resulting counterfactual samples are used in feature-based
distillation with the pre-trained model. Extensive experi-
ments verify that regularizing the fine-tuning with the pro-
posed masked images can achieve a better trade-off between
ID and OOD performance, surpassing previous methods on
the OOD performance. Our code is available at https:
//github.com/Coxy7/robust-finetuning .
| 1. Introduction
Deep learning has made impressive advances in various
tasks on computer vision. Despite the remarkable perfor-
mance achieved on benchmark datasets, deep models are
often challenged by the distribution shift between the train-
ing data and test data [5,15,16,33]. It is commonly assumed
that the training and test samples follow the same distribu-
tion, which may not hold in real-world applications due to
the unpredictable change of lighting conditions, viewpoints,
backgrounds, etc. Although there are attempts to improve
the robustness of deep models to the distribution shift (or
OOD robustness), it is still rather under-explored [29, 38].
*Corresponding author.
Figure 1. Illustration of our work. Vanilla fine-tuned models tend to learn spurious correlations that degrade the OOD robustness. To tackle this issue, our model learns from the pre-trained model on the counterfactual CAM-based masked images. (Figure: given a CAM-based masked image, the pre-trained robust model, a vanilla fine-tuned model, and our robust fine-tuned model are asked "What is the semantics?"; the answers shown in the figure are "Red admiral butterfly.", "A bunch of flowers.", and "Flowers.".)
Recently, large-scale models pre-trained on diverse data
have demonstrated unprecedented robustness to various dis-
tribution shifts [8, 19, 32]. Fine-tuning these pre-trained
models on downstream tasks can be a promising approach
to building robust models for different applications. How-
ever, it is found that while fine-tuning improves the per-
formance on in-distribution (ID) data, it may reduce that
on out-of-distribution (OOD) data [23, 43]. To tackle this
trade-off, several methods [23, 42, 43] have been proposed
to improve both ID and OOD performance in fine-tuning.
However, they do not explicitly address the OOD robust-
ness problem; instead, they implicitly preserve the robust-
ness of the pre-trained model by constraining the distortion
of pre-trained weights or using model ensembles.
In this paper, we revisit the issue of robustness degrada-
tion in fine-tuning from a causal perspective. A large-scale
pre-trained model exhibits, to some extent, properties associated with causality and stays robust to OOD samples [41]. However, when fine-tuning on downstream tasks, a majority of the model parameters tend to be adjusted for the downstream task, owing to the highly entangled representation of
images, arguably destructive to the generalizable knowl-
edge [20, 35]. In contrast, distribution shifts are usually
sparse in the underlying causal factorization of the data gen-
eration process [7, 35]. In this low-dimensional case, if we
know which variables vary with different data distributions
in this factorization ( i.e., the non-stationary factors), we can
achieve the OOD robustness by simply excluding their in-
fluence on the final predictions of the model.
Specifically, we consider a Structural Causal Model
(SCM) [31] for the object-centric image generation process,
as depicted in Fig. 2. In this SCM, images are generated
according to a non-stationary domain-relevant factor and
a stationary semantic factor. Between them is a spurious
correlation caused by a hidden non-stationary confounder
that influences how the domain-relevant factor changes with
the semantic one. To retain the OOD robustness, a fine-
tuning model should avoid mapping non-stationary domain-
relevant features to the predicted semantics.
To this end, we propose to fine-tune the models with
masked images, which serve as counterfactual samples
breaking the spurious correlation. Training on these sam-
ples helps preserve the stationary and generalizable knowl-
edge of the pre-trained model. Concretely, we either mask
the patches that contribute most to the label ( i.e., the main
object) or mask those with the least contribution ( e.g., the
context), which can be implemented based on class activa-
tion map (CAM) [9,46]. Such image masking forms a ma-
nipulation of a factual image and produces a counterfactual
sample. Since the pre-trained model can better disentan-
gle invariant features across domains, we require the fine-
tuning model to learn from the pre-trained model on these
counterfactual samples, as illustrated in Fig. 1. Further-
more, we argue that simply dropping the masked patches
may be insufficient to alleviate the risk of fitting spurious
correlations, and we propose to refill the masked patches
with those from other images.
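(A rough sketch of how such counterfactual samples can be built is given below: patches are ranked by their class-activation score, either the highest-scoring (object) or lowest-scoring (context) patches are selected, and the selected positions are refilled with the corresponding patches of another image. Patch size, mask ratio, and function names are illustrative assumptions, not the paper's exact recipe.)

```python
import torch

def cam_counterfactual(img, cam, filler_img, patch=16, ratio=0.3, mask_object=True):
    """Build a counterfactual image via CAM-guided patch masking and refilling.

    img, filler_img : (C, H, W) image tensors; cam : (H, W) class activation map.
    mask_object=True replaces the most class-relevant patches; False replaces the context.
    """
    C, H, W = img.shape
    gh, gw = H // patch, W // patch
    # per-patch CAM score = mean activation inside each patch
    scores = cam[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch).mean(dim=(1, 3))
    order = scores.flatten().argsort(descending=mask_object)
    chosen = order[: int(ratio * gh * gw)]                  # patches to replace

    out = img.clone()
    for p in chosen.tolist():
        r, c = divmod(p, gw)
        ys, xs = r * patch, c * patch
        out[:, ys:ys + patch, xs:xs + patch] = filler_img[:, ys:ys + patch, xs:xs + patch]
    return out

# During fine-tuning, a feature-distillation loss between the fine-tuned model and the
# frozen pre-trained model would be computed on such counterfactual images.
```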
We study different combinations of masking strategies
(e.g., masking the object or the context) and refilling strate-
gies ( e.g., filling with patches from single or multiple im-
ages). Experimental results suggest that most of the strate-
gies are applicable to the construction of counterfactual
samples that help improve the robustness in fine-tuning,
while masking the object generally achieves the best ro-
bustness. Compared with existing methods [23, 42, 43] on
fine-tuning CLIP [32] models, our approach achieves bet-
ter average accuracy on various OOD datasets without re-
lying on model ensembles or weight constraints. We also
find that taking the weight-space ensemble of the zero-shot
model and our fine-tuned model, following WiSE-FT [43],
hardly improves the trade-off between ID and OOD accu-
racy, which contradicts previous observations and implies
that our approach may produce essentially different models
in comparison with conventional fine-tuning. |
Wang_Non-Line-of-Sight_Imaging_With_Signal_Superresolution_Network_CVPR_2023 | Abstract
Non-line-of-sight (NLOS) imaging aims at reconstruct-
ing the location, shape, albedo, and surface normal of the
hidden object around the corner with measured transient
data. Due to its strong potential in various fields, it has
drawn much attention in recent years. However, long ex-
posure time is not always available for applications such
as auto-driving, which hinders the practical use of NLOS
imaging. Although scanning fewer points can reduce the
total measurement time, it also brings the problem of imag-
ing quality degradation. This paper proposes a general
learning-based pipeline for increasing imaging quality with
only a few scanning points. We tailor a neural network
to learn the operator that recovers a high spatial resolu-
tion signal. Experiments on synthetic and measured data
indicate that the proposed method provides faithful recon-
structions of the hidden scene under both confocal and non-
confocal settings. Compared with original measurements,
the acquisition of our approach is 16 times faster while
maintaining similar reconstruction quality. Besides, the
proposed pipeline can be applied directly to existing opti-
cal systems and imaging algorithms as a plug-in-and-play
module. We believe the proposed pipeline is powerful in
increasing the frame rate in NLOS video imaging.
| 1. Introduction
The non-line-of-sight (NLOS) imaging problem usually em-
ploys a high temporal resolution optical system to recover
the hidden object around the corner. As shown in Fig. 1a,
photons emitted by the laser are collected by the detector
after three diffuse reflections: the reflection at the visible
wall, the reflection at the hidden object, and the reflection
at the visible wall again. Scanning a region on the visible
wall can get measured histogram data to recover the hidden
scene.
Due to its potential applications in various fields, such
as auto-driving, disaster relief, and remote sensing, the
NLOS imaging problem has become an emerging field
since it was first proposed by Kirmani et al. [17] in 2009.
Figure 1. A typical NLOS scenario and reconstruction results. (a) An illustration of a typical NLOS scenario. (b) An illustration of the scanning points on the visible wall. Using the proposed pipeline, only the blue circle points need to be illuminated, and the signal at the yellow square points can be recovered with the proposed pipeline. (c) Comparisons of the reconstruction results of the pyramid with different signals. The maximum intensity projection of the albedo values along the depth direction is shown in the upper left corner (GT). The results are reconstructed by the original signal with spatial resolution 32×32 (Original), the sub-sampled signal with spatial resolution 8×8 (Low), and the signal recovered by the proposed network with spatial resolution 32×32 (Ours).
Many methods have been proposed to improve the prac-
Figure 2. Flowchart of the proposed pipeline. The high resolution signal is recovered from the low resolution signal using a neural network.
The hidden scene is then reconstructed using state-of-the-art imaging algorithms designed for high resolution signal.
ticability [1, 3, 13, 14, 34, 35, 39] or reconstruction quality
[10, 11, 26, 41]. A back-projection method with Laplacian
of Gaussian filter (LOG-BP) [18] is introduced to improve
the reconstruction quality of the back-projection method
[38]. In 2018, O’Toole et al. [31] first apply the light-cone-
transform (LCT) method to image the hidden object with a
confocal scanning mode. Young et al. [45] then propose the
directional light-cone-transform (D-LCT) method, which
simultaneously recovers the albedo and surface normal of
the hidden object. From the perspective of wave charac-
teristics, Lindell et al . [22] introduce the fast frequency-
wavenumber migration (F-K) method to NLOS. Another re-
markable imaging method proposed by Liu et al. [25] em-
ploys the phasor field model [9, 33] and provides an effi-
cient solution for fast NLOS imaging under non-confocal
scenario [24]. Further |
Xie_MAESTER_Masked_Autoencoder_Guided_Segmentation_at_Pixel_Resolution_for_Accurate_CVPR_2023 | Abstract
Accurate segmentation of cellular images remains an
elusive task due to the intrinsic variability in morphology
of biological structures. Complete manual segmentation is
unfeasible for large datasets, and while supervised meth-
ods have been proposed to automate segmentation, they
often rely on manually generated ground truths which are
especially challenging and time consuming to generate in
biology due to the requirement of domain expertise. Fur-
thermore, these methods have limited generalization capac-
ity, requiring additional manual labels to be generated for
each dataset and use case. We introduce MAESTER (Masked
AutoEncoder guided SegmenTation at pixEl Resolution), a
self-supervised method for accurate, subcellular structure
segmentation at pixel resolution. MAESTER treats segmen-
tation as a representation learning and clustering problem.
Specifically, MAESTER learns semantically meaningful to-
ken representations of multi-pixel image patches while si-
multaneously maintaining a sufficiently large field of view
for contextual learning. We also develop a cover-and-stride
inference strategy to achieve pixel-level subcellular struc-
ture segmentation. We evaluated MAESTER on a publicly
available volumetric electron microscopy (VEM) dataset of
primary mouse pancreatic islets βcells and achieved up-
wards of 29.1%improvement over state-of-the-art under
the same evaluation criteria. Furthermore, our results are
competitive against supervised methods trained on the same
tasks, closing the gap between self-supervised and super-
vised approaches. MAESTER shows promise for alleviating
the critical bottleneck of ground truth generation for imaging
related data analysis and thereby greatly increasing the rate
of biological discovery.
Code available at https : / / github . com /
bowang-lab/MAESTER
*Equal contribution
†Project lead
‡Co-senior author
| 1. Introduction
Imaging is widely used in biology to study the organi-
zation, morphology, and function of cells and subcellular
structures [13, 26, 28, 31, 32]. Segmentation of structures
and objects of interest in the acquired images is often crit-
ical for downstream analysis. Recent innovations in high
throughput imaging technology enables larger scale datasets
to be collected more quickly and cost efficiently [23, 24, 32].
Scalable and accurate segmentation hence becomes a crucial
bottleneck to overcome. For example, volumetric electron
microscopy (VEM) can generate terabytes of imaging data
in a single run, enabling biologists to uncover ultrastructural
features of cells at unprecedented resolution and scale in
3D [24]. Manual segmentation of such datasets are unfea-
sible and especially when substantial domain knowledge is
required for annotation of structures captured in the imaging
volume.
With recent advancements in the field of machine learn-
ing, automatic methods involving convolutional neural net-
works (CNNs) have been developed to aid the segmentation
process to great success [8, 18]. However, these methods
often require extensive manual labels to train in the first
place. Furthermore, supervised models often exhibit lim-
ited generalization capacity, necessitating additional ground
truth generation efforts for each new dataset or use case.
Presently, there is a dire need for a self-supervised segmenta-
tion method to bypass the initial bottleneck of manual label
generation, particularly when the cost and time of acquir-
ing training supervision far exceeds the capacity to generate
unlabelled data.
In addition to being self-supervised, the method needs to
incorporate a few inductive biases to tackle the challenges of
biological image segmentation. First, the texture of objects
from the same class often remains consistent, despite great
variability in shapes and sizes that cellular structures can
exhibit. Therefore, the model needs to learn semantically
meaningful representation of small image patches belonging
to each structure of interest and distinguish between different
textures.
Figure 1. MAESTER achieves self-supervised representation learning and segmentation through: (a) patchifying large sample from EM
imaging (b) learning patch-level representation through predicting randomly masked region, (c) inferring the representation for the center
voxel of each patch, (d) showing the 3D-rendered volume of our MAESTER generated segmentation, (e) demonstrating cover-and-stride
strategy.
Second, the model needs to be capable of producing
features that precisely correspond to small regions in the
original image. Not only will this increase the resolution of
the resulting segmentation, it will also allow the method to
take advantage of the locality assumption, which posits that
small groups of adjacent pixels are more likely to belong
to the same class. Third, the model needs to be context
aware. While distinguishing between multi-pixel patches of
images alone can achieve good segmentation results [5], we hypothesize that including a greater field of view (FOV) as
context is crucial for better representation learning for the
purpose of subcellular structure segmentation.
Transformer based architectures have seen recent suc-
cesses in computer vision [3, 15, 29]. The token-wise rep-
resentation of image patches offers a natural way to inject
inductive bias into our self-supervised segmentation model.
We introduce MAESTER (Masked AutoEncoder guided Seg-
menTation at pixEl Resolution), a self-supervised method
that can achieve accurate, pixel-level segmentation of sub-
cellular structures. MAESTER works in two stages. During
training, MAESTER takes as input a large FOV ( F×F
pixels) containing ample local context which is further bro-
ken down into multi-pixel patches of size P×Ppixels.
The choice of Pis sufficiently small to allow each patch
to be treated as a single class under the locality assump-
tion while achieving higher spatial precision. The attention
mechanism of a vision transformer (ViT) [3] encoder then
allows information sharing between nearby patches. Fur-
thermore, taking inspiration from the Masked Autoencoder
(MAE) [6] learning paradigm, we incorporate the surrogate
task of multi-pixel patch masking and reconstruction via a
light weight ViT decoder for each sampled FOV of a given
image to simultaneously learn semantically meaningful to-
ken representations of all patches in the FOV . During infer-
ence, we deploy the trained encoder to generate millions
of representations of unlabelled image patches via a novel
cover-and-stride inference strategy. These representations
are then clustered to produce a desired number of classes for
self-supervised segmentation, leading to the final segmenta-
tion of the given VEM dataset.
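The two-stage recipe above can be summarized in a short sketch. The snippet below is a minimal PyTorch illustration of the training objective (MAE-style masked reconstruction of P×P patches inside an F×F field of view) and of extracting per-patch embeddings for later clustering; all layer sizes, the mask ratio, the single-channel input and the light linear decoder are illustrative assumptions rather than the released MAESTER implementation.
```python
# Minimal sketch of MAE-style masked patch reconstruction for texture
# representation learning (training), plus per-patch embeddings for
# clustering (inference). Hyperparameters are placeholders, not MAESTER's.
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    def __init__(self, fov=64, patch=4, dim=128, depth=4, heads=4, mask_ratio=0.6):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.n = (fov // patch) ** 2                          # patches per FOV
        self.embed = nn.Conv2d(1, dim, patch, stride=patch)   # patchify + project
        self.pos = nn.Parameter(torch.zeros(1, self.n, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.decoder = nn.Linear(dim, patch * patch)          # light-weight decoder
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def tokens(self, x):                                      # x: (B, 1, F, F)
        return self.embed(x).flatten(2).transpose(1, 2) + self.pos

    def forward(self, x):                                     # masked-reconstruction loss
        t = self.tokens(x)
        B, N, D = t.shape
        m = torch.rand(B, N, device=t.device) < self.mask_ratio
        t = torch.where(m[..., None], self.mask_token.expand(B, N, D), t)
        rec = self.decoder(self.encoder(t))                   # (B, N, P*P)
        tgt = x.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        tgt = tgt.reshape(B, N, -1)
        return ((rec - tgt) ** 2)[m].mean()                   # error on masked patches only

    @torch.no_grad()
    def patch_embeddings(self, x):                            # used at inference time
        return self.encoder(self.tokens(x))                   # (B, N, dim)
```
At inference, embeddings gathered over the volume with the cover-and-stride scheme would be grouped by an off-the-shelf clustering step (e.g. k-means) into the desired number of classes, with each cluster index assigned to the corresponding center voxel.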
To our knowledge, we are the first to use the transformer
architecture to incorporate the inductive biases needed for
self-supervised subcellular structure segmentation. We also
repurposed and optimized the MAE learning paradigm for
generating semantically relevant token representations of
multi-pixel sized image patches for classification into bio-
logically concordant clusters for segmentation rather than
for pretraining or representation learning at the image level.
Lastly, we introduce a cover-and-stride inference strategy
to achieve pixel level segmentation of the given biological
images. We tested MAESTER on the betaSeg dataset [20],
consisting of primary mouse pancreatic islets βcells and
yielded upwards of 29.1% increase in performance compared
to prior state-of-the-art [5]. We also benchmarked against
Segmenter [27] and vanilla ViT [3], two supervised seg-
mentation models with access to all ground truth labels in
addition to the raw images used to train MAESTER. We find
MAESTER achieved competitive results for the predomi-
nant classes, closing the gap between supervised and self-
supervised segmentation models. We believe MAESTER has
the potential to drastically speed up the experimental cycle
of biological imaging experiments by alleviating the critical
bottleneck of manual label generation and greatly increasing
the rate of scientific inquiry in cell biology.
|
Xu_Constructing_Deep_Spiking_Neural_Networks_From_Artificial_Neural_Networks_With_CVPR_2023 | Abstract
Spiking neural networks (SNNs) are well-known as
brain-inspired models with high computing efficiency, due
to a key component that they utilize spikes as information
units, close to the biological neural systems. Although spik-
ing based models are energy efficient by taking advantage of
discrete spike signals, their performance is limited by cur-
rent network structures and their training methods. As dis-
crete signals, typical SNNs cannot apply the gradient de-
scent rules directly into parameter adjustment as artificial
neural networks (ANNs). Aiming at this limitation, here
we propose a novel method of constructing deep SNN mod-
els with knowledge distillation (KD) that uses ANN as the
teacher model and SNN as the student model. Through the
ANN-SNN joint training algorithm, the student SNN model
can learn rich feature information from the teacher ANN
model through the KD method, yet it avoids training SNN
from scratch when communicating with non-differentiable
spikes. Our method can not only build a more efficient deep
spiking structure feasibly and reasonably but use few time
steps to train the whole model compared to direct train-
ing or ANN to SNN methods. More importantly, it has a
superb ability of noise immunity for various types of ar-
tificial noises and natural signals. The proposed novel
method provides efficient ways to improve the performance
of SNN through constructing deeper structures in a high-
throughput fashion, with potential usage for light and effi-
cient brain-inspired computing of practical scenarios.
| 1. Introduction
By referring to the information processing mechanism
and structural characteristics of the biological nervous
system, spiking neural networks (SNNs) are remarkably
good at computational intelligence tasks [26] and suitable
for processing unstructured information, with stronger au-
tonomous learning capabilities and ultra-low power con-
sumption [2, 7, 23, 37].
*Corresponding authors: jrshen@zju.edu.cn and gpan@zju.edu.cn.
Although various engineering effort has been made in
this area, such type of biological information processing
system still underperforms artificial systems (artificial neu-
ral networks, ANNs) in some common computer tasks,
such as image classification. One possible reason for this
is that typical SNNs lack deep hierarchical network struc-
tures compared to those from ANNs. Due to the non-
differentiable spikes, typical SNNs are restricted to global
training rules which lead to various of current SNNs being
just shallow fully-connected layer based [28, 34]. Limited
by training rules and structures, although SNNs can signifi-
cantly handle spatial-temporal data efficiently, it is difficult
to train a deep SNN directly as using backpropagation (BP)
in ANNs do [22].
By drawing on some key tricks from ANNs, some stud-
ies want to improve image classification accuracy which
is led by SNNs by combing structure and learning rules
those has been proven to be effective in improving model
performance in ANNs. [3, 6] proposed methods to con-
vert ANNs to SNNs by controlling the output and network
structure of ANN and SNN to be as consistent as possi-
ble. Through this way, although they can build effective
deep SNNs, these conversion methods suffer long training
time and lack some intermediate information during ANN
training period. [12,19] tried to adopt the threshold of spik-
ing neurons to make them suitable for using gradient sur-
rogate method, these models adopted too complex neuron
models to get good performance and take up large computa-
tional memory and cost. [8] did interesting work on directly
converting an adjusted ANN to an SNN using the theoret-
ical equivalence between activation and firing rate, which
achieves superior performance. [29] constructed ANN-SNN
hybrid models to improve the feature extraction, but these
hybrid models suffered a difficult training process.
Aiming at constructing efficient SNNs, this paper pro-
posed a brand-new method using knowledge distillation
(KD) to let student models (SNNs) absorb rich informa-
tion from teacher models (ANNs). KD [4] can transfer
the knowledge of one network to another network, two net-
works can be homogeneous or heterogeneous. This is done
by training a teacher network and then using the output of
the teacher network and the real tag of the data to train the
student network. KD can be used to transform a network
from a large network to a small network, retaining perfor-
mance close to that of a large network, or to transfer knowl-
edge learned from multiple networks into a single network,
making the performance of a single network close to the
results of Ensemble.
Under the guidance of teacher models, the wanted SNNs
model can be trained in a layer-wise manner [20]. Unlike
traditional ANN-SNN conversion requires the same model
structure of two models, the proposed KD conversion can
make a heterogeneous network structure of them, for ex-
ample, if the teacher ANN is larger and deeper, the student
SNN can be smaller and shallower. This kind of KD conver-
sion provides sufficient flexibility to construct any arbitrary
SNN.
In this paper, we propose a novel KD based training
method to construct deep SNNs which avoids restricting
corresponding network structures between ANN and SNN
during the training period. Through a unified ANN-SNN
loss function, we can construct the SNN model from well-
trained ANN, accelerate the training time and save mem-
ory usage. We adopt supervised gradient surrogate method
as basic student SNN training rules. We evaluated the
proposed method on several image classification datasets
(MNIST, CIFAR10, and CIFAR100) and their noisy vari-
ations. Experimental results showed that the proposed
method can get pretty good image classification perfor-
mance with a light SNN model. The main contributions
are as follows:
• This paper proposed a KD based conversion method to
construct deep SNNs from ANNs, which only takes
less training latency and allows the structure of the
SNN and ANN to be heterogeneous.
• Through the proposed ANN-SNN joint training
method, the student SNN model can absorb more in-
formation from ANN during training method, com-
pared to offline ANN-SNN conversion, the proposed
method significantly helped to improve the perfor-
mance of the student SNN model.
• We demonstrate the efficiency and effectiveness of the
proposed distilling SNN method through evaluations
of several datasets and noisy ones. Experimental re-
sults show that we can construct a more reasonable SNN which can achieve state-of-the-art performance
on experimental datasets with less training latency and
show better anti-noise ability.
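The joint ANN–SNN training described above can be illustrated with a hedged sketch of a knowledge-distillation objective: cross-entropy on the hard labels plus a temperature-softened KL term against the frozen teacher's logits, applied to the student SNN's rate-coded outputs. The temperature, weighting and the assumption that the SNN exposes firing-rate logits are illustrative choices, not necessarily the paper's exact loss.
```python
# Hedged sketch of an ANN-teacher / SNN-student distillation loss.
import torch
import torch.nn.functional as F

def ann_snn_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """student_logits: (B, C) rate-coded logits from the SNN (e.g. spike counts
    averaged over the simulation time steps); teacher_logits: (B, C) from the
    frozen, pre-trained ANN."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                         # usual temperature scaling of the KD term
    return alpha * soft + (1.0 - alpha) * hard

# In a training loop, surrogate gradients make the spiking student
# differentiable, so loss.backward() updates only the SNN parameters, e.g.:
# loss = ann_snn_kd_loss(snn_outputs.mean(dim=0), ann(images), targets)
```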
|
Wang_Pixels_Regions_and_Objects_Multiple_Enhancement_for_Salient_Object_Detection_CVPR_2023 | Abstract
Salient object detection (SOD) aims to mimic the human
visual system (HVS) and cognition mechanisms to identify
and segment salient objects. However, due to the complex-
ity of these mechanisms, current methods are not perfect.
Accuracy and robustness need to be further improved, par-
ticularly in complex scenes with multiple objects and back-
ground clutter. To address this issue, we propose a novel
approach called Multiple Enhancement Network (MENet)
that adopts the boundary sensibility, content integrity, it-
erative refinement, and frequency decomposition mecha-
nisms of HVS. A multi-level hybrid loss is firstly designed
to guide the network to learn pixel-level, region-level, and
object-level features. A flexible multiscale feature enhance-
ment module (ME-Module) is then designed to gradually
aggregate and refine global or detailed features by chang-
ing the size order of the input feature sequence. An iter-
ative training strategy is used to enhance boundary fea-
tures and adaptive features in the dual-branch decoder of
MENet. Comprehensive evaluations on six challenging
benchmark datasets show that MENet achieves state-of-the-
art results. Both the codes and results are publicly available
at https://github.com/yiwangtz/MENet.
*The corresponding authors
| 1. Introduction
Salient object detection (SOD) aims to identify the most
visually conspicuous regions in an image that are consis-
tent with the human visual system (HVS) and cognition
mechanisms [9,13,40]. SOD can eliminate redundant infor-
mation and improve computational performance for many
high-level computer vision tasks, such as action recognition
[4, 60], image segmentation [3, 33], image captioning [52],
object tracking [14], and video summary [58]. Fully con-
volutional networks (FCNs) [25] based SOD models have
been particularly effective at improving SOD performance
in recent years [40]. However, accurately segmenting com-
plex object boundaries remains a challenging task for SOD.
This is especially true when the geometry and/or boundaries
of these objects are complex, or when scenes are chaotic or
cluttered [9, 10], as shown in Fig. 1.
An intuitive solution for addressing this problem is to
explore the mechanisms of the human vision system (HVS)
[55] and some of which have been used to improve SOD
models, as described below. (i) A human tends to enhance
recognition by alternating between viewing the entire object
and the details of complex scenes, which has been utilized
for various visual tasks [16, 17, 22, 36]. (ii) HVS is sen-
sitive to both boundary/contour and structural information,
so dual-branch feature refinement structures have been de-
veloped to incorporate extra-edge information to enhance
salient feature learning [11, 22, 24, 43, 47, 53, 57].
Figure 1. Illustration of MAE (left part) and some visual results
(right part) for the proposed MENet with some recent state-of-
the-art SOD methods: EDN [45], AADFNet [57], SAC [16], and
ICON [59]. Please refer to Sec. 5 for detailed experimental set-
tings. The MENet model achieves the lowest MAE score with the
most precise and complete boundaries.
Some
structural similarity measurements (e.g., Structural Simi-
larity Index (SSIM) [41]) and regional similarity measure-
ments (e.g., Intersection over Union (IoU) [35] and Dice
[12]) are also adopted by SOD models [32, 42, 43, 47, 59]
in the loss functions. (iii) Human vision is indeed holistic
and continuous so that it perceives objects and scenes as or-
ganized wholes [18], which are composed of parts that are
meaningful and coherent in relation to each other. ICON
[59] proposes to improve the integrity from both macro-
and micro-level perspectives by enhancing integrity infor-
mation hidden in channels of features. EDN [45] employs
a powerful down-sampling technique to learn a global view
of the whole image effectively. (iv) According to the hu-
man visual spatial frequency model [30], an image can be
decomposed into or synthesized by high-spatial frequency
and low-spatial frequency parts. As a starting point in this
work, we intend to use the mechanisms outlined above to
further improve SOD performance for complex scenes.
In this work, we propose a multi-enhancement network
(MENet) that effectively integrates the above HVS mecha-
nisms in a U-Net-like [37] encoder-decoder framework to
produce more accurate SOD for complex scenes. Foremost,
MENet employs the image frequency decomposition idea to
design a two-stream feature learning decoder for boundaries
(high frequencies) and inner body regions (low frequen-
cies). This setting is different from the existing two-branch
(or edge-aware) methods [11, 22, 47, 51, 53, 56, 57] that use
one branch for the boundary and the other one for the entire
object, such as EGNet [53] and AFNet [11]. Particularly,
there is no interaction between the intermediate features of
the two branches of MENet, so it reduces the interference of
inaccurate boundary information with global features. This
is because boundary features need to be highly discrimi-
native against the background, while global features need
consistency and robustness. Although LDF [43] also learns
internal regional features in one branch, its detailed map and
body map cannot be computed accurately and efficiently for geometrically complex objects.
Then, we propose an iterative training strategy to pro-
gressively enhance features by alternately aggregating high-
and low-level features to mimic HVS bottom-up and top-
down refinement mechanisms. To produce high- and low-
level features flexibly, we design a multiscale feature en-
hancement module (ME-Module) as the core of each branch
by leveraging atrous spatial pyramid pooling (ASPP) [34]
and global-local attention [6].
In addition, we introduce the HVS holistic and continu-
ous mechanism to loss function design. We present a multi-
level hybrid loss, which evaluates the pixel-, region-, and
object-level similarities between predicted saliency maps
and ground-truth (GT) saliency maps. For pixel-level loss,
we also use Binary Cross Entropy (BCE) [7] loss to ensure
network accuracy and convergence speed. As for region-
level loss, we divide a saliency map into four sub-regions
of equal size and then calculate the sum of weighted re-
gional similarities through SSIM and IoU. Then, inspired
by SSIM and S-measure [5], an object-level loss is designed
by the contrast and distribution statistics of the foreground
between the GT map and the predicted map. A similar hy-
brid loss is reported in BASNet [32], but it uses a simple
combination of BCE, IoU, and SSIM for the whole saliency
map without partitioning regions. Following is a summary
of our contributions.
• We propose to leverage not only pixel-level but also
region-level and object-level similarity measures in
loss to increase prediction accuracy and integrity, and
then design a multi-level hybrid loss to implement this
proposal.
• We design a multiscale feature enhancement module
(ME-Module) to mimic HVS bottom-up and top-down
refinement mechanisms. ME-Module can gradually
propagate and produce comprehensive global or de-
tailed features by changing the size and order of the
input features.
• We propose a novel Multiple Enhancement Network
(MENet) for dealing with SOD in complex scenes by
integrating multiple HVS mechanisms into the net-
work structure and loss function. Specifically, a two-
branch decoder equipped with ME-Modules is de-
signed to incrementally refine the boundary and adap-
tive features by an iterative training strategy and the
proposed multilevel hybrid loss.
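To make the multi-level hybrid loss mentioned above more concrete, the following is a simplified sketch: pixel-level BCE plus a region-level term that scores four equal sub-regions with a soft IoU and a global-statistic SSIM. The object-level term, the exact SSIM formulation (windowed in practice) and all weights are placeholders, so this should be read as an illustration rather than the paper's precise loss.
```python
# Simplified sketch of a multi-level hybrid loss: pixel-level BCE plus a
# region-level term over four equal sub-regions scored by soft IoU and a
# global-statistic SSIM. Weights and the omitted object-level term are
# placeholders, not the exact MENet formulation.
import torch
import torch.nn.functional as F

def soft_iou(pred, gt, eps=1e-6):
    inter = (pred * gt).sum(dim=(1, 2, 3))
    union = (pred + gt - pred * gt).sum(dim=(1, 2, 3))
    return 1.0 - (inter + eps) / (union + eps)

def global_ssim(pred, gt, c1=0.01 ** 2, c2=0.03 ** 2):
    mp, mg = pred.mean(dim=(1, 2, 3)), gt.mean(dim=(1, 2, 3))
    vp, vg = pred.var(dim=(1, 2, 3)), gt.var(dim=(1, 2, 3))
    cov = ((pred - mp[:, None, None, None]) *
           (gt - mg[:, None, None, None])).mean(dim=(1, 2, 3))
    ssim = ((2 * mp * mg + c1) * (2 * cov + c2)) / \
           ((mp ** 2 + mg ** 2 + c1) * (vp + vg + c2))
    return 1.0 - ssim

def hybrid_loss(pred, gt, w_pix=1.0, w_reg=1.0):
    """pred, gt: (B, 1, H, W) saliency maps with values in [0, 1]."""
    pixel = F.binary_cross_entropy(pred, gt)
    h, w = pred.shape[-2] // 2, pred.shape[-1] // 2
    region = 0.0
    for ph, pw in [(0, 0), (0, w), (h, 0), (h, w)]:   # four equal sub-regions
        p = pred[..., ph:ph + h, pw:pw + w]
        g = gt[..., ph:ph + h, pw:pw + w]
        region = region + (soft_iou(p, g) + global_ssim(p, g)).mean()
    return w_pix * pixel + w_reg * region / 4.0
```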
The results of quantitative and qualitative experiments
on six datasets demonstrate MENet outperforms the state-
of-the-art methods by a large margin, as shown in Fig. 1.
Figure 2. Illustration of the overall architecture and the pipeline of the MENet.
|
Xu_H2ONet_Hand-Occlusion-and-Orientation-Aware_Network_for_Real-Time_3D_Hand_Mesh_Reconstruction_CVPR_2023 | Abstract
Real-time 3D hand mesh reconstruction is challenging,
especially when the hand is holding some object. Beyond
the previous methods, we design H2ONet to fully exploit
non-occluded information from multiple frames to boost the
reconstruction quality. First, we decouple hand mesh recon-
struction into two branches, one to exploit finger-level non-
occluded information and the other to exploit global hand
orientation, with lightweight structures to promote real-
time inference. Second, we propose finger-level occlusion-
aware feature fusion, leveraging predicted finger-level oc-
clusion information as guidance to fuse finger-level infor-
mation across time frames. Further, we design hand-level
occlusion-aware feature fusion to fetch non-occluded infor-
mation from nearby time frames. We conduct experiments
on the Dex-YCB and HO3D-v2 datasets with challenging
hand-object occlusion cases, manifesting that H2ONet is
able to run in real-time and achieves state-of-the-art per-
formance on both the hand mesh and pose precision. The
code will be released on GitHub.
| 1. Introduction
Estimating 3D hand meshes from RGB images is a fun-
damental task useful for many applications, e.g., augmented
reality [17, 52], behavior understanding [26, 44], etc. To
support these applications, user experience is very impor-
tant, so the reconstruction should be accurate and robust, as
well as fast, i.e., real-time. Despite the promising results
achieved by the recent works, it is still very challenging to
simultaneously meet all the requirements, particularly when
the hand is severely occluded, e.g., holding some object.
Several recent methods are proposed for 3D hand mesh
reconstruction from a single RGB image [13, 15, 31, 38–
42, 45].
Figure 1. Structural comparison between our H2ONet and previ-
ous methods. We decouple 3D hand mesh reconstruction into two
branches, one to reconstruct the hand mesh at canonical pose M
and the other to regress the global hand orientation R, such that
we can fuse finger- and hand-level occlusion-aware features from
multiple frames to better exploit the non-occluded information.
To alleviate the negative effect of occlusion, some
try to extract occlusion-robust features by adopting the spa-
tial attention mechanism applied in 3D hand pose estima-
tion [14, 65, 68]. When the amount of occlusion is small,
focusing more on non-occluded regions can help improve
the network performance. However, the performance would
largely drop when the occluded regions dominate, implying
that relying solely on the prior information of hand shape
and pose is insufficient. Besides, the attention mechanism
brings extra computation and memory overhead. Though
recent methods [8, 52] adopt lightweight frameworks for
real-time inference, the influence of occlusion is ignored.
On the other hand, some recent works [18,57] start to ex-
plore multi-frame RGB images as input for 3D hand mesh
reconstruction. SeqHAND [57] integrates LSTM as a fea-
ture extractor to memorize the hand motion over consecu-
tive frames. Liu et al. [35] constrain the smoothness of hand
shape and pose by designing inter-frame losses. Yet, they
do not have specific designs to explicitly deal with the oc-
clusion. Hasson et al. [18] leverages an optical-flow-guided
strategy to promote photometric consistency. Nevertheless,
the extra information is limited, as they use only two adja-
cent frames. Also, though multi-frame inputs provide more
information, it is non-trivial to effectively extract and fuse
multi-frame features to improve the reconstruction quality.
In this paper, we present H2ONet , aHand-Occlusion-
and-Orientation-aware Network, aiming at exploiting non-
occluding information from multi-frame images to recon-
struct the 3D hand mesh. Our goal is to meet the require-
ments of (i) effectively utilizing the inter-frame information
and (ii) explicitly alleviating the interference of occlusion.
First, as the hand orientation information and hand shape
information are mixed in feature space, it is hard to directly
fuse features from multiple frames. To better exploit use-
ful information, we decouple hand mesh reconstruction into
two tasks: one for hand mesh reconstruction at the canon-
ical pose and the other for hand orientation regression, as
shown in Fig. 1. The key advantages are that we can better
fuse multi-frame features without considering hand orien-
tation differences, and it enables us to apply strategies to
alleviate the ill-posed issue in estimating hand orientation.
Second, to handle self and object occlusions on the hand,
we propose to exploit non-occluding information spatially
across fingers and temporally across frames. For the for-
mer, we design finger-level occlusion-aware feature fusion
that leverages predicted finger-level occlusion probabilities
to guide the adaptive fusion of per-finger features from mul-
tiple frames. For the latter, we design hand-level occlusion-
aware feature fusion that catches auxiliary global informa-
tion over frames guided by the hand-level occlusions.
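A hedged sketch of the finger-level fusion idea is given below: per-finger features from several frames are averaged with weights derived from the predicted occlusion probabilities, so less-occluded frames contribute more. Tensor shapes, the softmax weighting and the temperature are assumptions made for illustration, not the exact H2ONet modules.
```python
# Occlusion-guided fusion of per-finger features across frames: visibility
# scores (1 - occlusion probability) are normalized over the T frames and used
# as fusion weights. Shapes and the weighting scheme are illustrative.
import torch

def fuse_finger_features(feats, occ_prob, tau=1.0):
    """feats:    (T, B, 5, C)  per-frame, per-finger features
    occ_prob: (T, B, 5)     predicted probability that each finger is occluded
    returns:  (B, 5, C)     fused finger features for the current hand"""
    vis = 1.0 - occ_prob                       # visibility score per finger
    w = torch.softmax(vis / tau, dim=0)        # normalize across the T frames
    return (w.unsqueeze(-1) * feats).sum(dim=0)

# usage with T = 3 frames, batch of 2, 5 fingers, 64-dim features
feats = torch.randn(3, 2, 5, 64)
occ = torch.rand(3, 2, 5)
fused = fuse_finger_features(feats, occ)       # -> (2, 5, 64)
```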
In summary, our main contributions are:
• We design the hand-occlusion-and-orientation-aware
network named H2ONet with a two-branch architec-
ture to efficiently and effectively exploit non-occluding
information from multiple frames.
• We formulate finger-level occlusion-aware feature fu-
sion and hand-level occlusion-aware feature fusion
modules. The former aggregates non-occluded finger-
level information from multiple frames to promote
hand shape reconstruction, whereas the latter alleviates
the ill-posed issue when estimating the global hand ori-
entation in case the hand is temporarily occluded.
• Through qualitative and quantitative comparisons on
two datasets with severe hand occlusions, we show that
H2ONet achieves state-of-the-art performance.
|
Wong_Heat_Diffusion_Based_Multi-Scale_and_Geometric_Structure-Aware_Transformer_for_Mesh_CVPR_2023 | Abstract
Triangle mesh segmentation is an important task in 3D
shape analysis, especially in applications such as digi-
tal humans and AR/VR. Transformer model is inherently
permutation-invariant to input, which makes it a suitable
candidate model for 3D mesh processing. However, two
main challenges involved in adapting Transformer from nat-
ural languages to 3D mesh are yet to be solved, such as
i) extracting the multi-scale information of mesh data in
an adaptive manner; ii) capturing geometric structures of
mesh data as the discriminative characteristics of the shape.
Current point based Transformer models fail to tackle such
challenges and thus provide inferior performance for dis-
cretized surface segmentation. In this work, heat diffusion
based method is exploited to tackle these problems. A novel
Transformer model called MeshFormer is proposed, which
i) integrates Heat Diffusion method into Multi-head Self-
Attention operation (HDMSA) to adaptively capture the fea-
tures from local neighborhood to global contexts; ii) ap-
plies a novel Heat Kernel Signature based Structure Encod-
ing (HKSSE) to embed the intrinsic geometric structures
of mesh instances into Transformer for structure-aware
processing. Extensive experiments on triangle mesh seg-
mentation validate the effectiveness of the proposed Mesh-
Former model and show significant improvements over cur-
rent state-of-the-art methods.
| 1. Introduction
Discretized surface semantic segmentation is a task to
semantically classify the labeling of each discrete element
in 3D discretized surface. Such discrete element can be tri-
angle face in mesh input [22, 24, 37] or 3D point in point
cloud input [11, 20, 26, 28, 34, 48–50, 52]. It is an essen-
tial task in many applications for 3D vision and computer
graphics, such as 3D human body analysis, digital humans
and AR/VR, etc. In this work, we mainly focus on mesh
representation as input, as point cloud representation canbe regarded as the special case of mesh which discards the
surface connectivity.
The challenges in learning mesh representation involves
its inherent characteristics such as irregularity and un-
orderedness. Following the success in NLP [7, 15, 44] and
2D computer vision domain [16, 29, 43, 46], Transformer
model such as [20, 32, 50, 52] has been adopted as an effec-
tive model for processing 3D point cloud input due to its in-
herent capability in processing unordered point sets. Along
such direction, it is natural to adapt the Transformer model
to the mesh input, which is also one of the most common
representation for 3D input modality. However, adapting
Transformer model from natural languages to mesh input,
with respect to the specific characteristics of mesh struc-
tures, involves many challenges, such as i) extracting the
multi-scale information of mesh data in an adaptive man-
ner; ii) capturing geometric structures of mesh representa-
tion as the discriminative characteristics. Sufficiently cap-
turing such two essential information is the prerequisite for
accurate mesh based semantic segmentation task. In ret-
rospect to the recent point cloud based Transformer mod-
els [20, 32, 52], it is found that all these methods did not
provide effective approaches to tackle the challenges men-
tioned above, and thus provided a limited increment in seg-
mentation accuracy for mesh input.
Recent work Swin Transformer [29] applies multiple
fixed-size windows for limiting the attention computation
along scale-varying regions, and hence provides a hierar-
chical Transformer model to extract multi-scale informa-
tion for dense prediction task. However, such approach
which uses fixed-size windows is only applicable to regu-
lar 2D images input, while it is infeasible for irregular input
such as 3D meshes with diverse shapes. In order to adapt
the Transformer model for adaptively extracting multi-scale
information for irregular mesh input, it is essential to ex-
tend its core operation, self-attention, to have capability in
progressively comparing the feature similarity from local
neighborhood to global range of the mesh input, in the form
of intrinsic geometry of surface [9]. In fact, several seminal
works [12,13] had studied on using heat diffusion method to
intrinsically communicate the interactions with the neigh-
bouring vertices on discretized surface. Such heat diffusion
method is able to compute the geodesic distance on mesh in
a stable and accurate manner, which is essential to capture
the intrinsic locality in discretized surface.
In this work, we opt for the heat diffusion method to
propose a novel self-attention operation called Heat Dif-
fusion based Multi-head Self-Attention (HDMSA), which
adaptively limits the self-attention computation within mul-
tiple heat diffusion ranges to capture the multi-scale surface
features from local neighborhood to global contexts. This
extension facilitates the construction of multi-scale Trans-
former encoder in the proposed MeshFormer model.
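As a rough illustration of how a diffusion range can gate attention, the sketch below keeps an attention score between two vertices only where the heat kernel value at time t exceeds a threshold, so larger t yields a larger receptive field. The single-head layout, the truncated eigenbasis and the threshold are assumptions and not the exact HDMSA operation.
```python
# Illustrative sketch: restrict self-attention on a mesh to a heat-diffusion
# neighborhood. h_t(i, j) = sum_k exp(-lambda_k * t) * phi_k(i) * phi_k(j) is
# the heat kernel built from Laplacian eigenpairs (evals, evecs); growing t
# enlarges the set of vertex pairs allowed to attend to each other.
import torch

def heat_masked_attention(q, k, v, evals, evecs, t, thresh=1e-3):
    """q, k, v: (N, d) per-vertex features; evals: (K,), evecs: (N, K)."""
    heat = (evecs * torch.exp(-evals * t)) @ evecs.T           # (N, N) heat kernel at time t
    scores = (q @ k.T) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(heat < thresh, float("-inf"))  # keep only the diffusion range
    return torch.softmax(scores, dim=-1) @ v
```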
The second challenging issue in adapting Transformer
model for mesh input is to encode the geometric structural
information of mesh as a supplement for shape-specific in-
ductive bias of Transformer model. In Non-Euclidean do-
main, very recent works such as SAN [25], GNN-LSPE [17]
methods have investigated on injecting the eigenfunctions,
which are derived from graph Laplacian operator, into posi-
tional encoding as a kind of graph-specific inductive bias to
traditional Transformer model. This approach exploits the
spectral information of graph Laplacian to capture the struc-
ture of input modality, and thus is considered as a promis-
ing candidate for processing mesh input. However, this ap-
proach suffers from the issue of eigenfunction sign ambigu-
ity, that means eigenfunction with either positive or negative
sign still satisfies the original eigenproblem and is associ-
ated with the same eigenvalue, which lowers the discrimi-
native power in the extracted structural information. In this
work, instead of directly applying the spectral information,
a novel Heat Kernel Signature based Structure Encoding
(HKSSE) module is proposed, which effectively captures
the intrinsic geometric structural information of the mesh
while bypassing the issue of eigenfunction sign ambiguity.
Moreover, it provides a more powerful way to capture the
more advanced geometric information, i.e., the symmetry in
geometry structure, which is very common in human body
shapes. As a result, this capability brought to Transformer
model reinforces a structure-aware segmentation prediction
for mesh input.
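For reference, the heat kernel signature that HKSSE builds on can be computed from a truncated eigendecomposition of a mesh Laplacian via HKS(v, t) = Σ_i exp(−λ_i t) φ_i(v)²; squaring each eigenfunction is what removes the sign ambiguity discussed above. The sketch below uses a plain graph Laplacian from the triangle edges and arbitrary time samples purely for illustration; the paper may use a different (e.g. cotangent) operator.
```python
# Hedged sketch of per-vertex heat kernel signatures from a graph Laplacian.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def heat_kernel_signature(faces, n_vertices, n_eig=32, times=None):
    """faces: (F, 3) integer array of triangle vertex indices."""
    # binary adjacency from the triangle edges
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
    adj = coo_matrix((np.ones_like(i, dtype=float), (i, j)),
                     shape=(n_vertices, n_vertices))
    adj = ((adj + adj.T) > 0).astype(float)
    lap = laplacian(adj, normed=True)
    # a few smallest eigenpairs are enough for a multi-scale signature
    evals, evecs = eigsh(lap, k=n_eig, which="SM")
    if times is None:
        times = np.geomspace(1e-2, 1e1, num=16)    # log-spaced diffusion times
    # HKS[v, t] = sum_i exp(-lambda_i * t) * phi_i(v)^2 ; squaring removes the
    # eigenvector sign ambiguity
    return (evecs ** 2) @ np.exp(-np.outer(times, evals)).T   # (n_vertices, len(times))
```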
To the best of our knowledge, the proposed model is the
first mesh based Transformer model which integrates heat
diffusion methods to tackle the discretized surface semantic
segmentation problem. The main contributions of this work
are summarized as follows:
1. With the heat diffusion extension, the proposed multi-
head self-attention operation allows intrinsic commu-
nication for vertices on mesh input from local neigh-
borhood to global context and thus is able to capture
multi-scale mesh features.
2. A novel heat kernel signature based structure encodingmodule is applied to embed the mesh intrinsic geomet-
ric structures into Transformer for providing structure-
aware segmentation output.
3. The proposed work demonstrates the feasibility on the
extension of generic Transformer model structure for
3D mesh input with heat diffusion methods.
|
Wang_Task_Difficulty_Aware_Parameter_Allocation__Regularization_for_Lifelong_Learning_CVPR_2023 | Abstract
Parameter regularization or allocation methods are ef-
fective in overcoming catastrophic forgetting in lifelong
learning. However, they solve all tasks in a sequence uni-
formly and ignore the differences in the learning difficulty of
different tasks. So parameter regularization methods face
significant forgetting when learning a new task very dif-
ferent from learned tasks, and parameter allocation meth-
ods face unnecessary parameter overhead when learning
simple tasks. In this paper, we propose the Parameter
Allocation & Regularization (PAR) , which adaptively select
an appropriate strategy for each task from parameter allo-
cation and regularization based on its learning difficulty.
A task is easy for a model that has learned tasks related
to it and vice versa. We propose a divergence estimation
method based on the Nearest-Prototype distance to mea-
sure the task relatedness using only features of the new task.
Moreover, we propose a time-efficient relatedness-aware
sampling-based architecture search strategy to reduce the
parameter overhead for allocation. Experimental results
on multiple benchmarks demonstrate that, compared with
SOTAs, our method is scalable and significantly reduces
the model’s redundancy while improving the model’s per-
formance. Further qualitative analysis indicates that PAR
obtains reasonable task-relatedness.
| 1. Introduction
Recently, the lifelong learning [9] ability of neural net-
works, i.e., learning continuously from a continuous se-
quence of tasks, has been extensively studied. It is natural
for human beings to constantly learn and accumulate knowl-
edge from tasks and then use it to facilitate future learning.
However, classical models [13, 18, 41] suffer catastrophic
forgetting [12], i.e., the model’s performance on learned
tasks deteriorates rapidly after learning a new one.
*Corresponding author: Yin Zhang.
Figure 1. PAR adaptively selects a strategy from regularization
and allocation to handle each task according to its learning diffi-
culty. The difficulty depends not only on the task itself, but also
on whether the model has previously learned tasks related to it.
To overcome the catastrophic forgetting, many param-
eter regularization or allocation methods have been pro-
posed. Parameter regularization methods [10, 16, 19, 21,
23, 25, 27] alleviate forgetting by adding a regularization
term to the loss function and perform well when the new
task does not differ much from learned tasks. Parameter al-
location methods based on static models [7, 15, 31, 36] and
dynamic models [2,20,24,26,29,30,34,38,39,42,45,47] al-
locate different parameters to different tasks and can adapt
to new tasks quite different from learned tasks. However,
the above methods solve all tasks in a sequence uniformly,
and ignore the differences of learning difficulty of different
tasks. This leads to the significant forgetting in parameter
regularization methods when learning a new task which is
quite different from learned tasks, and also leads to unnec-
essary parameter cost in parameter allocation methods when
learning some simple tasks.
In this paper, we propose a difficulty-aware method
Parameter Allocation & Regularization (PAR). As shown
in Fig. 1, we assume that the learning difficulty of a task
in continual learning depends not only on the task itself,
but also on the accumulated knowledge in the model. A
new task is easy to adapt for a model if it has learned re-
lated tasks before and vice versa. Based on the assumption,
the PAR adaptively adopts parameter allocation for difficult
tasks and parameter regularization for easy tasks. Specif-
ically, the PAR divides tasks into task groups and assigns
each group a dedicated expert model. Given a new task, the
PAR measures the relatedness between it and existing task
groups at first. If the new task is related to one of the ex-
isting groups, it is easy for the corresponding expert. The
PAR adds the task to the related group and learns it by the
expert via the parameter regularization. Otherwise, the new
task is difficult for all existing experts, and the PAR assigns
it to a new task group and allocates a new expert to learn it.
There are two challenges in this work: the measurement
of relatedness and the parameter explosion associated with
parameter allocation. For the first one, we try to measure the
relatedness by the KL divergence between feature distribu-
tions of tasks. However, the KL divergence is intractable
and needs to be estimated since the feature distributions of
tasks are usually unknown. In addition, the constraint of
lifelong learning that only data of the current task are avail-
able exacerbates the difficulty of estimation. To solve above
problems, inspired by the divergence estimation based on
k-NN distance [44], we propose the divergence estimation
method based on prototype distance, which only depends
on the data of the current task. For the second one, we try
to reduce parameter overhead per expert by searching com-
pact architecture for it. However, the low time and memory
efficiency is an obstacle to applying architecture search for
a sequence of tasks in lifelong learning. To improve the
efficiency of architecture search, we propose a relatedness-
aware sampling-based hierarchical search. The main con-
tributions of this work are as follows:
• We propose a lifelong learning framework named Pa-
rameter Allocation & Regularization (PAR), which se-
lects an appropriate strategy from parameter allocation
and regularization for each task based on the learning
difficulty. The difficulty depends on whether the model
has learned related tasks before.
• We propose a divergence estimation method based on
prototype distance to measure the distance between the
new task and previous learned tasks with only data of
the new task. Meanwhile, we propose a relatedness-
aware sampling-based architecture search to reduce
the parameter overhead of parameter allocation.
• Experimental results on CTrL, Mixed CIFAR100 and
F-CelebA, CIFAR10-5, CIFAR100-10, CIFAR100-20
and MiniImageNet-20 demonstrate that PAR is scal-
able and significantly reduces the model redundancy while improving the model performance. Exhaustive
ablation studies show the effectiveness of components
in PAR and the visualizations show the reasonability
of task distance in PAR.
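For illustration, the Nearest-Prototype-distance divergence estimation described above could look roughly like the sketch below, which mirrors the classical k-NN divergence estimator but replaces nearest-neighbor distances with distances to class prototypes (per-class feature means). The exact formulation in PAR may differ; function names and the estimator form are assumptions.
```python
# Hedged sketch of a divergence estimate between the new task's features and a
# stored task group, using distances to class prototypes (centroids) in place
# of k-NN distances. PAR's exact formula may differ.
import torch

def class_prototypes(feats, labels, n_classes):
    # prototypes are per-class means of the new task's features
    return torch.stack([feats[labels == c].mean(dim=0) for c in range(n_classes)])

def nearest_prototype_distance(feats, prototypes):
    # feats: (N, D), prototypes: (K, D) -> (N,) distance to the closest prototype
    return torch.cdist(feats, prototypes).min(dim=1).values

def prototype_divergence(new_feats, new_protos, group_protos, eps=1e-8):
    """Larger values suggest the new task is unrelated to the stored group,
    so a new expert would be allocated instead of regularizing an old one."""
    rho = nearest_prototype_distance(new_feats, new_protos)    # within the new task
    nu = nearest_prototype_distance(new_feats, group_protos)   # to the old group
    d = new_feats.shape[1]
    return d * torch.log((nu + eps) / (rho + eps)).mean()
```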
|
Wang_DR2_Diffusion-Based_Robust_Degradation_Remover_for_Blind_Face_Restoration_CVPR_2023 | Abstract
Blind face restoration usually synthesizes degraded low-
quality data with a pre-defined degradation model for train-
ing, while more complex cases could happen in the real
world. This gap between the assumed and actual degra-
dation hurts the restoration performance where artifacts
are often observed in the output. However, it is expensive
and infeasible to include every type of degradation to cover
real-world cases in the training data. To tackle this robust-
ness issue, we propose Diffusion-based Robust Degradation
Remover (DR2) to first transform the degraded image to
a coarse but degradation-invariant prediction, then employ
an enhancement module to restore the coarse prediction to
a high-quality image. By leveraging a well-performing de-
noising diffusion probabilistic model, our DR2 diffuses in-
put images to a noisy status where various types of degrada-
tion give way to Gaussian noise, and then captures semantic
information through iterative denoising steps. As a result,
DR2 is robust against common degradation ( e.g. blur, re-
size, noise and compression) and compatible with different
designs of enhancement modules. Experiments in various
settings show that our framework outperforms state-of-the-
art methods on heavily degraded synthetic and real-world
datasets.
†Corresponding author.
*Cooperative Medianet Innovation Center.
| 1. Introduction
Blind face restoration aims to restore high-quality face
images from their low-quality counterparts suffering from
unknown degradation, such as low-resolution [5, 11, 27],
blur [45], noise [23, 36], compression [10], etc. Great im-
provement in restoration quality has been witnessed over
the past few years with the exploitation of various facial pri-
ors. Geometric priors such as facial landmarks [5], parsing
maps [4, 5], and heatmaps [43] are pivotal to recovering the
shapes of facial components. Reference priors [9,25,26] of
high-quality images are used as guidance to improve details.
Recent research investigates generative priors [39, 42] and
high-quality dictionaries [14,24,48], which help to generate
photo-realistic details and textures.
Despite the great progress in visual quality, these meth-
ods lack a robust mechanism to handle degraded inputs be-
sides relying on pre-defined degradation to synthesize the
training data. When applying them to images of severe or
unseen degradation, undesired results with obvious artifacts
can be observed. As shown in Fig. 1, artifacts typically ap-
pear when 1) the input image lacks high-frequency informa-
tion due to downsampling or blur ( 1strow), in which case
restoration networks can not generate adequate information,
or 2) the input image bears corrupted high-frequency in-
formation due to noise or other degradation ( 2ndrow), and
restoration networks mistakenly use the corrupted informa-
tion for restoration. The primary cause of this inadaptabil-
ity is the inconsistency between the synthetic degradation
of training data and the actual degradation in the real world.
Expanding the synthetic degradation model for training
would improve the models’ adaptability but it is apparently
difficult and expensive to simulate every possible degrada-
tion in the real world. To alleviate the dependency on syn-
thetic degradation, we leverage a well-performing denois-
ing diffusion probabilistic model (DDPM) [16, 37] to re-
move the degradation from inputs. DDPM generates im-
ages through a stochastic iterative denoising process and
Gaussian noisy images can provide guidance to the gen-
erative process [6, 29]. As shown in Fig. 2, noisy images
aredegradation-irrelevant conditions for DDPM genera-
tive process. Adding extra Gaussian noise (right) makes dif-
ferent degradation less distinguishable compared with the
original distribution (left), while DDPM can still capture
the semantic information within this noise status and re-
cover clean face images. This property of pretrained DDPM
makes it a robust degradation removal module though only
high-quality face images are used for training the DDPM.
Our overall blind face restoration framework DR2E con-
sists of the Diffusion-based Robust Degradation Remover
(DR2) and an Enhancement module. In the first stage, DR2
first transforms the degraded images into coarse, smooth,
and visually clean intermediate results, which fall into a
degradation-invariant distribution ( 4thcolumn in Fig. 1). In
the second stage, the degradation-invariant images are fur-
ther processed by the enhancement module for high-quality
details. By this design, the enhancement module is com-
patible with various designs of restoration methods in seek-
ing the best restoration quality, ensuring our DR2E achieves
both strong robustness and high quality.
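The degradation-removal stage can be pictured with the standard DDPM equations: diffuse the degraded input to an intermediate step τ with q(x_τ | y) = N(√ᾱ_τ y, (1 − ᾱ_τ)I), where most degradations are dominated by Gaussian noise, then run the pretrained reverse process back to step 0. The sketch below uses a generic noise-prediction model and a linear β schedule; the choice of τ and the sampler details are assumptions, not DR2's exact procedure.
```python
# Sketch of diffusion-based degradation removal with a generic pretrained
# noise predictor `eps_model` (a placeholder, not DR2's released model).
import torch

@torch.no_grad()
def dr2_remove_degradation(eps_model, y, tau=500, T=1000):
    """y: (B, 3, H, W) degraded face image scaled to [-1, 1]."""
    betas = torch.linspace(1e-4, 0.02, T, device=y.device)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    # forward diffusion to step tau: different degradations become hard to
    # distinguish once enough Gaussian noise has been added
    x = alpha_bar[tau].sqrt() * y + (1 - alpha_bar[tau]).sqrt() * torch.randn_like(y)
    # standard DDPM reverse steps from tau back to a clean-looking image
    for t in range(tau, 0, -1):
        ts = torch.full((y.shape[0],), t, dtype=torch.long, device=y.device)
        eps = eps_model(x, ts)
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x   # coarse, degradation-invariant prediction for the enhancement module
```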
We summarize the contributions as follows. (1) We pro-
pose DR2 that leverages a pretrained diffusion model to
remove degradation, achieving robustness against complex
degradation without using synthetic degradation for train-
ing. (2) Together with an enhancement module, we em-
ploy DR2 in a two-stage blind face restoration framework.
The enhancement module has great flexibility in incorporat-
ing a variety of restoration methods to achieve high restora-
tion quality. (3) Comprehensive experiments show that our
framework outperforms state-of-the-art methods on heavily
degraded synthetic and real-world datasets.
Figure 2. Mean and standard variation of pixel-wise error dis-
tribution (plotted over the noise scale σ for several degradation
types: Original, Blur, Gaussian 10, Gaussian 20 + JPEG, Laplace).
(Left) the error between original degraded input y and its ground
truth low-resolution image ŷ (only bicubically downsampled);
(Right) the error between q(y_{500}|y) and q(ŷ_{500}|ŷ) sampled
by Eq. (2), with extra Gaussian noise added by the diffusion func-
tion.
|
Wang_Cut_and_Learn_for_Unsupervised_Object_Detection_and_Instance_Segmentation_CVPR_2023 | Abstract
We propose Cut-and- LEaRn (CutLER), a simple ap-
proach for training unsupervised object detection and seg-
mentation models. We leverage the property of self-
supervised models to ‘discover’ objects without supervision
and amplify it to train a state-of-the-art localization model
without any human labels. CutLER first uses our proposed
MaskCut approach to generate coarse masks for multiple
objects in an image, and then learns a detector on these
masks using our robust loss function. We further improve
performance by self-training the model on its predictions.
Compared to prior work, CutLER is simpler, compatible
with different detection architectures, and detects multiple
objects. CutLER is also a zero-shot unsupervised detec-
tor and improves detection performance AP50 by over 2.7×
on 11 benchmarks across domains like video frames, paint-
ings, sketches, etc. With finetuning, CutLER serves as a low-
shot detector surpassing MoCo-v2 by 7.3% APbox and 6.6%
APmask on COCO when training with 5% labels.
| 1. Introduction
Object localization is a critical task in computer vision
that enables AI systems to perceive, reason, plan and act inan object-centric manner. Training models for localization
require special annotations like object boxes, masks, local-
ized points, etc. which are both difficult and resource inten-
sive to collect. Without accounting for overhead, annotating
∼164K images in the COCO dataset [32] with masks for
just 80 classes took more than 28K human hours of annota-
tion time. In this work, we study unsupervised object detec-
tion and instance segmentation models that can be trained
without any human labels. Our key insight is that simple
probing and training mechanisms can amplify the innate lo-
calization ability of self-supervised models [7], leading to
state-of-the-art unsupervised zero-shot detectors.
Our method Cut-and- LEaRn (CutLER) consists of three
simple, architecture- and data-agnostic mechanisms. Con-
sistent with prior self-supervised learning methods [7–9,
26], CutLER is trained exclusively on unlabeled ImageNet
data without needing additional training data, but contrary
to these methods, CutLER can be directly employed to per-
form complex segmentation and detection tasks over a wide
range of domains. First , we propose MaskCut that can au-
tomatically produce multiple initial coarse masks for each
image, using the pretrained self-supervised features. Sec-
ond, we propose a simple loss dropping strategy to train
detectors using the coarse masks while being robust to ob-
jects missed by MaskCut. Finally , we observe that despite
learning from these coarse masks, the detectors ‘clean’ the
ground truth and produce masks (and boxes) that are bet-
ter than the coarse masks used to train them. Therefore,
we further show that multiple rounds of self-training on the
models’ own predictions allow it to evolve from capturing
the similarity of local pixels to capturing the global geome-
try of the object, thus producing finer segmentation masks.
Prior work shows that a self-supervised vision trans-
former (ViT) [15] can automatically learn patch-wise fea-
tures that detect a single salient object in an image [7,38,43,
44,50]. However, unlike CutLER, such salient object detec-
tion methods only locate a single, usually the most promi-
nent, object and cannot be used for real world images con-
taining multiple objects. While some recent methods, e.g.,
FreeSOLO [47] and DETReg [3], also aim at unsupervised
multi-object detection (or multi-object discovery), they rely
on a particular detection architecture, e.g., SOLO-v2 [48]
or DDETR [5,54]. Additionally, apart from self-supervised
features trained on ImageNet [12], the current state-of-the-
art methods FreeSOLO and MaskDistill [42] also require
‘in-domain’ unlabeled data for model training.
In contrast, CutLER works with various detection archi-
tectures and can be trained solely on ImageNet, without
requiring in-domain unlabeled data. Thus, during model
training, CutLER does not see any images from any target
dataset and yields a zero-shot model capable of detecting
and segmenting multiple objects in diverse domains.
Features of CutLER. 1) Simplicity: CutLER is simple to
train and agnostic to the choice of detection and backbone
architectures. Thus, it can be integrated effortlessly into
existing object detection and instance segmentation works.
2) Zero-shot detector: CutLER trained solely on ImageNet
shows strong zero-shot performance on 11 different bench-
marks where it outperforms prior work trained with addi-
tional in-domain data. We double the APbox
50performance
on 10 of these benchmarks, as shown in Fig. 1, and even
outperform supervised detectors on the UVO video instance
segmentation benchmark. 3) Robustness: CutLER exhibits
strong robustness against domain shifts when tested on im-
ages from different domains such as video frames, sketches,
paintings, clip arts, etc. 4) Pretraining for supervised de-
tection: CutLER can also serve as a pretrained model for
training fully supervised object detection and instance seg-
mentation models and improves performance on COCO, in-
cluding on few-shot object detection benchmarks.
|
Wang_Two-Stream_Networks_for_Weakly-Supervised_Temporal_Action_Localization_With_Semantic-Aware_Mechanisms_CVPR_2023 | Abstract
Weakly-supervised temporal action localization aims to
detect action boundaries in untrimmed videos with only
video-level annotations. Most existing schemes detect tem-
poral regions that are most responsive to video-level classi-
fication, but they overlook the semantic consistency between
frames. In this paper, we hypothesize that snippets with
similar representations should be considered as the same
action class despite the absence of supervision signals on
each snippet. To this end, we devise a learnable dictionary
where entries are the class centroids of the corresponding
action categories. The representations of snippets identified
as the same action category are induced to be close to the
same class centroid, which guides the network to perceive
the semantics of frames and avoid unreasonable localiza-
tion. Besides, we propose a two-stream framework that in-
tegrates the attention mechanism and the multiple-instance
learning strategy to extract fine-grained clues and salient
features respectively. Their complementarity enables the
model to refine temporal boundaries. Finally, the developed
model is validated on the publicly available THUMOS-14
and ActivityNet-1.3 datasets, where substantial experiments
and analyses demonstrate that our model achieves remark-
able advances over existing methods.
| 1. Introduction
Temporal action localization (TAL) is committed to de-
tecting action intervals in untrimmed videos. It has re-
ceived increasing popularity recently due to its wide ap-
plication in surveillance analysis, video summarization and
retrieval [39, 44, 47], etc. Typically, fully-supervised TAL
[51, 58, 59] is prohibitively expensive and unrealistic due
to frame-level annotations, thus the weakly-supervised TAL
(WS-TAL) [32, 34, 40, 45, 55] that only video-level annota-
tions are required has been advocated recently.
Most WS-TAL methods [8, 9, 31, 43, 45, 46] transform
Figure 1. An example containing the action of “Bicycle Motocross”
and the background. Representations depicted in monochromatic
color are similar, which should be regarded to describe the same
action (or background) and grouped together. The color brightness
indicates the degree of similarity.
localization into classification tasks that detect temporal
regions contributing the most to video-level classification.
They divide raw videos into fixed-length non-overlapping
snippets, on which snippet-wise attention activations or
class activation sequence (CAS) is generated. Temporal re-
gions are detected by thresholding and merging these ac-
tivations along the time dimension. Specifically, multiple
instance learning (MIL) [36, 42] and the attention mecha-
nism [11, 14, 53] are typically employed. The former ag-
gregates snippets that are considered action instances with
top-k confidence. However, such a regime over-emphasizes
these snippets with top-k confidence, resulting in discard-
ing potential clues in remaining snippets with lower confi-
dence. Besides, MIL chooses the most discriminative snip-
pets ignoring the completeness of action instances, which
is incompatible with the localization task. Different from
MIL, the attention-based mechanism independently yields
class-agnostic confidence for each snippet, which is utilized
as a weight to perform temporal pooling over all snippets
and generate video-level representations for classification.
Despite the usage of all snippet features for fine-grained
patterns, class-agnostic confidence is semantically ambigu-
ous and harmful to precise boundary detection. As a con-
sequence, we design a two-stream network that integrates
MIL and attention-based mechanisms to overcome their re-
spective drawbacks. Then a late-fusion operation on the
outputs of the two branches is conducted to acquire the final
classification results.
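To make the two aggregation schemes concrete, the sketch below contrasts top-k multiple-instance pooling with attention-weighted temporal pooling and their late fusion; the tensor shapes, the value of k, the fusion weight, and all function names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def mil_topk_score(cas: torch.Tensor, k: int) -> torch.Tensor:
    # cas: (T, C) class activation sequence over T snippets and C classes.
    # Aggregate the k most confident snippets per class into video-level logits.
    topk, _ = cas.topk(k, dim=0)
    return topk.mean(dim=0)                              # (C,)

def attention_pool_score(feats: torch.Tensor, attn: torch.Tensor,
                         classifier: torch.nn.Linear) -> torch.Tensor:
    # feats: (T, D) snippet features; attn: (T,) class-agnostic confidences.
    w = torch.softmax(attn, dim=0)
    video_feat = (w.unsqueeze(1) * feats).sum(dim=0)     # weighted temporal pooling
    return classifier(video_feat)                        # (C,)

def fused_score(cas, feats, attn, classifier, k=8, alpha=0.5):
    # Late fusion of the MIL stream and the attention stream.
    return alpha * mil_topk_score(cas, k) + (1 - alpha) * attention_pool_score(feats, attn, classifier)
```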
Furthermore, a crux of these works lies in accurately pre-
dicting confidence scores that each snippet belongs to the
foreground or background, which has a nontrivial impact
on the subsequent boundary regression. Since the weakly-
supervision paradigm does not provide explicit supervision
signals, this problem becomes more intractable. A com-
mon solution employs temporal class activation map [40]
(TCAM) to discover snippets that respond to the video-level
classification and assign them high confidence. Other alter-
natives [11, 14, 45, 53] attempt to mitigate this problem by
carefully formulating some attention generation and aggre-
gation mechanisms. Nevertheless, these strategies neglect
the semantic consistency between snippets. Intuitively,
snippets with similar representations should be considered
to be the same class despite the infeasibility of accessing
snippet-level annotations. An example is also illustrated in
Figure 1. We argue that it is unreasonable that there are
no constraints to guarantee such a semantic relation. To
address this intractable issue, we set a learnable dictionary
where entries are class centroids of the corresponding action
categories. The representations of snippets identified as the
same action are induced to be close to the same class cen-
troid. In this manner, the semantic relationship of snippets
is explicitly explored to encourage a reasonable localization
in the weakly-supervised paradigm.
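A minimal sketch of such a learnable dictionary with a Euclidean constraint is given below; how the per-snippet pseudo labels are obtained (here assumed to come from thresholded activations) and the exact loss form are assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

class ClassCentroidDictionary(torch.nn.Module):
    # One learnable centroid per action category (a background entry can be added).
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centroids = torch.nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, snippet_feats: torch.Tensor, pseudo_labels: torch.Tensor) -> torch.Tensor:
        # snippet_feats: (T, D); pseudo_labels: (T,) action indices inferred from
        # the (thresholded) activations, since snippet-level labels are unavailable.
        targets = self.centroids[pseudo_labels]           # (T, D)
        return F.mse_loss(snippet_feats, targets)         # Euclidean constraint
```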
In a nutshell, the main contributions and innovations of
this paper are summarized as follows: (1) A novel two-
stream network that absorbs the merits of MIL and atten-
tion mechanism is proposed to resolve WS-TAL. (2) To per-
ceive semantic information, a learnable dictionary with eu-
clidean constraint is designed to facilitate similar represen-
tations to be considered as the same action class. (3) Ex-
tensive experiments on THUMOS-14 and ActivityNet-1.3
benchmarks demonstrate that our model achieves remarkable improvements. Besides, substantial ablation studies also reveal the effectiveness of the proposed two-stream structure and semantic-aware modules.
|
Wang_Deep_Learning_of_Partial_Graph_Matching_via_Differentiable_Top-K_CVPR_2023 | Abstract
Graph matching (GM) aims at discovering node match-
ing between graphs, by maximizing the node- and edge-
wise affinities between the matched elements. As an NP-
hard problem, its challenge is further pronounced in the presence of outlier nodes in both graphs, which is ubiquitous in practice, especially for vision problems. However,
popular affinity-maximization-based paradigms often lack
a principled scheme to suppress the false matching and
resort to handcrafted thresholding to dismiss the outliers.
This limitation is also inherited by the neural GM solvers
though they have shown superior performance in the ideal
no-outlier setting. In this paper, we propose to formulate
the partial GM problem as the top-k selection task with a given/estimated number of inliers k. Specifically, we devise a differentiable top-k module that enables effective gradient descent over the optimal-transport layer, which can be readily plugged into SOTA deep GM pipelines including the quadratic matching network NGMv2 as well as the linear matching network GCAN. Meanwhile, the attention-fused aggregation layers are developed to estimate k to enable
automatic outlier-robust matching in the wild. Last but
not least, we remake and release a new benchmark called
IMC-PT-SparseGM, originating from the IMC-PT stereo-
matching dataset. The new benchmark involves more scale-
varying graphs and partial matching instances from the
real world. Experiments show that our methods outperform
other partial matching schemes on popular benchmarks.
| 1. Introduction
The importance of graph matching (GM) is recognized
by its successful applications in machine learning [ 26],
molecule matching [ 47], and various vision tasks [ 15,30,
*Runzhong Wang and Ziao Guo contributed equally. Junchi Yan is the corresponding author, who is also with Shanghai AI Laboratory. The
work was in part supported by National Key Research and Development
Program of China (2020AAA0107600), NSFC (62222607, U19B2035),
Shanghai Committee Science and Technology Project (22511105100).
IMC-PT-SparseGM dataset: https://github.com/Thinklab-
SJTU/IMCPT-SparseGM-dataset ; Code: https://github.
com/Thinklab-SJTU/ThinkMatch .
Figure 1. Illustrative comparison of injective graph matching
(left) and partial graph matching (right) on our remade and re-
leased benchmark: IMC-PT-SparseGM (originated from IMC-
PT [18], see Sec. 4 for details). Green for correct matches, red for
wrong matches. Without partial matching, the GM solver greed-
ily matches all nodes, leading to inferior accuracy. This paper
presents a general learning method to mitigate this issue.
49]. Existing GM methods mainly follow the optimiza-
tion formulation by maximizing the node-wise and edge-
wise affinities of the matched elements, yielding an NP-hard
problem known as Quadratic Assignment Problem [ 22].
The ubiquitous challenge of partial matching, how-
ever, is less addressed by the existing affinity-maximization
pipeline. Partial matching means only a partial set of the
nodes are matched, due to the existence of outliers on both
sides. As shown in Fig. 1, this is commonly encountered
in (visual) graph matching [ 13,16,52,53], where the exis-
tence of outliers is usually unavoidable due to the errors in
keypoint detectors and (self-)occlusion of objects.
The recent line of deep graph matching papers [ 11,45,
52] sheds light on the partial GM, whereby the higher ca-
pacity offered by deep neural networks on both the feature
stage [ 52] and the solver stage [ 27,45] will hopefully dis-
tinguish the outliers from inliers. However, there still lacks
a general, principled approach that could enable the par-
tial matching capability of existing graph matching neural
networks. Some straightforward treatments such as thresh-
olding [ 30], and adding dummy rows and columns [ 17] are
relatively inflexible because their threshold and dummy val-
ues are manually assigned.
We aim at developing a general, unified partial match-
ing handling approach, which can be readily integrated into
SOTA GM networks. However, it is still an open ques-
tion about how to determine whether or not a matching
pair should be discarded. Besides, directly discarding the
unwanted matching pairs causes non-differentiability and
Figure 2. Deep learning paradigm for partial graph matching. The input images are sent to a CNN to extract features and build the input graphs. The GM network predicts a doubly-stochastic matrix based on graphs with features, followed by a differentiable top-k algorithm. The number of inliers k is estimated by our proposed attention-fused aggregation networks (the AFA-I and AFA-U modules).
truncated gradient. We resort to the doubly-stochastic ma-
trix available in most SOTA GM pipelines [ 17,44,45],
whereby the values in the doubly-stochastic matrix could be
viewed as the matching confidence. The matchings with the
top-k confidence values should be preserved, assuming that the number of inliers k is known. To this end, we present a top-k deep graph matching approach to address the partial matching challenge, whereby the differentiable top-k formulation [48] is followed to enable end-to-end training.
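As a concrete illustration of selecting matches by confidence from the doubly-stochastic matrix, the sketch below keeps only the k largest entries; the hard mask shown is the inference-time view, and the paper's trained pipeline instead uses the differentiable, optimal-transport-based top-k [48], so the function name and thresholding details here are only assumptions.

```python
import torch

def topk_partial_matching(S: torch.Tensor, k: int) -> torch.Tensor:
    # S: (n1, n2) doubly-stochastic matrix whose entries act as matching confidences.
    # Keep the k most confident entries; all other candidate matches are dismissed.
    flat = S.flatten()
    thresh = flat.topk(k).values.min()
    keep = (S >= thresh).float()
    # The hard mask above is non-differentiable; during training it is replaced
    # by an optimal-transport-based soft top-k so that gradients can flow back
    # into the matching network.
    return S * keep
```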
Another challenge that naturally arises is how to estimate
the value of k (i.e., the number of inliers) from scratch. We
identify the connection between the k-estimation problem
and the graph-level similarity learning problem, whereby
some prior successful models [ 1,2] could be further ex-
ploited. In this paper, we present two networks to ef-
ficiently reuse the mid-layers that are available in most
deep GM pipelines, namely Attention- Fused Aggregation
(AFA) modules, based on the similarity of graphs aggre-
gated from either individual graph features (Sec. 3.2.1 )
or the doubly-stochastic similarities on a unified bipartite
graph (Sec. 3.2.2 ). The AFA modules are further integrated
into the learning pipeline.
Besides, the severe partial matching issue that usually
exists in downstream vision tasks is less addressed by exist-
ing graph matching evaluation protocols. We are thus moti-
vated to collect and relabel a new benchmark, namely IMC-
PT-SparseGM, which is originated from the stereo matching
task in Image Matching Challenge PhotoTourism (IMC-PT)
2020 [ 18]. As summarized in Tbl. 1, the new benchmark ad-
dresses the severe partial matching challenge, and its graphs
are of larger sizes than existing benchmarks.
The main contributions of the paper are as follows.
1) We formulate the partial (graph) matching problem
as a top- kscheme, i.e. in the presence of outliers in bothTable 1. Visual GM datasets. “partial rate” means the mean per-
centage of occluded keypoints w.r.t. the universe of all keypoints.
dataset name # visual graphs avg # nodes # universe partial rate
Willow Object Class [ 6] 404 10 10 0.0%
Pascal VOC Keypoint [ 5] 8702 9.07 6 to 23 28.5%
IMC-PT-SparseGM-50 (ours) 25765 21.36 50 57.3%
IMC-PT-SparseGM-100 (ours) 25765 44.48 100 55.5%
graphs. The scheme can be integrated into SOTA GM neu-
ral networks e.g. [ 17,45] as a plugin.
2) Based on the top-k formulation, we devise an end-to-
end and outlier-aware neural pipeline for partial GM learn-
ing and show its effectiveness. In contrast, we show that di-
rectly combining the top- kscheme with either a traditional
solver like RRWM [ 7] or an outlier-blind neural solver
e.g. NGMv2 [ 45] still produce poor results (see Tbl. 6), sug-
gesting the necessity of our integrated learning paradigm.
3) To estimate the number of inliers k, which is often un-
known in practice, we devise an attention-based supervised
graph neural network whose input consists of similarity in-
formation available in GM networks. With this estimation
module, we enable a fully automatic neural solver for partial
GM which to our best knowledge is new in the literature.
4) Our approach outperforms existing methods on pop-
ular GM benchmarks notably in the setting of partial GM.
Last but not least, we remake and release a new benchmark
to advance the research in GM by introducing larger graphs
for matching and more partial matching instances.
|
Wang_LipFormer_High-Fidelity_and_Generalizable_Talking_Face_Generation_With_a_Pre-Learned_CVPR_2023 | Abstract
Generating a talking face video from the input audio
sequence is a practical yet challenging task. Most existing
methods either fail to capture fine facial details or need
to train a specific model for each identity. We argue that
a codebook pre-learned on high-quality face images can
serve as a useful prior that facilitates high-fidelity and
generalizable talking head synthesis. Thanks to the strong
capability of the codebook in representing face textures,
we simplify the talking face generation task as finding
proper lip-codes to characterize the variation of lips during
portrait talking. To this end, we propose LipFormer , a
Transformer-based framework to model the audio-visual
coherence and predict the lip-codes sequence based on
input audio features. We further introduce an adaptive
face warping module, which helps warp the reference
face to the target pose in the feature space, to alleviate
the difficulty of lip-code prediction under different poses.
By this means, LipFormer can make better use of pre-
learned priors in images and is robust to posture change.
Extensive experiments show that LipFormer can producemore realistic talking face videos compared to previous
methods and faithfully generalize to unseen identities.
| 1. Introduction
As an ongoing research topic, talking face generation
aims to build a cross-modal mapping from an audio se-
quence to a face video while maintaining natural speaking
styles and audio-visual coherence. It has received growing
attention in recent years due to its potential in digital
humans, film-making, virtual video conferences, and online
education [3, 10, 37–39, 44, 45, 47, 48, 50, 53].
A high-fidelity and generalizable talking face generation
model relies heavily on high-quality (HQ) video data with
vast identities. However, the existing datasets still suffer
from two limitations: (1) low resolution and qualities, e.g.,
LRW [5] and LRS2 [1], which leads the learned model to
unsatisfying synthesis quality; (2) a limited number of
identities despite the clear videos, e.g., Obama [17, 31]
and privately recorded data [28], which requires training a
specific model for each person and it is hard to generalize
to unseen portraits. These two drawbacks limit their
practical applications, and it is also a challenge to collect
a mass of such high-quality videos because they should
simultaneously meet the phoneme balance and audio-visual
synchronization demands. In contrast, we notice that there
are many publicly available datasets of high-resolution
face images, e.g., the FFHQ [15] dataset contains 70,000
identities with 1024×1024 resolutions. It helps raise a
question: could these image datasets benefit the generation
of a talking portrait?
Fortunately, the answer is a big yes. In this work, we
confirm that a high-quality pre-learned facial codebook can
serve as a strong prior and facilitate talking head synthesis
from the perspectives of visual fidelity and generalizability.
The codebook, which is learned with the objective to
reconstruct 2D faces, is capable of representing diverse
face details, and hence takes most of the responsibilities to
synthesize the appearance of the talking face. That way, the
only thing left is to characterize the variation of lips when
people talk [24, 25]. We can therefore reformulate the task
of talking face generation as a simpler problem, namely,
finding proper lip-codes for the input face.
To this end, we propose LipFormer , a Transformer-
based framework for high-fidelity talking face synthesis.
In particular, LipFormer first encodes the target face us-
ing the pre-learned codebook, and then replaces the lip-
region features with a new sequence of lip-codes that are
predicted by aligning with the reference audio. Before lip-
code prediction, we introduce an Adaptive Face Warping
Module, which helps warp the reference face to the target
pose in the feature ( i.e., codebook) space, to alleviate
the texture mismatch problem caused by different poses.
Last but not least, LipFormer is trained in an end-to-end
way to make different modules collaborate fully, boosting
the final performance. Experiments on multiple datasets
confirm the superiority of our approach over state-of-the-art
methods [24,51] from the perspectives of both video quality
and generalizability to unseen identities.
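To make the pipeline concrete, the sketch below shows how lip-region codes predicted from audio could replace the corresponding codebook indices of the input face before decoding; the codebook_model and lip_former interfaces and the greedy argmax decoding are placeholders rather than LipFormer's actual modules.

```python
import torch

def synthesize_frame(face_img, audio_feat, codebook_model, lip_former, lip_region_mask):
    # codebook_model:  a pre-learned face codebook (VQ-style) with encode/decode.
    # lip_former:      Transformer mapping audio features to lip-code logits.
    # lip_region_mask: boolean mask over the code grid marking the mouth area.
    codes = codebook_model.encode(face_img)              # (h, w) code indices
    lip_logits = lip_former(audio_feat, codes)           # (h, w, vocab_size)
    lip_codes = lip_logits.argmax(dim=-1)
    codes = torch.where(lip_region_mask, lip_codes, codes)
    return codebook_model.decode(codes)                  # synthesized talking-face frame
```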
|
Wang_Sharpness-Aware_Gradient_Matching_for_Domain_Generalization_CVPR_2023 | Abstract
The goal of domain generalization (DG) is to enhance
the generalization capability of the model learned from a
source domain to other unseen domains. The recently devel-
oped Sharpness-Aware Minimization (SAM) method aims
to achieve this goal by minimizing the sharpness measure
of the loss landscape. Though SAM and its variants have
demonstrated impressive DG performance, they may not al-
ways converge to the desired flat region with a small loss
value. In this paper, we present two conditions to ensure
that the model could converge to a flat minimum with a
small loss, and present an algorithm, named Sharpness-
Aware Gradient Matching (SAGM), to meet the two condi-
tions for improving model generalization capability. Specif-
ically, the optimization objective of SAGM will simultane-
ously minimize the empirical risk, the perturbed loss (i.e.,
the maximum loss within a neighborhood in the parameter
space), and the gap between them. By implicitly aligning
the gradient directions between the empirical risk and the
perturbed loss, SAGM improves the generalization capabil-
ity over SAM and its variants without increasing the com-
putational cost. Extensive experimental results show that
our proposed SAGM method consistently outperforms the
state-of-the-art methods on five DG benchmarks, including
PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet.
Codes are available at https://github.com/Wang-
pengfei/SAGM .
| 1. Introduction
Deep learning methods have achieved remarkable suc-
cess in various computer vision tasks when the source data
and target data are independently and identically distributed
(i.i.d). However, the performance of deep models trained
in a source domain can drop significantly when applied to
unseen target domains. Domain generalization (DG) aims
*Corresponding author.
to train a model from a set of source data such that it
can be well generalized to new domains without retrain-
ing. Over the past decade, research on DG has led to
a plethora of methods, including those based on domain
alignment [35, 16, 32, 2, 52], meta-learning [29, 3, 11, 51],
and data augmentation [53, 43, 7]. Though numerous DG
approaches have been proposed, a recent study called Do-
mainBed [18] reveals that under a fair evaluation proto-
col, the naive empirical risk minimization (ERM) method
can even outperform most existing DG methods. Unfortu-
nately, simply minimizing the empirical loss on a complex
and non-convex loss landscape is typically insufficient to
achieve a good generalization capability [23, 17, 22, 15].
As a result, ERM tends to overfit the training data set and
converge to sharp local minima [15].
Recent studies such as Sharpness-Aware Minimization
(SAM) [15] try to improve the model generalization ability
by minimizing the sharpness measure of the loss landscape.
Denote by L(θ) the loss to be minimized (e.g., the cross-entropy loss for classification), where θ is the parameters of the neural network. SAM first adversarially computes a weight perturbation ε that maximizes the empirical risk L(θ) and then minimizes the loss of the perturbed network, i.e., L(θ+ε). Specifically, the objective of SAM is to minimize the maximum loss around the model parameter θ. Due to the high complexity of this min-max optimization problem, SAM chooses to minimize an approximation of this maximum loss, denoted by L_p(θ) (perturbed loss, see Section 3 for details). However, minimizing L_p(θ) is not guaranteed to converge to flat minimum regions [55].
While L_p(θ) may not characterize well the sharpness of the loss surface, the surrogate gap h(θ) ≜ L_p(θ) − L(θ) can better describe it. Intuitively, the loss surface will become flatter when h(θ) is closer to zero, because the perturbed parameters around θ will have a similar loss value. Zhuang et al. [55] proved that h(θ) is an equivalent measure of the dominant eigenvalue of the Hessian (which is the measure of sharpness) at a local minimum. Therefore, we can seek flat minima for better generalization ability by minimizing the surrogate gap h(θ).
Unlike SAM, which only optimizes the perturbation loss L_p(θ), GSAM [55] jointly minimizes L_p(θ) and the surrogate gap h(θ). However, GSAM minimizes h(θ) by increasing the loss L(θ), which will reduce the generalization performance. Zhuang et al. [55] have shown that when the perturbation amplitude is sufficiently small, the surrogate gap is always non-negative, i.e., L_p(θ) ≥ L(θ), ∀θ. Therefore, increasing L(θ) will also increase the difficulty of optimizing L_p(θ; D).
We propose two conditions that should be met to obtain
a model with good generalization performance. (i) First, the
loss within a neighborhood of the desired minimum should
be sufficiently low. (ii) Second, the minimum is within a flat
loss surface. More specifically, condition (i) implies a low
training loss and represents good performance on the source
training data, while condition (ii) reduces the performance
gap of the model on training and testing data.
Based on the above analysis, we propose a new DG method, namely Sharpness-Aware Gradient Matching (SAGM), to simultaneously minimize three objectives: the empirical risk L(θ), the perturbed loss L_p(θ), and the surrogate gap h(θ). By minimizing L(θ) and L_p(θ), we search for a region with low loss, which satisfies condition (i). By minimizing h(θ), we avoid steep valleys, which meets condition (ii). However, optimizing these three objectives simultaneously is difficult due to the inevitable gradient conflicts during training. Fortunately, we find that when the gradient directions of L(θ) and L_p(θ) are consistent, the gradient direction of h(θ) is also consistent with them, and hence gradient descent can be effectively applied to all three losses. Therefore, we transform the optimization objective into minimizing L(θ), L_p(θ), and the angle between their gradients, and achieve this goal by implicitly aligning the gradient directions of L(θ) and L_p(θ). The proposed SAGM improves the generalization performance of the model by facilitating its convergence to a flat region with a small loss value. Compared with SAM, our SAGM does not increase the computational cost. In addition, SAGM can be combined with previous data augmentation methods, such as MixStyle [54], for further performance improvements.
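A rough sketch of one training step in this spirit is given below: it computes the gradient of L(θ), the SAM-style perturbation and the gradient of L_p(θ), and then combines them. The simple averaging at the end is only a placeholder, since the precise SAGM combination rule that aligns the two gradient directions is defined later in the paper; rho and the function signature are likewise assumptions.

```python
import torch

def sagm_like_step(model, loss_fn, data, target, optimizer, rho=0.05):
    params = list(model.parameters())

    # Gradient of the empirical risk L(theta).
    loss = loss_fn(model(data), target)
    grads = torch.autograd.grad(loss, params)

    # SAM-style perturbation eps = rho * g / ||g||, then gradient of the
    # perturbed loss L_p(theta) = L(theta + eps).
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)
    p_loss = loss_fn(model(data), target)
    p_grads = torch.autograd.grad(p_loss, params)
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)

    # Placeholder combination of the two gradients; the actual SAGM update,
    # which implicitly aligns their directions, is specified in the paper body.
    optimizer.zero_grad()
    for p, g, gp in zip(params, grads, p_grads):
        p.grad = 0.5 * (g + gp)
    optimizer.step()
```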
The contributions of this work are summarized as fol-
lows. First, we analyze the limitations of SAM-like meth-
ods and propose two conditions to ensure the model conver-
gence to a flat region with a small loss. Second, we propose
the SAGM algorithm to improve the DG capability of deep
models. Finally, we demonstrate the superior performance
of SAGM to state-of-the-arts on five DG benchmarks.
|
Xie_High-Fidelity_3D_GAN_Inversion_by_Pseudo-Multi-View_Optimization_CVPR_2023 | Abstract
We present a high-fidelity 3D generative adversarial net-
work (GAN) inversion framework that can synthesize photo-
realistic novel views while preserving specific details of the
input image. High-fidelity 3D GAN inversion is inherently
challenging due to the geometry-texture trade-off, where
overfitting to a single view input image often damages the
estimated geometry during the latent optimization. To solve
this challenge, we propose a novel pipeline that builds on
the pseudo-multi-view estimation with visibility analysis.
We keep the original textures for the visible parts and uti-
lize generative priors for the occluded parts. Extensive ex-
periments show that our approach achieves advantageous
reconstruction and novel view synthesis quality over prior
work, even for images with out-of-distribution textures. The
proposed pipeline also enables image attribute editing with
the inverted latent code and 3D-aware texture modifica-
tion. Our approach enables high-fidelity 3D rendering from
a single image, which is promising for various applica-
tions of AI-generated 3D content. The source code is at
https://github.com/jiaxinxie97/HFGI3D/ .
| 1. Introduction
Real-world 3D-aware editing with a single 2D image facilitates various essential applications in computer graphics, such as virtual reality (VR), augmented reality (AR), and
immersive meetings. Recent advancement in 3D GANs [12,
13,21,45] has achieved photo-realistic 3D-consistent image
generation. With the GAN inversion approaches [19,51,72],
which can map the images to the latent space of the pre-
trained 3D-aware model, high-fidelity 3D-aware editing be-
comes promising.
High-fidelity 3D-aware inversion aims to generate novel
views with high-quality reconstruction and 3D consistency,
but existing methods can hardly meet these two goals si-
multaneously. Although current GAN inversion methods
based on 2D GANs [1, 53, 54, 69] can perform 3D-related
attributes (e.g., head pose) editing, the generated view is
*Joint first authors
†Joint corresponding authors
inconsistent due to the lack of the underlying 3D represen-
tation. When applying the existing optimization-based in-
version approaches on 3D-aware GANs [12,36], we can re-
trieve high-fidelity reconstruction by overfitting to a single
input image. However, different from 2D GAN inversion,
the reconstruction quality of 3D GAN depends not only on
the input view’s faithfulness but also on the quality of the
synthesized novel views. During the optimization process,
the obvious artifacts in the synthesized novel views occur
with the appearance of high-fidelity details in the input view
as analyzed in Sec. 3. As only a single image is available
in the optimization process, the reconstruction suffers from
extreme ambiguity: infinite combinations of color and den-
sity can reconstruct the single input image, especially with
out-of-distribution textures.
Based on the above observation, we propose our 3D-
aware inversion pipeline by optimizing the reconstruction
not only on the input image but also on a set of pseudo-
multi-views. The pseudo views provide additional regular-
ization, and thus the ambiguity is greatly reduced. Estimat-
ing the pseudo views is non-trivial as it requires maintaining
the texture details while also generating the occluded parts
in a plausible manner, based on the input view. We first esti-
mate an initial geometry and conduct a visibility analysis to
solve these challenges. We directly utilize the textures from
the input image for the visible parts to preserve the texture
details. For the occluded parts, we use a pretrained gen-
erator to synthesize the reasonable inpainted regions. With
the additional supervision from the pseudo-multi-views, our
approach achieves high-fidelity reconstruction results with
the correct 3D geometry.
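A minimal sketch of how a pseudo view could be composed from these two sources is shown below, assuming a per-pixel visibility map; the exact warping, visibility estimation, and blending used in the paper are not reproduced here.

```python
import torch

def make_pseudo_view(warped_input: torch.Tensor,
                     generator_view: torch.Tensor,
                     visibility: torch.Tensor) -> torch.Tensor:
    # warped_input:   (3, H, W) input-image texture warped to the novel camera
    #                 using the initial geometry estimate.
    # generator_view: (3, H, W) the same novel view rendered by the pre-trained
    #                 3D GAN from the current latent code.
    # visibility:     (1, H, W) in [0, 1]; 1 where the surface is visible in the
    #                 input view, 0 where it is occluded and must be inpainted.
    return visibility * warped_input + (1.0 - visibility) * generator_view
```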
Our approach enables two types of editing: latent at-
tributes editing and 3D-aware texture modification. We fol-
low the previous work [55] and calculate the attribute direc-
tion in the latent code space. By modifying the inverted la-
tent code in a specific direction, we can control the general
attribute (e.g., smile, ages for portraits as in Figure 1(b)).
Since the proposed pipeline enables inversion with out-of-
distribution textures, we can achieve compelling 3D con-
sistent editing by only modifying the textures of the input
images (e.g., stylization or adding a tattoo in Figure 1(c)).
In summary, we propose a high-fidelity 3D GAN in-
Figure 1. High-fidelity 3D GAN inversion results on real-world images with two types of editing ability: (a) high-fidelity 3D GAN inversion (input image, reconstruction, and novel views); (b) latent attributes editing (e.g., smile, age); (c) 3D-aware textures modification. Our method preserves compelling details and achieves high 3D consistency.
version method by pseudo-multi-view optimization given
an input image. Our approach can synthesize compelling
3D-consistent novel views that are visually and geometri-
cally consistent with the input image. We perform exten-
sive quantitative and qualitative experiments, which demon-
strate that our 3D GAN inversion approach outperforms
other 2D/3D GAN inversion baselines in both photorealism
and faithfulness.
|
Wang_PET-NeuS_Positional_Encoding_Tri-Planes_for_Neural_Surfaces_CVPR_2023 | Abstract
A signed distance function (SDF) parametrized by an
MLP is a common ingredient of neural surface reconstruc-
tion. We build on the successful recent method NeuS to ex-
tend it by three new components. The first component is
to borrow the tri-plane representation from EG3D and rep-
resent signed distance fields as a mixture of tri-planes and
MLPs instead of representing it with MLPs only. Using tri-
planes leads to a more expressive data structure but will
also introduce noise in the reconstructed surface. The sec-
ond component is to use a new type of positional encoding
with learnable weights to combat noise in the reconstruc-
tion process. We divide the features in the tri-plane into
multiple frequency scales and modulate them with sin and
cos functions of different frequencies. The third component
is to use learnable convolution operations on the tri-plane
features using self-attention convolution to produce features
with different frequency bands. The experiments show that
PET-NeuS achieves high-fidelity surface reconstruction on
standard datasets. Following previous work and using the
Chamfer metric as the most important way to measure sur-
face reconstruction quality, we are able to improve upon the
NeuS baseline by 57% on Nerf-synthetic (0.84 compared to
1.97) and by 15.5% on DTU (0.71 compared to 0.84). The
qualitative evaluation reveals how our method can better
control the interference of high-frequency noise.
| 1. Introduction
Implicit neural functions, or neural fields, have received
a lot of attention in recent research. The seminal paper
NeRF [25] combines neural fields with volume rendering,
enabling high-quality novel view synthesis. Inspired by
NeRF, NeuS [41] and V olSDF [44] introduce a signed dis-
tance function (SDF) into the volume rendering equation
and regularize the SDF, so that smooth surface models can
be reconstructed. However, these methods use pure MLP
networks to encode SDFs. Although these two methods can
reconstruct smooth surfaces, they both leave room for im-
provement when it comes to reconstructing surface details.
One research direction ( [5, 6, 26, 33, 46]) explores datastructures such as tri-planes or voxel grids that are suitable
to improve the NeRF framework, in terms of speed or recon-
struction quality. However, data structures that are success-
ful for novel view synthesis may not bring immediate suc-
cess when employed for surface reconstruction as shown in
the third column of Fig. 1. While a greater expressiveness to
encode local details is useful to better fit the input data, there
is also less inductive bias towards a smooth surface. There-
fore, noise during image acquisition, high-frequency shad-
ing, or high-frequency texture variations are more likely to
result in a noisy reconstructed surface.
In our work, we explore how to increase expressiveness
to encode local features while at the same time reducing the
impact of noise interference. We choose to build on the tri-
plane data structure since it consumes less memory and can
be easier scaled to higher resolutions.
In our work, we build on EG3D and NeuS to propose
a novel framework, called PET-NeuS. First, we propose a
method to integrate the tri-plane data structure into a sur-
face reconstruction framework in order to be able to model
an SDF with more local details. Second, since the features
between tri-plane pixels do not share learnable parameters,
we use positional encoding to modulate the tri-plane fea-
tures, thereby enhancing the smoothness of the learnable
features. Third, the positional encoding involves functions
of different frequencies. In order to better match differ-
ent frequencies, we propose to use multi-scale self-attention
convolution kernels with different window sizes to perform
convolution in the spatial domain to generate features of dif-
ferent frequency bands. This further increases the fidelity of
the surface reconstruction while suppressing noise.
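The sketch below illustrates one plausible form of this frequency-aware modulation of sampled tri-plane features; the per-band channel split, the use of the summed coordinate, and the frequency schedule are assumptions for illustration rather than the exact PET-NeuS formulation.

```python
import math
import torch
import torch.nn.functional as F

def modulated_triplane_features(planes, x, num_bands=4):
    # planes: three feature maps of shape (C, H, W) for the XY, XZ and YZ planes;
    # x: (N, 3) query points in [-1, 1]^3; C must be divisible by 2 * num_bands.
    coords = [x[:, [0, 1]], x[:, [0, 2]], x[:, [1, 2]]]
    feat = 0
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)                                   # (1, N, 1, 2)
        f = F.grid_sample(plane.unsqueeze(0), grid, align_corners=True)
        feat = feat + f.squeeze(0).squeeze(-1).t()                    # (N, C)

    # Split the channels into frequency bands and modulate each band with
    # sin/cos of the (summed) coordinates at increasing frequencies.
    bands = feat.chunk(2 * num_bands, dim=1)
    t = x.sum(dim=1, keepdim=True)
    out = []
    for i in range(num_bands):
        freq = (2.0 ** i) * math.pi
        out.append(bands[2 * i] * torch.sin(freq * t))
        out.append(bands[2 * i + 1] * torch.cos(freq * t))
    return torch.cat(out, dim=1)
```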
We experiment on two datasets to verify the effectiveness
of our method, the DTU dataset and the NeRF-Synthetic
dataset. Since the DTU dataset contains non-Lambertian
surfaces, the ability of the network to resist noise interfer-
ence can be verified. The NeRF-Synthetic dataset has many
sharp features, which can verify that our framework can ef-
fectively utilize its improved local expressiveness to better
reconstruct local details. We show superior performance
compared to state-of-the-art methods on both datasets.
In summary, our contributions are as follows:
• We propose to train neural implicit surfaces with a tri-
Figure 1. The challenge of using the tri-plane representation directly. First column: reference image. Second to the fifth column: NeuS,
Learning SDF using tri-planes, OURS without self-attention convolution, and OURS.
plane architecture to enable the reconstructed surfaces
to better preserve fine-grained local features.
• We derive a novel positional encoding strategy to be
used in conjunction with tri-plane features in order to
reduce noise interference.
• We utilize self-attention convolution to produce tri-
plane features with different frequency bands to match
the positional encoding of different frequencies, fur-
ther improving the fidelity of surface reconstruction.
|
Wei_iCLIP_Bridging_Image_Classification_and_Contrastive_Language-Image_Pre-Training_for_Visual_CVPR_2023 | Abstract
This paper presents a method that effectively com-
bines two prevalent visual recognition methods, i.e., image
classification and contrastive language-image pre-training,
dubbed iCLIP. Instead of naïve multi-task learning that uses
two separate heads for each task, we fuse the two tasks
in a deep fashion that adapts the image classification to
share the same formula and the same model weights with
the language-image pre-training. To further bridge these
two tasks, we propose to enhance the category names in
image classification tasks using external knowledge, such
as their descriptions in dictionaries. Extensive experiments
show that the proposed method combines the advantages
of two tasks well: the strong discrimination ability in im-
age classification tasks due to the clean category labels,
and the good zero-shot ability in CLIP tasks ascribed to
the richer semantics in the text descriptions. In particu-
lar, it reaches 82.9% top-1 accuracy on IN-1K, and mean-
while surpasses CLIP by 1.8%, with similar model size, on
zero-shot recognition of Kornblith 12-dataset benchmark.
The code and models are publicly available at https:
//github.com/weiyx16/iCLIP .
| 1. Introduction
Image classification is a classic visual problem whose
goal is to classify images into a fixed set of pre-defined cat-
egories. For example, the widely used ImageNet dataset [8]
carefully annotated 14 million images and categorize them
into 21,841 categories chosen from the WordNet [36]. For
image classification, each category provides a clear taxon-
omy that groups images of the same category together and
separates images from different categories, and thus endows
the learnt representation with strong discriminant ability.
However, this classification ability is limited to a fixed set
of categories [8, 29, 51].
*Corresponding Author. The work was done when Yixuan Wei, Zhuliang Yao, and Zhenda Xie were interns at Microsoft Research Asia.
Figure 1. An illustration of the proposed iCLIP framework. The
iCLIP framework can take two types of annotations for training:
classes and alt-texts. It converts the conventional image classifi-
cation formula to share the same text encoder and the same co-
sine classifier as that used in the contrastive language-image pre-
training (CLIP). It also uses a dictionary-enhanced approach to
enrich the original class names in the image classification prob-
lem with external information involved in dictionaries. The deep
fusion and knowledge-enriched classes both greatly improve the
performance compared to naïve multi-task learning or performing
one of the two tasks alone.
Recently, the method that learns to contrast image-text
pairs, known as contrastive language-image pre-training
(abbr. CLIP), has well made up such shortage of the con-
ventional image classification methods to achieve strong
zero-shot recognition ability [24, 44]. These methods em-
ploy a contrastive learning framework, where images and
their corresponding alt-texts are treated as positive pairs,
while images with all other alt-texts are treated as negative
pairs. Thanks to the rich semantics involved in the alt-texts,
the images can be weakly connected to almost arbitrary cat-
egories that already appear in the alt-texts, resulting in its
zero-shot ability. A drawback is that the image-text pairs
are usually crawled from the internet without human label-
ing, leading to their noisy and ambiguous nature. Thus the
learnt representations are often not conceptual compact, and
may lack certain discriminative ability.
This paper explores how to effectively combine these
two powerful visual recognition and representation learn-
ing methods, to take advantage of both methods and data sources while relieving their shortcomings. We first try a naïve
multi-task learning framework that applies the original head
networks of the two tasks on top of a shared visual encoder,
and jointly learn the network with separate losses of the two
tasks. This na ¨ıve multi-task learning approach has been able
to benefit each individual task, but the effect is marginal.
We thus seek to fuse the two tasks more deeply, so that the
advantages of the two tasks can be more effectively joined
for better visual recognition, as well as for better transfer-
able representations.
To this end, our first technique is to deeply unify the
formulations of image classification and CLIP learning.
By examining their formulations, we found there are two
main differences: 1) Different classification losses. Image
classification tasks typically use a linear classification loss
which has better fitting ability due to the non-normalized
nature, while the CLIP-based methods adopt a cosine clas-
sifier which has better transferability for new domains and
categories [2, 6, 9, 18, 38, 57]. 2) Different parameteriza-
tion methods for classifier weights. Image classification
tasks usually directly optimize the parametric classification
weights without a need to process text semantics in class
names. The CLIP method can be regarded as generating
classifier weights through a text encoder and learns the text
encoder instead. The text-encoder-based classifier allows
sharing between alt-texts as well as modeling their relation-
ships, which enables the ability to tackle any classes.
Although the linear classifier and direct classifier weight
parameterization have been common practice in image clas-
sification for many years, it is interesting to find that chang-
ing the old formulation as that in the CLIP approach has
almost no performance degradation for pure image classifi-
cation problems. This indicates that we can directly adapt
the image classification formulation to the cosine classifier
and the text encoder parameterization used by CLIP, with
almost no loss. This also allows us to further share the text
encoder for both class names and alt-texts. Our experiments
show that this deep fusion approach performs much better
than the naïve multi-task method for both in-domain/zero-
shot classification and multi-modal retrieval tasks
(see 3).
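A minimal sketch of this shared text-encoder, cosine-classifier formulation is given below; the temperature value and function names are assumptions, and the text encoder is treated as a black box.

```python
import torch
import torch.nn.functional as F

def cosine_classify(image_feats, class_texts, text_encoder, temperature=0.07):
    # image_feats: (B, D) from the shared visual encoder.
    # class_texts: list of C class strings (optionally dictionary-enhanced).
    # The classifier weights are generated by the shared text encoder instead of
    # being free parameters, so class names and alt-texts use the same head.
    w = F.normalize(text_encoder(class_texts), dim=-1)     # (C, D)
    v = F.normalize(image_feats, dim=-1)                   # (B, D)
    return v @ w.t() / temperature                         # (B, C) logits
```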
Another gap between the image classification and CLIP
lies in the different text richness. Class names are usually
in short, i.e., one or a few words, and sometimes are even
ambiguous and polysemous in referring to specific seman-
tics, for example, “ night bird ” can represents either “ owl”
or “nightingale ”. On the contrary, alt-texts in CLIP are usu-
ally full sentences containing rich information. To further
bridge the gap between the image classification and CLIP,
we propose a second technique that leverages the knowledgebase to enhance the original class names, such as the expla-
nations in dictionaries. In our implementation, knowledge
is simply encoded as a prefix/suffix prompt, as illustrated in
Fig 1. Although simple, dictionary enhanced method shows
to maintain the accuracy for pure image classification prob-
lem (see Table 1), while greatly improve the zero-shot and
multi-modal retrieval performance as shown in Table 2 and
3. Note the process is just like human beings who learn new
words or concepts through both real examples and explana-
tions in dictionaries.
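A minimal sketch of such a dictionary-enhanced prompt, using the "night bird" example from Figure 1, might look as follows; the prompt template and the gloss lookup are illustrative assumptions.

```python
def dictionary_enhanced_prompt(class_name: str, dictionary: dict) -> str:
    # Encode the dictionary explanation as a suffix of a standard prompt template;
    # both the template and the gloss source are illustrative choices.
    gloss = dictionary.get(class_name, "")
    prompt = f"a photo of a {class_name}"
    return f"{prompt}, {gloss}" if gloss else prompt

# e.g. {"night bird": "any bird associated with night, like owl"} yields
# "a photo of a night bird, any bird associated with night, like owl"
```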
By these techniques, we present a framework that deeply
fuses the two important tasks of image classification and
contrastive language-image pre-training, dubbed iCLIP. Ex-
tensive experiments using different combinations of image
classification and image-text pair datasets show that the
iCLIP method can take advantage of both the discrimina-
tive power of image classification tasks and the zero-shot
ability in CLIP-like tasks, and perform significantly bet-
ter than conducting each task alone or the naïve multi-task
learning in both the in-domain/zero-shot classification and
multi-modal retrieval problems. The iCLIP method also
learns a stronger transferable representation
than using each of the two tasks alone, verified on a vari-
ety of downstream tasks, including ADE20K semantic seg-
mentation [68], LVIS long-tail detection [17], and video ac-
tion recognition [26], as well as different evaluation settings
of few-shot and fine-tuning. Our contributions are summa-
rized as follows:
• We combined two important vision tasks of im-
age classification and contrastive language-image pre-
training into a single framework.
• We found that the original image classification for-
mulation can be adapted to CLIP approach with al-
most no performance degradation. With this finding,
we present a deep fusion approach in which the two
tasks share the same text encoder and the same clas-
sifier type, whose effectiveness is extensively verified
on benchmarks.
• We proposed a simple yet effective method to intro-
duce knowledge bases into image classification, ad-
dressing the ambiguous and polysemous issue of the
originally short image names as well as further bridges
the gap between classes and alt-texts. It also provides
the first showcase of applying knowledge bases into
computer vision problems.
|
Wei_LEGO-Net_Learning_Regular_Rearrangements_of_Objects_in_Rooms_CVPR_2023 | Abstract
Humans universally dislike the task of cleaning up a
messy room. If machines were to help us with this task,
they must understand human criteria for regular arrange-
ments, such as several types of symmetry, co-linearity
or co-circularity, spacing uniformity in linear or circu-
lar patterns, and further inter-object relationships that re-
late to style and functionality. Previous approaches for
this task relied on human input to explicitly specify goal
state, or synthesized scenes from scratch – but such meth-
ods do not address the rearrangement of existing messy
scenes without providing a goal state. In this paper, we
present LEGO-Net, a data-driven transformer-based it-
erative method for LEarning reGular rearrangement of
Objects in messy rooms. LEGO-Net is partly inspired by
diffusion models – it starts with an initial messy state and
iteratively “de-noises” the position and orientation of ob-
jects to a regular state while reducing distance traveled.
Given randomly perturbed object positions and orientations
∗Core contribution.in an existing dataset of professionally-arranged scenes,
our method is trained to recover a regular re-arrangement.
Results demonstrate that our method is able to reliably re-
arrange room scenes and outperform other methods. We
additionally propose a metric for evaluating regularity in
room arrangements using number-theoretic machinery.
| 1. Introduction
What makes the arrangement of furniture and objects in
a room appear regular? While exact preferences may vary,
humans have by-and-large universally shared criteria of reg-
ular room arrangements: for instance, heavy cabinets are
arranged to align with walls, chairs are positioned evenly
around a table in linear or circular configurations, or night
stands are placed symmetrically on the two sides of a bed.
Humans also share a common dislike of physically perform-
ing the task of rearranging a messy room. To build auto-
mated robotic systems that can guide or actually rearrange
objects in a room, we first need methods that understand the
shared human criteria for regular room rearrangements and
respect the physical constraints of rearrangements.
Human criteria for regular rearrangements can be sub-
tle and complex, including geometric rules of reflexional,
translational, or rotational symmetry, linear or circular
alignments, and spacing uniformity. Functional and stylistic
inter-object relationships are also important: for example, a
TV tends to be in front of and facing a sofa, chairs are next
to a table, etc. Many of these criteria interact and, at times,
conflict with one another. As a result, in general, there is
more than one desirable clean arrangement for any given
messy arrangement. In our setting, we further desire that
the clean rearrangement we create to be informed by the
initial messy arrangement – and not be entirely different –
for multiple reasons. First, there may have been a particular
clean arrangement that gave rise to the messy one – and it
may be desirable to recover a similar arrangement. Second,
we want to minimize the motion of objects as much as pos-
sible to respect the physical constraints and effort involved
– especially the motion of big and heavy furniture. Unfortu-
nately, extant methods fail to capture these criteria: methods
for scene synthesis from scratch [25, 29, 37, 69, 70, 72] ig-
nore the initial state of objects in a room, and rearrangement
methods often require scene-specific human input in the
form of a goal state [1, 46] or language description [27, 47].
In this paper, we present LEGO-Net, a method for
LEarning reGular rearrangement of Objects in rooms di-
rectly from data. Different from work that focuses on ar-
ranging new objects from scratch or requires goal state
specification, we focus on rearranging existing objects
without any additional input at inference time. We take as
input the position, orientation, class label, and extents of
room objects in a specific arrangement, and output a room
with the same objects but regularly re-arranged. LEGO-Net
uses a transformer-based architecture [53] that is, in part,
motivated by recent denoising diffusion probabilistic mod-
els that learn a reverse diffusion process for generative mod-
eling [16, 49, 50]. We learn human criteria for regular rear-
rangements from a dataset of professionally designed clean
(regular) scenes [15], and represent each scene as a collec-
tion of objects and a floor plan. Prior to training, we perturb
the regular scenes to generate noisy configurations. During
training, our transformer learns to predict the original, de-
noised arrangement from the perturbed scene and its floor
plan. During inference, instead of directly re-arranging
scenes with our model, which would amount to naïve re-
gression, we run a Langevin dynamics-like reverse process
to iteratively denoise object positions and orientations. This
iterative process retains the flavor of original room state,
while limiting object movement during re-arrangement.
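A rough sketch of such an iterative, Langevin-like inference loop is shown below; the object-state parametrization, step size, and noise schedule are assumptions, not the schedule actually used by LEGO-Net.

```python
import torch

@torch.no_grad()
def iterative_rearrange(model, scene, floor_plan, steps=100, step_size=0.1, noise=0.01):
    # scene: (N, 4) per-object state [x, y, cos(theta), sin(theta)]; class labels
    # and extents are assumed to be passed to `model` alongside this state.
    # A Langevin-dynamics-like loop: objects take small steps toward the
    # transformer's denoised prediction, with an annealed amount of injected noise.
    state = scene.clone()
    for t in range(steps):
        denoised = model(state, floor_plan)              # predicted clean arrangement
        sigma = noise * (1.0 - t / steps)                # decaying noise scale
        state = state + step_size * (denoised - state) + sigma * torch.randn_like(state)
    return state
```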
We conduct extensive experiments on public datasets to
show that our approach realistically rearranges noisy scene
arrangements, while respecting initial object positions. We
also demonstrate that our method is able to generalize to
previously unseen collection of objects in a wide variety offloor plans. Furthermore, we include extensive experimen-
tal results (e.g., Fig. 1 and Fig. 4), including a new metric to
evaluate regularity of re-arrangements, aimed at measuring
the presence of sparse linear integer relationships among
object positions in the final state (using the PSLQ algo-
rithm [13]). To sum up, we contribute:
• A generalizable, data-driven method that learns to reg-
ularly re-arrange the position and orientation of ob-
jects in various kinds of messy rooms.
• An iterative approach to re-arrangement at inference
time that retains flavor of the original arrangement and
minimizes object travel distance.
• An in-depth analysis of the performance and charac-
teristics of the denoising-based scene rearrangement.
• A new metric to measure the regularity of object ar-
rangements based on integer relation algorithms.
|
Wang_Few-Shot_Learning_With_Visual_Distribution_Calibration_and_Cross-Modal_Distribution_Alignment_CVPR_2023 | Abstract
Pre-trained vision-language models have inspired much
research on few-shot learning. However, with only a
few training images, there exist two crucial problems:
(1) the visual feature distributions are easily distracted
by class-irrelevant information in images, and (2) the
alignment between the visual and language feature distri-
butions is difficult. To deal with the distraction problem,
we propose a Selective Attack module, which consists of
trainable adapters that generate spatial attention maps
of images to guide the attacks on class-irrelevant image
areas. By messing up these areas, the critical features are
captured and the visual distributions of image features are
calibrated. To better align the visual and language feature
distributions that describe the same object class, we pro-
pose a cross-modal distribution alignment module, in which
we introduce a vision-language prototype for each class
to align the distributions, and adopt the Earth Mover’s
Distance (EMD) to optimize the prototypes. For efficient
computation, the upper bound of EMD is derived. In
addition, we propose an augmentation strategy to increase
the diversity of the images and the text prompts, which
can reduce overfitting to the few-shot training images.
Extensive experiments on 11 datasets demonstrate that
our method consistently outperforms prior arts in few-shot
learning. The implementation code will be available at
https://gitee.com/mindspore/models/tree/master/research/cv/SADA.
| 1. Introduction
Thanks to the availability of large-scale datasets and
well-designed training strategies, the performances of many
computer vision tasks have been greatly improved. Re-
cent progress in vision-language models (VLMs), such as
*Co-first author.
†Corresponding author.
CLIP [29] and ALIGN [17], provides a promising way
towards utilizing human language to address downstream
recognition tasks efficiently. As vision and language usually
contain complementary information, joint learning of image
and text representations has proven quite effective. Al-
though CLIP has demonstrated impressive zero-shot learn-
ing capability, it is still challenging to better adapt it to
downstream tasks. Naively fine-tuning CLIP on down-
stream datasets has limited effect, since it may destroy the
prior learned from the massive data during pre-training.
Therefore, effective transfer methods are needed to boost
the downstream performances of CLIP. In order to main-
tain the capability of pre-trained VLMs and further boost
downstream performances, different approaches have been
proposed to fine-tune a small proportion of additional pa-
rameters while keeping the pre-trained parameters frozen.
Among these approaches, prompt learning [42, 43] and vi-
sual adapters [13, 41] are two common approaches. How-
ever, the lack of training samples in few-shot settings
increases the risk of overfitting the trained prompts or
adapters. The class-irrelevant features ( e.g., the cluttered
image backgrounds) drive the image features far away from
their true distributions of the same category. Besides, VLMs
such as CLIP have such a problem that the distributions of
the image and text features are not really aligned [30], and
the problem becomes more challenging in few-shot settings.
Therefore, the visual distributions should be calibrated by
reducing class-irrelevant image contents, and the distribu-
tions of image and text features should be further aligned,
so as to promote the model’s learning of class-relevant crit-
ical features. The purpose of this paper is to develop an ef-
fective VLM transfer strategy for few-shot learning to solve
the above problems with Selective Attack (SA) and Cross-
Modal Distribution Alignment (CMDA).
Images often contain class-irrelevant information, which
is also embedded into the image representations. With only
a few samples, the model can easily learn these cluttered
representations, resulting in overfitting. This seriously hin-
ders the learning of critical features that help the model rec-
Figure 1. (a) The t-SNE [35] visualization of the image feature distribution before Selective Attack, where the features are obtained by
the CLIP image encoder on the CIFAR10 dataset. The dots in different colors represent different classes of the image features. (b) After
Selective Attack, the intra-class distribution is significantly more compact. (c) The distribution histograms of image features and text
features of the same class (‘bird’) on CIFAR10 before CMDA, where the horizontal axis denotes the value of each element of the feature
vectors, and the vertical axis denotes the number of elements. (d) After CMDA, the difference between the two distributions is significantly
reduced.
ognize unseen samples. To solve this problem, we propose
the SA module, which consists of two trainable adapters
that generate a kernelized attention map to locate the class-
irrelevant areas of the images. The attention is adopted to
guide Gaussian perturbations to attack images before they
are fed into the image encoder. By messing up these class-
irrelevant image contents through SA, we facilitate the
model’s learning of truly critical features that can be trans-
ferred to recognize new samples within the same category.
As an example in Figs. 1 (a) and (b), after Selective Attack
(SA), the distributions of the image features are calibrated,
and the intra-class features become obviously more clus-
tered.
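As a rough illustration of this idea (not the exact SA module of this paper), the sketch below perturbs an image with Gaussian noise weighted by an attention map produced by a small trainable adapter; the adapter architecture, the noise scale `sigma`, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelectiveAttackSketch(nn.Module):
    """Illustrative only: add Gaussian noise where an attention map is high,
    so class-irrelevant regions are perturbed before the (frozen) image encoder."""

    def __init__(self, channels: int = 3, sigma: float = 0.1):
        super().__init__()
        # A small trainable adapter that predicts a [0, 1] map marking regions
        # to attack (assumed design, not the paper's exact adapters).
        self.adapter = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.sigma = sigma

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        attn = self.adapter(images)                  # (B, 1, H, W); higher = attack more
        noise = torch.randn_like(images) * self.sigma
        return images + attn * noise                 # perturb class-irrelevant content


if __name__ == "__main__":
    x = torch.rand(2, 3, 224, 224)
    attacked = SelectiveAttackSketch()(x)
    print(attacked.shape)  # torch.Size([2, 3, 224, 224])
```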
Another challenge is that the distributions of the image
and the text representations of the same class are not truly
aligned in CLIP [30] as shown in Fig. 1(c). The unaligned
distributions lead to inaccurate similarity calculations be-
tween image features and text features during inference, re-
sulting in incorrect predictions. The lack of samples in few-
shot settings further makes the problem even more serious.
To address it, we propose a CMDA module, in which we
construct a Vision-Language Prototype (VLP) for each class
to promote the cross-modal distribution alignment. Specif-
ically, the element values of VLP are initialized by aver-
aging all the image representations from the corresponding
class. During training, each VLP is optimized by reducing
its distance to the language prototype (defined in Sec. 3.4)
of the same class, thus promoting the cross-modal distri-
bution alignment. The Earth Mover’s Distance (EMD) is a
suitable metric for the alignment, which can not only reflect
the similarity between two distributions but also represent
the minimal transmission cost [40]. We derive a concise
upper bound of the EMD distance, which can balance the performance and computational consumption. As shown
in Figs. 1 (c) and (d), the effect of Cross-Modal Distribu-
tion Alignment (CMDA) is obvious that the difference be-
tween the image and text feature distributions is effectively
reduced. In this way, the image features after CMDA can
be better predicted by the text features.
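To make this concrete, here is a hedged sketch of a distribution-alignment loss between a vision-language prototype and a language prototype. It summarizes each prototype's feature dimensions as a 1-D Gaussian and uses the closed-form squared 2-Wasserstein distance between Gaussians as a cheap surrogate for EMD; this only illustrates the kind of bound the paper derives, it is not the paper's exact formula.

```python
import torch

def gaussian_emd_surrogate(vlp: torch.Tensor, lang_proto: torch.Tensor) -> torch.Tensor:
    """Illustrative alignment loss between two prototype batches.

    vlp, lang_proto: (num_classes, dim) vision-language and language prototypes.
    Each prototype is summarized by the mean and std of its feature values, and the
    squared 2-Wasserstein distance between the two 1-D Gaussians,
    (mu1 - mu2)^2 + (sigma1 - sigma2)^2, is used as a tractable stand-in for EMD.
    """
    mu_v, sigma_v = vlp.mean(dim=1), vlp.std(dim=1)
    mu_t, sigma_t = lang_proto.mean(dim=1), lang_proto.std(dim=1)
    w2_sq = (mu_v - mu_t) ** 2 + (sigma_v - sigma_t) ** 2
    return w2_sq.mean()


if __name__ == "__main__":
    vlp = torch.randn(10, 512, requires_grad=True)   # one learnable prototype per class
    lang = torch.randn(10, 512)                      # frozen language prototypes
    loss = gaussian_emd_surrogate(vlp, lang)
    loss.backward()
    print(float(loss))
```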
Automatic prompt learning for pre-trained VLMs has
been proposed to reduce the expensive cost of hand-crafted
prompt engineering [43]. However, the learned prompts
may suffer from more overfitting than manual prompts [42].
Therefore, instead of learning one soft prompt, we learn a
distribution over a collection of prompts, as in ProDA [23].
Moreover, we introduce an augmentation strategy to in-
crease the diversity of the images and the prompts. Specifi-
cally, we search for the four best augmentations from a col-
lection of predefined ones. Using these operations, each
image is augmented into four different forms. The collec-
tion of prompts is also divided into four groups, with each
group trained by images in the corresponding augmentation
form. Through the strategy, we improve the diversity of the
images and the prompts, and fully excavate the semantic in-
formation in the prompts. The framework of our method
is shown in Fig. 2. Our contributions are summarized as
follows:
• We conduct Selective Attack on the class-irrelevant
regions of images with the guidance of the attention
generated by two trainable adapters to facilitate the
model’s learning of class-related features, which cal-
ibrates the visual distributions.
• We propose Cross-Modal Distribution Alignment op-
timized by an EMD loss. The upper bound of EMD
for Gaussian distribution is further derived for compu-
tation efficiency.
Figure 2. Overview of our framework. We introduce a Selective Attack module to reduce the intra-class distances of image features
during training. We also design a Cross-Modal Distribution Alignment (CMDA) module to align the distributions of image and text
representations. During training, the trainable parameters are denoted in orange and the encoders of CLIP are frozen. J: the number of
augmentations; Ⓢ: cosine similarity computation; ⊛: element-wise product.
• We present an augmentation strategy to reduce overfit-
ting and increase the diversity of images and prompts.
• Our method outperforms prior arts in few-shot learning
on 11 benchmarks.
|
Weng_3D_Human_Keypoints_Estimation_From_Point_Clouds_in_the_Wild_CVPR_2023 | Abstract
Training a 3D human keypoint detector from point
clouds in a supervised manner requires large volumes of
high quality labels. While it is relatively easy to capture
large amounts of human point clouds, annotating 3D key-
points is expensive, subjective, error prone and especially
difficult for long-tail cases (pedestrians with rare poses,
scooterists, etc.). In this work, we propose GC-KPL -
Geometry Consistency inspired Key Point Learning, an ap-
proach for learning 3D human joint locations from point
clouds without human labels. We achieve this by our novel
unsupervised loss formulations that account for the struc-
ture and movement of the human body. We show that by
training on a large training set from Waymo Open Dataset
[21] without any human annotated keypoints, we are able
to achieve reasonable performance as compared to the fully
supervised approach. Further, the backbone benefits from
the unsupervised training and is useful in downstream few-
shot learning of keypoints, where fine-tuning on only 10 per-
cent of the labeled training data gives comparable perfor-
mance to fine-tuning on the entire set. We demonstrate that
GC-KPL outperforms SoTA by a large margin when
trained on the entire dataset and efficiently leverages large vol-
umes of unlabeled data.
| 1. Introduction
Estimation of human pose in 3D is an important prob-
lem in computer vision and it has a wide range of appli-
cations including AR/VR, AI-assisted healthcare, and au-
tonomous driving [4, 29, 32]. For autonomous systems, be-
ing able to perceive human poses from sensor data ( e.g. Li-
DAR point clouds) is particularly essential to reason about
the surrounding environment and make safe maneuvers.
Despite the high level of interest in human pose estima-
tion in the wild, only a few papers have approached outdoor 3D
keypoint detection using point cloud. A main reason is that
*Work done as an intern at Waymo.
[Figure 1 diagram: in-the-wild point cloud → Transformer encoder → keypoint locations learned with unsupervised losses; the pre-trained backbone plus an MLP is then fine-tuned downstream on a small amount of labeled point clouds to predict 3D keypoints.]
Figure 1. We present GC-KPL, a novel method for learning 3D human
keypoints from in-the-wild point clouds without any human labels. We
propose to learn keypoint locations using unsupervised losses that account
for the structure and movement of the human body. The backbone learns
useful semantics from unsupervised learning and can be used in down-
stream fine-tuning tasks to boost the performance of 3D keypoint estima-
tion.
training a pedestrian pose estimation model requires a large
amount of high-quality in-the-wild data with ground truth
labels. Annotating 3D human keypoints on point cloud data
is expensive, time consuming and error prone. Although
there are a few existing point cloud datasets with ground
truth human poses [11, 13, 21], they are limited in terms
of the quantity of the 3D annotations and diversity of the
data. Therefore, fully-supervised human keypoint detectors
trained on such datasets do not generalize well for long tail
cases. For this reason, previous approaches on pedestrian
3D keypoint estimation have mainly focused on utilizing 2D
weak supervision [4, 32] which is easier to obtain, or lever-
aging signals from other modalities ( e.g. RGB, depth) [29].
Nonetheless, there is a lot of useful information in the large
amount of unlabeled LiDAR data that previous works on
human pose estimation have not made an effort to utilize.
In this work, we propose a novel and effective method for
learning 3D human keypoints from in-the-wild point clouds
without using any manual labeled 3D keypoints. Our ap-
proach is built on top of the key observation that human
skeletons are roughly centered within approximately rigid
body parts and that the location and movement of the sur-
face points should explain the movement of the skeleton and
vice versa. To that end, we design novel unsupervised loss
terms for learning locations of the 3D keypoints/skeleton
within human point clouds which correspond to 3D loca-
tions of major joints of human body.
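As one hypothetical instantiation of this observation (the paper defines its actual loss terms later in the method; the function and names below are illustrative assumptions only), a joint predicted for a body part can be pulled toward the centroid of the points segmented as that part:

```python
import torch

def part_centering_loss(joints: torch.Tensor,
                        points: torch.Tensor,
                        part_labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical unsupervised term: each predicted joint should lie near the
    centroid of the surface points assigned to its (approximately rigid) part.

    joints:      (J, 3) predicted 3D joint locations, one per body part.
    points:      (N, 3) human point cloud.
    part_labels: (N,) integer part id in [0, J) from a part-segmentation model.
    """
    loss = joints.new_zeros(())
    num_parts = joints.shape[0]
    for j in range(num_parts):
        mask = part_labels == j
        if mask.any():
            centroid = points[mask].mean(dim=0)          # center of this rigid part
            loss = loss + ((joints[j] - centroid) ** 2).sum()
    return loss / num_parts


if __name__ == "__main__":
    pts = torch.randn(1024, 3)
    labels = torch.randint(0, 14, (1024,))
    joints = torch.randn(14, 3, requires_grad=True)
    part_centering_loss(joints, pts, labels).backward()   # gradients reach the joints
```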
In the proposed method, we first train a transformer-
based regression model for predicting keypoints and a se-
mantic segmentation model for localizing body parts on a
synthetic data constructed from randomly posed SMPL hu-
man body model [15]. Then, we train on the entire Waymo
Open Dataset [21] without using any 3D ground-truth anno-
tation of human keypoints. Through unsupervised training,
keypoint predictions are refined and the backbone learns
useful information from a large amount of unannotated data.
In summary, we make the following contributions:
• We present GC-KPL, a method for learning human
3D keypoints for in-the-wild point clouds without any
manual keypoint annotations.
• Drawing insight from the structure and movement of
the human body, we propose three effective and novel
unsupervised losses for refining keypoints. We show
that the proposed losses are effective for unsupervised
keypoint learning on Waymo Open Dataset.
• Through downstream fine-tuning/few-shot experi-
ments, we demonstrate that GC-KPL can be used as
unsupervised representation learning for human point
clouds, which opens up the possibility to utilize a prac-
tically infinite amount of sensor data to improve hu-
man pose understanding in autonomous driving.
|
Xu_Learning_Imbalanced_Data_With_Vision_Transformers_CVPR_2023 | Abstract
Real-world data tends to be heavily imbalanced and
severely skews data-driven deep neural networks, which
makes Long-Tailed Recognition (LTR) a massively challeng-
ing task. Existing LTR methods seldom train Vision Trans-
formers (ViTs) with Long-Tailed (LT) data, while the off-
the-shelf pretrained weights of ViTs always lead to unfair
comparisons. In this paper, we systematically investigate
the ViTs’ performance in LTR and propose LiVT to train
ViTs from scratch only with LT data. With the observa-
tion that ViTs suffer more severe LTR problems, we con-
duct Masked Generative Pretraining (MGP) to learn gener-
alized features. With ample and solid evidence, we show
that MGP is more robust than supervised pretraining. Al-
though Binary Cross Entropy (BCE) loss performs well with
ViTs, it struggles on the LTR tasks. We further propose
the balanced BCE to ameliorate it with strong theoreti-
cal grounding. Specifically, we derive the unbiased exten-
sion of Sigmoid and compensate extra logit margins for de-
ploying it. Our Bal-BCE contributes to the quick conver-
gence of ViTs in just a few epochs. Extensive experiments
demonstrate that with MGP and Bal-BCE, LiVT success-
fully trains ViTs well without any additional data and out-
performs comparable state-of-the-art methods significantly,
e.g., our ViT-B achieves 81.0% Top-1 accuracy in iNatural-
ist 2018 without bells and whistles. Code is available at
https://github.com/XuZhengzhuo/LiVT .
| 1. Introduction
With the vast success in the computer vision field, Vision
Transformers (ViTs) [15, 43] get increasingly popular and
have been widely used in visual recognition [15], detec-
tion [5], and video analysis [16]. These models are heavily
dependent on large-scale and balanced data to avoid overfit-
ting [39,52,82]. However, real-world data usually confronts
severe class-imbalance problems, i.e., most labels (tail) are
associated with limited instances while a few categories
(head) occupy dominant samples. The models simply clas-
sify images into head classes for lower error because the
[Figure 1 plot: Top-1 Accuracy (%) vs. Parameters (M); methods shown: Baseline, NCL, RIDE, DeiT, MAE, LiVT (Ours), LiVT/384 (Ours) across ViT-Tiny/Small/Base/Large, plus R50×3, R50 RIDE-4E, and R50 NCL baselines [13, 18, 34, 55, 61].]
Figure 1. Top-1 Acc v.s. Model Size on ImageNet-LT dataset.
We choose the Tiny / Small / Base / Large ViT and multi-expert
approaches. R50 represents the ResNet50 model. ViT-Base gets
lower Acc than ResNet50 when trained in a supervised manner.
head always overwhelms tail ones in LTR. The data paucity
also results in the model overfitting on the tail with unac-
ceptable generalization. The aforementioned problems make
Long-Tailed Recognition (LTR) a challenging task.
Numerous papers [4,13,22,34,35,44,70] handle the LTR
problem with traditional supervised cross-entropy learning
based on ResNet [20] or its derivatives [68]. Some meth-
ods use ViTs with pretrained weights on ImageNet [52] (or
larger datasets), which leads to unfair comparisons with ad-
ditional data, e.g.on ImageNet-LT (a subset of ImageNet-
1K) benchmark. Moreover, there are still limited explo-
rations on the utilization of Long-Tailed (LT) data to train
ViTs effectively. Therefore, in this paper, we try to train
ViTs from scratch with LT data . We observe that it is par-
ticularly difficult to train ViT with LT labels’ supervision.
As Tab. 1 shows, ViTs degrade heavily when training data
become skewed. ViT-B is much worse than ResNet50 with
the same CE training manner ( c.f. Fig. 1). One reason-
able explanation is that ViTs require longer training to learn
the inductive bias, while CNNs offer the built-in translation
invariance implicitly. Yet another one lies in the label sta-
tistical bias in the LTR datasets, which confuses models to
make predictions with an inherent bias to the head [12, 47].
The well-trained ViTs have to overcome the above plights
simultaneously to avoid falling into dilemmas.
Inspired by decoupling [29], many methods [9, 12, 60,
80, 83] attempt to enhance feature extraction in supervised
manners like mixup [74] / remix [9], or Self-Supervised
Learning (SSL) like Contrastive Learning (CL) [7,19]. Liu
et al. [41] claim that SSL representations are more robust to
class imbalance than supervised ones, which inspires us to
train ViTs with SSL. However, CL is quite challenging due to its
extensive memory requisition and convergence difficulties [8],
where more explorations are required to work well with
ViTs in LTR. In contrast, we propose to Learn imbalanced
data with ViTs (LiVT) by Masked Generative Pretraining
(MGP) and Balanced FineTuning (BFT).
Firstly, LiVT adopts MGP to enhance ViTs’ feature ex-
traction, which has been proven effective on BeiT [2] and
MAE [18]. It reconstructs the masked region of images
with an extra lightweight decoder. We observe that MGP
is stable with ViTs and robust enough to LT data with em-
pirical evidence. Regardless of the label distribution, a compa-
rable number of training images brings similar feature
extraction ability, which greatly alleviates the toxic effect
of LT labels [26]. Meanwhile, the training is accelerated by
masked tokens with acceptable memory requisition.
Secondly, LiVT trains the downstream head with rebal-
ancing strategies to utilize annotation information, which
is consistent with [29, 35, 80]. Generally, Binary Cross-
Entropy (BCE) loss performs better than Cross-Entropy
loss when collaborating with ViTs [55]. However, it fails
to catch up with the widely adopted Balanced Cross-Entropy
(Bal-CE) loss and shows severe training instability in LTR.
We propose the Balanced BCE (Bal-BCE) loss to revise the
mismatched margins given by Bal-CE. Detailed and solid the-
oretical derivations are provided from Bayesian theory. Our
Bal-BCE ameliorates BCE by a large margin and achieves
state-of-the-art (SOTA) performance with ViTs.
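The general recipe can be sketched as follows: class-prior log terms are added to the logits before the sigmoid so that BCE is debiased toward the tail. The margin used below, log(prior) - log(1 - prior), is one reasonable reading of an "unbiased sigmoid"; the paper's exact margin follows from its own Bayesian derivation, so treat this as an illustration rather than the official implementation.

```python
import torch
import torch.nn.functional as F

def balanced_bce_sketch(logits: torch.Tensor,
                        targets: torch.Tensor,
                        class_counts: torch.Tensor) -> torch.Tensor:
    """Illustrative balanced BCE: shift each class logit by a prior-dependent margin.

    logits:       (B, C) raw outputs.
    targets:      (B, C) one-hot or multi-hot labels in {0, 1}.
    class_counts: (C,) number of training samples per class.
    """
    prior = class_counts.float() / class_counts.sum()
    margin = torch.log(prior) - torch.log1p(-prior)      # assumed margin form
    return F.binary_cross_entropy_with_logits(logits + margin, targets)


if __name__ == "__main__":
    counts = torch.tensor([1000, 100, 10])               # long-tailed class frequencies
    logits = torch.randn(4, 3)
    labels = F.one_hot(torch.tensor([0, 1, 2, 2]), 3).float()
    print(float(balanced_bce_sketch(logits, labels, counts)))
```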
Extensive experiments show that LiVT learns LT data
more efficiently and outperforms vanilla ViT [15], DeiT
III [55], and MAE [18] remarkably. As detailed compar-
isons in Fig. 1, LiVT achieves SOTA on ImageNet-LT with
affordable parameters, despite that ImageNet-LT is a rel-
atively small dataset for ViTs. The ViT-Small [55] also
achieves outstanding performance compared to ResNet50.
Our key contributions are summarized as follows.
• To our best knowledge, we are the first to investigate
training ViTs from scratch with LT data systematically.
• We pinpoint that the masked generative pretraining is
robust to LT data, which avoids the toxic influence of
imbalanced labels on feature learning.
• With a solid theoretical grounding, we propose the bal-
anced version of BCE loss (Bal-BCE), which improves
the vanilla BCE by a large margin in LTR.
• We propose LiVT recipe to train ViTs from scratch,
and the performance of LiVT achieves state-of-the-art
across various benchmarks for long-tailed recognition.
Table 1. Top-1 accuracy (%) of different recipes to train ViT-B-16
from scratch on ImageNet-LT/BAL. All perform much worse on
LT than BAL. See descriptions of LT & BAL in section 5.1.
Dataset ViT ∆ DeiT III ∆ MAE ∆
ImageNet-BAL 38.7 - 67.2 - 69.2 -
ImageNet-LT 31.6 -7.0 48.4 -18.8 54.5 -14.7
|
Xu_EqMotion_Equivariant_Multi-Agent_Motion_Prediction_With_Invariant_Interaction_Reasoning_CVPR_2023 | Abstract
Learning to predict agent motions with relationship rea-
soning is important for many applications. In motion pre-
diction tasks, maintaining motion equivariance under Eu-
clidean geometric transformations and invariance of agent
interaction is a critical and fundamental principle. However,
such equivariance and invariance properties are overlooked
by most existing methods. To fill this gap, we propose Eq-
Motion, an efficient equivariant motion prediction model
with invariant interaction reasoning. To achieve motion
equivariance, we propose an equivariant geometric feature
learning module to learn a Euclidean transformable feature
through dedicated designs of equivariant operations. To
reason agent’s interactions, we propose an invariant interac-
tion reasoning module to achieve a more stable interaction
modeling. To further promote more comprehensive motion
features, we propose an invariant pattern feature learning
module to learn an invariant pattern feature, which coop-
erates with the equivariant geometric feature to enhance
network expressiveness. We conduct experiments for the
proposed model on four distinct scenarios: particle dynam-
ics, molecule dynamics, human skeleton motion prediction
and pedestrian trajectory prediction. Experimental results
show that our method is not only generally applicable, but
also achieves state-of-the-art prediction performances on
all the four tasks, improving by 24.0/30.1/8.6/9.2%. Code
is available at https://github.com/MediaBrain-SJTU/EqMotion.
| 1. Introduction
Motion prediction aims to predict future trajectories of
multiple interacting agents given their historical observations.
It is widely studied in many applications like physics [3, 28],
molecule dynamics [7], autonomous driving [35] and human-
robot interaction [38,68]. In the task of motion prediction, an
*Corresponding author.
[Figure 1 panels: (a) Particle dynamics; (b) Molecule dynamics; (c) Human skeleton motion; (d) Pedestrian trajectories. Each panel shows past motion, equivariant future motion, and invariant interactions (spring/stick, single/double bond, bone connection, meeting) under a Euclidean transformation.]
Figure 1. Motion equivariance and interaction invariance under
the Euclidean geometric transformation is a fundamental principle
for a prediction model, but this principle is often overlooked by
previous works. In this work, we propose EqMotion to fill this gap.
often-overlooked yet fundamental principle is that a predic-
tion model is required to be equivariant under the Euclidean
geometric transformation (including translation, rotation and
reflection), and at the same time maintain the interaction
relationships invariant. Motion equivariance here means that
if an input motion is transformed under a Euclidean trans-
formation, the output motion must be equally transformed
under the same transformation. Interaction invariance means
that the way agents interact remains unchanged under the
input’s transformation. Figure 1 shows real-world examples
of motion equivariance and interaction invariance.
Employing this principle in a network design brings at
least two benefits. First, the network will be robust to arbi-
trary Euclidean transformations. Second, the network will
have the capability of being generalizable over rotations and
translations of the data. This capability makes the network
more compact, reducing the network’s learning burden and
contributing to a more accurate prediction.
Despite the motion equivariance property being important
and fundamental, it is often neglected and not guaranteed by
most existing motion prediction methods. The main reason
is that these methods transform the input motion sequence
directly into abstract feature vectors, where the geometric
transformations are not traceable, causing the geometric
relationships between agents to be irretrievable. Random
augmentation will ease the equivariance problem, but it is
still unable to guarantee the equivariance property. [30] uses
non-parametric pre and post coordinate processing to achieve
equivariance, but its parametric network structures do not
satisfy equivariance. Some methods propose equivariant
parametric network structures utilizing the higher-order rep-
resentations of spherical harmonics [14, 61] or proposing
an equivariant message passing [58], but they focus on the
state-to-state prediction. This means that they use only one
historical timestamp to predict one future timestamp. Conse-
quently, these methods have limitations on utilizing motion’s
temporal information and modeling interaction relationships
since a single-state observation is insufficient for both inter-
action modeling and temporal dependency modeling.
In this paper, we propose EqMotion, the first motion pre-
diction model that is theoretically equivariant to the input
motion under Euclidean geometric transformations based
on the parametric network. The proposed EqMotion has
three novel designs: equivariant geometric feature learning,
invariant pattern feature learning and invariant interaction
reasoning. To ensure motion equivariance, we propose an
equivariant geometric feature learning module to learn a Eu-
clidean transformable geometric feature through dedicated
designs of equivariant operations. The geometric feature
preserves motion attributes that are relevant to Euclidean
transformations. To promote more comprehensive repre-
sentation power, we introduce an invariant pattern feature
learning module to complement the network with motion
attributes that are independent of Euclidean transformations.
The pattern features, cooperated into the geometric features,
provide expressive motion representations by exploiting mo-
tions’ spatial-temporal dependencies.
To further infer the interactions during motion prediction,
we propose an invariant interaction reasoning module, which
ensures that the captured interaction relationships are invari-
ant to the input motion under Euclidean transformations.
The module infers an invariant interaction graph by utilizing
invariant factors in motions. The edge weights in the inter-
action graph categorize agents’ interactions into different
types, leading to better interaction representation.
We conduct extensive experiments on four different sce-
narios to evaluate our method’s effectiveness: particle dy-
namics, molecule dynamics, 3D human skeleton motion and
pedestrian trajectories. Comparing to many task-specific
motion prediction methods, our method is generally appli-
cable and achieves state-of-the-art performance in all these
tasks, reducing the prediction error by 24.0/30.1/8.6/9.2%,
respectively. We also show that EqMotion is lightweight,
with a model size less than 30% of many other models'
sizes. We show that EqMotion using only 5% of the data can
achieve comparable performance with other methods that
use the full data. As a summary, here are our contributions:
•We propose EqMotion, the first motion prediction
model that theoretically ensures sequence-to-sequence mo-
tion equivariance based on the parametric network. With
equivariance, EqMotion promotes more generalization abil-
ity of motion feature learning, leading to more robust and
accurate prediction.
•We propose a novel invariant interaction reasoning mod-
ule, in which the captured interactions between agents are
invariant to the input motion under Euclidean geometric
transformations. With this, EqMotion achieves more gener-
alization ability and stability in the interaction reasoning.
•We conduct experiments on four types of scenarios
and find that EqMotion is applicable to all these different
tasks, and importantly outperforms existing state-of-the-art
methods on all the tasks.
|
Wang_Open-Set_Fine-Grained_Retrieval_via_Prompting_Vision-Language_Evaluator_CVPR_2023 | Abstract
Open-set fine-grained retrieval is an emerging challenge
that requires an extra capability to retrieve unknown sub-
categories during evaluation. However, current works focus
on close-set visual concepts, where all the subcategories
are pre-defined, making it hard to capture discrimina-
tive knowledge from unknown subcategories, consequently
failing to handle unknown subcategories in open-world sce-
narios. In this work, we propose a novel Prompting vision-
Language Evaluator (PLEor) framework based on the re-
cently introduced contrastive language-image pretraining
(CLIP) model, for open-set fine-grained retrieval. PLEor
could leverage pre-trained CLIP model to infer the discrep-
ancies encompassing both pre-defined and unknown subcat-
egories, called category-specific discrepancies, and trans-
fer them to the backbone network trained in the close-set
scenarios. To make pre-trained CLIP model sensitive to
category-specific discrepancies, we design a dual prompt
scheme to learn a vision prompt specifying the category-
specific discrepancies, and turn random vectors with cate-
gory names in a text prompt into category-specific discrep-
ancy descriptions. Moreover, a vision-language evaluator
is proposed to semantically align the vision and text prompts
based on CLIP model, and reinforce each other. In addi-
tion, we propose an open-set knowledge transfer to transfer
the category-specific discrepancies into the backbone net-
work using knowledge distillation mechanism. Quantitative
and qualitative experiments show that our PLEor achieves
promising performance on open-set fine-grained datasets.
| 1. Introduction
Open-set fine-grained retrieval (OSFR) attempts to build
a well-generalized embedding space where the visual dis-
crepancies among unknown subcategories are clearly re-
flected. It plays a vital role in numerous vision applica-
*Corresponding author: hjli@dlut.edu.cn
[Figure 1 diagram; panel labels: (a) a classification-based evaluator (e.g., [34, 43, 52, 64]); (b) a metric-based evaluator (e.g., [3, 20, 39, 46, 51, 63]); (c) a vision-language evaluator with open-world knowledge, using tuned vision/text prompts and a frozen CLIP model.]
(c) a vision-language evaluator with open-world knowledge
Figure 1. Comparison on existing evaluators in open-set fine-
grained retrieval. Although our PLEor (c) is trained in a close-
set scenarios, similar with previous works (a) (b), it could mine
the category-specific discrepancies using pre-trained CLIP model
aided by vision and text prompts, and transfer the discrepancies
encompassing both pre-defined and unknown subcategories to our
model. This enables our model to procure in-depth understanding
for unknown subcategories owing to distilling the knowledge with
open-world visual concepts from CLIP model, improving retrieval
performance eventually in open-set scenarios.
tions from fashion industry, e.g., retrieval of diverse types of
clothes [1, 31], to environmental conservation, e.g., retriev-
ing endangered species [7,49,50]. As shown in Fig. 1(a)(b),
existing works follow a close-set learning setting, where all
the subcategories are pre-defined, and evaluate embeddings
identifying the visually similar objects of pre-defined sub-
categories. However, such evaluation focuses on closed-set
visual concepts, limiting the model to a pre-defined list of
subcategories, and is not generalizable when it comes to un-
known subcategories unseen during training.
Fortunately, recent works [66, 67] using large-scale con-
trastive language-image pretraining (CLIP) model [37] have
shown great potentials in alleviating this limitation. As
shown in Fig. 1(c), CLIP model is pretrained from scratch
on a dataset of 400 million image-text pairs, which are au-
tomatically collected from the publicly available sources on
the Internet. Based on this, CLIP model could associate
much wider range of visual concepts in the images with
their text descriptions, rather than a fixed set of pre-defined
categories. Therefore, one question naturally arises: is it
possible that we can effectively exploit the open-set visual
concepts in CLIP model to solve OSFR task? It is already
answered yes by recent studies exploring how to transfer
the knowledge from CLIP model to other downstream tasks
via prompt techniques [10, 16, 26, 37, 56, 66, 67]. However,
their prompt strategies are tailored for capturing category-
level semantic ( e.g., dog and cat) rather than more detailed
visual discrepancies for distinguishing fine-grained objects
(e.g., different breeds of dogs). Therefore, how to effec-
tively make pre-trained CLIP model sensitive to the visual
discrepancies encompassing both pre-defined and unknown
subcategories (termed as category-specific discrepancies),
and transfer these discrepancies to the model trained in
closed-set scenarios is worthy of investigation.
To this end, we design a novel prompting vision-
language evaluator (PLEor) for OSFR, based on the power
of recently introduced CLIP model. Technically, to make
pre-trained CLIP model sensitive to category-specific dis-
crepancy, we design a dual prompt scheme composed of
vision prompt and text prompt for explicitly highlighting
the category-specific discrepancies from the input perspec-
tive. Concretely, the vision prompt specifies the category-
specific discrepancies via parsing semantic features inferred
by the backbone network. And the text prompt turns ran-
dom vectors with category names into category-specific dis-
crepancy descriptions. Meanwhile, a vision-language eval-
uator is proposed to encourage pre-trained CLIP model to
locate the category-specific descriptions in vision prompt
and generate the category-specific visual semantics into text
prompt. In this way, the OSFR task aided by the designed
prompts is close to the solved task of pre-training CLIP
model, thus making the CLIP model sensitive to category-
specific discrepancy. Nevertheless, a non-negligible prob-
lem is that the combination of the CLIP model and backbone
network is quite complex, making evaluation very time-consuming
and memory-demanding. Thereby, we
propose an open-set knowledge transfer module to transfer
the category-specific discrepancies from CLIP model to the
backbone network using knowledge distillation mechanism.
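A minimal sketch of this knowledge-transfer step is given below, assuming the CLIP teacher's image features (from the vision prompt) and text features (from the category-specific text prompts) are already computed; the function name, the temperature, and the KL-based formulation are illustrative assumptions, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def distill_open_set_knowledge(clip_image_feat: torch.Tensor,
                               clip_text_feat: torch.Tensor,
                               backbone_scores: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Illustrative open-set knowledge transfer via distillation.

    clip_image_feat: (B, D) CLIP image features of the vision prompt.
    clip_text_feat:  (C, D) CLIP text features of the category-specific prompts.
    backbone_scores: (B, C) category similarity scores from the student backbone.
    The student's category distribution is pushed toward the CLIP teacher's.
    """
    img = F.normalize(clip_image_feat, dim=-1)
    txt = F.normalize(clip_text_feat, dim=-1)
    teacher = F.softmax(img @ txt.t() / temperature, dim=-1)     # CLIP-style similarities
    student = F.log_softmax(backbone_scores / temperature, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean")


if __name__ == "__main__":
    loss = distill_open_set_knowledge(torch.randn(4, 512),
                                      torch.randn(20, 512),
                                      torch.randn(4, 20, requires_grad=True))
    loss.backward()
    print(float(loss))
```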
Our contributions are summarized as follows:
• A prompting vision-language evaluator, i.e., PLEor, is
proposed. It can distill the knowledge with open-world
visual concepts from CLIP model to alleviate the prob-
lems behind open-set scenarios. To our best knowl-
edge, we are the first to regard CLIP model as an eval-
uator specifically for OSFR task.
• PLEor provides timely insights into the adaptation of
pre-trained CLIP model adopting prompt learning, andcrucially, demonstrates the effectiveness of a simple
modification for inputs of CLIP model in OSFR.
• PLEor achieves new state-of-the-art results compared
with classification-based and metric-based evaluators,
with significant gains of 8.0% average retrieval ac-
curacy on three widely-used OSFR datasets.
|
Wen_Learnable_Skeleton-Aware_3D_Point_Cloud_Sampling_CVPR_2023 | Abstract
Point cloud sampling is crucial for efficient large-scale
point cloud analysis, where learning-to-sample methods
have recently received increasing attention from the com-
munity for jointly training with downstream tasks. However,
the above-mentioned task-specific sampling methods usu-
ally fail to explore the geometries of objects in an explicit
manner. In this paper, we introduce a new skeleton-aware
learning-to-sample method by learning object skeletons as
the prior knowledge to preserve the object geometry and
topology information during sampling. Specifically, with-
out labor-intensive annotations per object category, we first
learn category-agnostic object skeletons via the medial axis
transform definition in an unsupervised manner. With ob-
ject skeleton, we then evaluate the histogram of the local
feature size as the prior knowledge to formulate skeleton-
aware sampling from a probabilistic perspective. Addition-
ally, the proposed skeleton-aware sampling pipeline with
the task network is thus end-to-end trainable by explor-
ing the reparameterization trick. Extensive experiments on
three popular downstream tasks, point cloud classification,
retrieval, and reconstruction, demonstrate the effectiveness
of the proposed method for efficient point cloud analysis.
| 1. Introduction
With the development of 3D sensing technologies, ac-
quiring 3D data becomes more accessible than before, and
there are a growing number of data repositories available
online, such as ModelNet [62], ShapeNet [6], ScanNet [11],
and KITTI [18]. Among popular 3D shape representa-
tions such as point cloud [41], voxel [72], mesh [55], and
multi-view images [51], the point cloud is becoming in-
creasingly popular as the first-hand data captured by Li-
DAR or depth camera (e.g., Kinect), which has been widely
applied in various applications such as scene reconstruc-
tion [19, 26], autonomous driving navigation [35], and vir-
tual reality (VR) [59]. Though a high-resolution point cloud
*Equal contribution
Figure 1. (a) Point cloud; (b) Object skeleton; (c) Random sam-
pling; (d) Skeleton-aware sampling. Compared with random sam-
pling, skeleton-aware sampling tends to preserve the object geom-
etry and topology information.
can accurately capture the geometry and topology details of
complex objects, it remains challenging for those devices
with limited computation and storage resources. Therefore,
point cloud sampling, aiming to find a small set of points to
represent the object shape and topology effectively, is usu-
ally indispensable for efficient large-scale point cloud anal-
ysis [24, 29, 32, 33, 41, 42, 57, 61, 71].
Traditional point cloud sampling methods such as ran-
dom sampling (RS) and farthest point sampling (FPS) [15,
42] usually select a subset of points directly using the raw
data information [7, 42, 43, 47, 68]. Specifically, RS is very
efficient but may miss sparse regions, while FPS has better
coverage on the entire point set but suffers from the latency
bottleneck in parallel computation. To improve the per-
formances on downstream tasks, learn-to-sample methods
have been recently proposed to jointly optimize the sam-
pling algorithm and each specific downstream task [10, 14,
20,27,56]. Though considerable progress has been achieved
in downstream tasks such as point cloud classification and
reconstruction, one critical issue remains poorly investi-
gated: as objects usually have complex topology structures
and irregular surface morphology, it is challenging to pre-
serve the object’s geometrical information during the point
cloud sampling process.
Skeleton is an efficient representation to capture the un-
derlying object shape structures, which has been widely
used as the structural abstraction in many visual understand-
ing tasks [31, 37, 53]. Inspired by this, we build our point
cloud sampling strategy on top of the object skeleton to
preserve different objects’ intrinsic geometrical and topo-
logical natures. Here we illustrate a comparison between
random sampling and skeleton-aware sampling in Fig. 1.
However, the skeleton extraction is usually non-trivial due
to the following reasons [30, 52]: 1) given the diversity of
object topological structures, it is difficult to have a con-
sistent category-agnostic skeleton definition at the semantic
level, where existing datasets usually consider only single
or a few known object categories, such as human skele-
ton; 2) topological methods are usually category-agnostic
by emphasizing geometrical and topological properties of
the shape, such as its connectivity, topology, length, direc-
tion, and width. Nevertheless, they usually require a sub-
stantial amount of time for geometrical processing and are
also notoriously sensitive to surface noise. Therefore, we
resort to the medial axis transform (MAT) definition of ob-
ject skeleton and deep neural networks to learn effective
skeletal representations in an unsupervised manner.
With the learned object skeleton, we then formulate the
skeleton-aware point cloud sampling pipeline from a proba-
bilistic perspective. Specifically, we first calculate the local
feature size (LFS) [2] for each point to measure the object’s
size near a particular point. We then explore the LFS distri-
bution as the prior sampling probability on each point and
use the LFS histogram in practice since per-point LFS is
usually sensitive to point cloud noise. By learning object
skeletons with deep neural networks, we have the skeleton-
aware prior probability for sampling each point. To adapt
skeleton-aware sampling for downstream tasks, we jointly
optimize the posterior sampling probability on each point in
an end-to-end manner. Notably, by exploring the categori-
cal reparameterization with Gumbel-softmax [22], the cate-
gorical sampling based on LFS histogram is differentiable.
Therefore, both sampling and task networks are end-to-end
learnable for task-specific point cloud sampling.
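As a minimal sketch of this differentiable sampling step (the per-point prior logits here are a placeholder; in the paper they come from the LFS histogram of the learned skeleton), Gumbel-softmax draws a subset of points while keeping gradients flowing back to the sampling probabilities:

```python
import torch
import torch.nn.functional as F

def differentiable_point_sampling(points: torch.Tensor,
                                  logits: torch.Tensor,
                                  num_samples: int,
                                  tau: float = 1.0) -> torch.Tensor:
    """Draw `num_samples` points with Gumbel-softmax so sampling stays differentiable.

    points: (N, 3) point cloud; logits: (N,) unnormalized per-point sampling scores
    (e.g., derived from an LFS-histogram prior combined with task feedback).
    Each draw is a straight-through one-hot over the N points; draws are independent,
    so duplicates are possible in this simple sketch.
    """
    selection = F.gumbel_softmax(
        logits.expand(num_samples, -1), tau=tau, hard=True, dim=-1
    )                                                   # (num_samples, N)
    return selection @ points                           # (num_samples, 3)


if __name__ == "__main__":
    pts = torch.randn(1024, 3)
    scores = torch.zeros(1024, requires_grad=True)      # placeholder prior logits
    sampled = differentiable_point_sampling(pts, scores, num_samples=64)
    sampled.sum().backward()                            # gradients reach the logits
    print(sampled.shape)                                # torch.Size([64, 3])
```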
In this paper, our main contributions can be summarized as follows:
1. We propose a new skeleton-aware point cloud sam-
pling method to preserve the object geometry and
topology information during sampling.
2. We introduce the category-agnostic skeleton learning
in an unsupervised manner to provide the prior knowl-
edge for skeleton-aware point cloud sampling.
3. We conduct extensive experiments on three important
downstream tasks, point cloud classification, retrieval,
and reconstruction, to evaluate the proposed approach.
|
Wang_METransformer_Radiology_Report_Generation_by_Transformer_With_Multiple_Learnable_Expert_CVPR_2023 | Abstract
In clinical scenarios, multi-specialist consultation could
significantly benefit the diagnosis, especially for intricate
cases. This inspires us to explore a “multi-expert joint
diagnosis” mechanism to upgrade the existing “single ex-
pert” framework commonly seen in the current literature.
To this end, we propose METransformer, a method to real-
ize this idea with a transformer-based backbone. The key
design of our method is the introduction of multiple learn-
able “expert” tokens into both the transformer encoder and
decoder. In the encoder, each expert token interacts with
both vision tokens and other expert tokens to learn to attend
different image regions for image representation. These ex-
pert tokens are encouraged to capture complementary in-
formation by an orthogonal loss that minimizes their over-
lap. In the decoder, each attended expert token guides the
cross-attention between input words and visual tokens, thus
influencing the generated report. A metrics-based expert
voting strategy is further developed to generate the final re-
port. By the multi-experts concept, our model enjoys the
merits of an ensemble-based approach but through a man-
ner that is computationally more efficient and supports more
sophisticated interactions among experts. Experimental re-
sults demonstrate the promising performance of our pro-
posed model on two widely used benchmarks. Last but not
least, the framework-level innovation makes our work ready
to incorporate advances on existing “single-expert” models
to further improve its performance.
| 1. Introduction
Interpreting radiology images (e.g., chest X-ray) and
writing diagnostic reports are essential operations in clinical
practice and normally require a considerable manual work-load. Therefore, radiology report generation, which aims
to automatically generate a free-text description based on a
radiograph, is highly desired to ease the burden of radiolo-
gists while maintaining the quality of health care. Recently,
substantial progress has been made towards research on au-
tomated radiology report generation models [4,5,15–17,21,
22,33–35,41,43,44,47]. Most existing studies adopt a con-
ventional encoder-decoder architecture following the image
captioning paradigm [6, 25, 32, 37, 48] and resort to opti-
mizing network structure or introducing external or prior
information to aid report generation. These methods, in this
paper, are collectively referred to as “single-expert” based
diagnostic captioning methods.
However, diagnostic report generation is a very challeng-
ing task as disease anomalies usually only occupy a small
portion of the whole image and could appear at arbitrary lo-
cations. Due to the fine-grained nature of radiology images,
it is hard to focus on the correct image regions throughout
the report generation procedure despite different attentions
developed in recent works [16, 21]. Meanwhile, it is no-
ticed that in clinical scenarios, multi-specialist consultation is
especially beneficial for those intricate diagnostic cases that
challenge a single specialist for a comprehensive and accu-
rate diagnosis. The above observations led us to think, could
we design a model to simulate the multi-specialist consul-
tation scenario? Based on this motivation, we propose a
new diagnostic captioning framework, METransformer, to
mimic the “multi-expert joint diagnosis” process. Built
upon a transformer backbone, METransformer introduces
multiple “expert tokens”, representing multiple experts, into
both the transformer encoder and decoder. Each expert to-
ken learns to attend distinct image regions and interacts with
other expert tokens to capture reliable and complementary
visual information and produce a diagnosis report in paral-
lel. The optimal report is selected through an expert voting
strategy to produce the final report. Our design is based on
the assumption that it would be easier for multiple experts
than a single one to capture visual patterns of clinical im-
portance, which is verified by our experimental results.
Specifically, we feed both the expert tokens (learnable
embeddings) and the visual tokens (image patches embed-
dings) into the Expert Transformer encoder which is com-
prised of a vision transformer (ViT) encoder and a bilinear
transformer encoder. In ViT encoder, each expert token in-
teracts not only with the visual tokens but also with the other
expert tokens. Further, in the bilinear transformer encoder,
to enable each “expert” to capture fine-grained image in-
formation, we compute higher-order attention between ex-
pert tokens and visual tokens, which has proved to be effec-
tive in fine-grained classification tasks [20]. It is notewor-
thy that the expert tokens in the encoder are encouraged to
learn complementary representations by an orthogonal loss
so that they attend differently to different image regions.
With these carefully learned expert token embeddings, in
the decoder, we take them as a guide to regulate the learn-
ing of word embeddings and visual token embedding in the
report generation process. This results in M different word
and visual token embeddings, thus producing M candidate
reports, where M is the number of experts. We further pro-
pose a metric-based expert voting strategy to generate the
final report from the M candidates.
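The orthogonality constraint on the expert tokens can be pictured with a small regularizer like the one below. It penalizes pairwise cosine similarity between expert embeddings; this is an illustrative formulation of "minimizing their overlap", and the paper's exact loss may differ in form.

```python
import torch

def expert_orthogonality_loss(expert_tokens: torch.Tensor) -> torch.Tensor:
    """Encourage M expert token embeddings to be mutually (near-)orthogonal.

    expert_tokens: (M, D). Penalizes the off-diagonal entries of the cosine Gram
    matrix so that experts are pushed to capture complementary information.
    """
    normed = torch.nn.functional.normalize(expert_tokens, dim=-1)
    gram = normed @ normed.t()                                    # (M, M) cosine similarities
    off_diag = gram - torch.diag_embed(torch.diagonal(gram))      # zero the diagonal
    return (off_diag ** 2).sum() / (gram.numel() - gram.shape[0])


if __name__ == "__main__":
    experts = torch.randn(6, 256, requires_grad=True)             # M = 6 expert tokens
    loss = expert_orthogonality_loss(experts)
    loss.backward()
    print(float(loss))
```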
By utilizing multiple experts, our model, to some extent,
is analogous to ensemble-based approaches, while each ex-
pert token corresponds to an individual model. While it en-
joys the merits of ensemble-based approaches, our model is
designed in a manner that is computationally more efficient
and supports more sophisticated interactions among ex-
perts. Therefore, it can scale up with only a trivial increase
of model parameters and achieves better performance, as
demonstrated in our experimental study.
Our main contributions are summarized as follows.
First, we propose a new diagnostic captioning frame-
work, METransformer, which is conceptually “multi-expert
joint diagnosis” for radiology report generation, by intro-
ducing learnable expert tokens and encouraging them to
learn complementary representations using both linear and
non-linear attentions.
Second, our model enjoys the benefits of an ensemble
approach. Thanks to the carefully designed network struc-
ture and the end-to-end training manner, our model can
achieve better results than common ensemble approaches
while greatly reducing training parameters and improving
training efficiency.
Third, our approach shows promising performance on
two widely used benchmarks IU-Xray and MIMIC-CXR
over multiple state-of-the-art methods. The clinical relevance
of the generated reports is also analyzed. |
Weng_Event-Based_Blurry_Frame_Interpolation_Under_Blind_Exposure_CVPR_2023 | Abstract
Restoring sharp high frame-rate videos from low frame-
rate blurry videos is a challenging problem. Existing
blurry frame interpolation methods assume a predefined
and known exposure time, which suffer from severe perfor-
mance drop when applied to videos captured in the wild. In
this paper, we study the problem of blurry frame interpola-
tion under blind exposure with the assistance of an event
camera. The high temporal resolution of the event camera
is beneficial to obtain the exposure prior that is lost during
the imaging process. Besides, sharp frames can be restored
using event streams and blurry frames relying on the mu-
tual constraint among them. Therefore, we first propose an
exposure estimation strategy guided by event streams to es-
timate the lost exposure prior, transforming the blind expo-
sure problem into a well-posed one. Second, we propose to model the
mutual constraint with a temporal-exposure control strat-
egy through iterative residual learning. Our blurry frame
interpolation method achieves a distinct performance boost
over existing methods on both synthetic and self-collected
real-world datasets under blind exposure.
| 1. Introduction
Blurry frame interpolation (BFI) [23, 46, 73] aims to re-
store sharp high frame-rate videos from blurry low frame-
rate videos, which is highly desirable for a wide range of
applications, such as novel view interpolation synthesis [7],
frame rate conversion [32], slow motion [21] and inter-
frame video compression [64]. Compared with the cascade
scheme, i.e., combining frame deblurring [20,25,33,39,40,
51, 53, 59] with frame interpolation [2, 21, 31, 32, 35, 37, 38,
68], the joint method [46,73] is more effective, which over-
comes the error accumulation problem and ambiguity of the
temporal scope [46].
Despite remarkable improvement, prior works [23, 46]
*Corresponding author. This work was supported in part by the Na-
tional Natural Science Foundation of China under Grants 62131003 and
62021001.
Figure 1. (a) Non-blind exposure setting and blind exposure set-
ting. (b) Illustrative flow of our solution to blurry frame interpola-
tion under blind exposure.
assume that the exposure time is predefined or known as
the shutter period, which we call non-blind exposure as
shown in the left part of Fig. 1 (a). However, the com-
plicated video capturing in the wild gives rise to variable
and unknown exposure time, which we call blind exposure .
The right part of Fig. 1 (a) presents that the shutter period
is the summation of exposure time and data readout time.
For better imaging, the exposure time is variable to fit the
changing light conditions in the real imaging environments,
especially when the auto-exposure function turns on [73].
Challenges and Motivation. In this paper, we focus on
the problem of BFI under blind exposure, which is prac-
tical yet has barely been investigated explicitly. The first
challenge of this problem can be attributed to the intrinsic
imaging mechanism of frame-based cameras. As can be
seen from Fig. 1 (a), the accumulation operation of frame-
based cameras inevitably results in the loss of motion infor-
mation during the exposure time. Particularly, the variable
exposure time further leads to the temporal jittering, which
degrades the performance of the video enhancement algo-
rithms with constant/predefined exposure time assumption.
Another challenge is the lack of a decent model to serve as
effective guidance. Conventional frame-based methods for
BFI are vulnerable to the artifacts introduced by flow-based
warping or straightforward prediction.
To overcome the above challenges, we attempt to pro-
vide a new perspective, by resorting to a novel sensor.
Event cameras [8, 69], also known as neuromorphic sen-
sors, are bio-inspired visual sensors that output events by
detecting spatiotemporal brightness changes. We demon-
strate that event cameras are suitable for BFI under blind
exposure from two points as shown in Fig. 1 (b). First,
the high temporal resolution property of event cameras is
able to compensate for the exposure prior that is lost dur-
ing the imaging process of frame-based cameras. To this
end, we propose an exposure estimation strategy guided by
event streams to estimate the lost exposure prior. In such a
way, the blind exposure problem can be made well-posed,
which eases the difficulty of video restoration. Second, the
event stream is a natural constraint between blurry frames
and sharp frames, providing a physical model for video
restoration. To effectively model this physical correlation,
we propose a temporal-exposure control strategy that takes
timestamps and exposure priors as inputs for interpolation
through iterative residual learning. By exploiting the pro-
posed strategies of exposure estimation and time-exposure
control, we are able to perform arbitrary-time interpolation
from blurry frames under blind exposure. To evaluate our
method in real-world scenarios, we collect a real blurry
video dataset using a DAVIS-346 color event camera, which
includes multiple exposure assumptions. Experimental re-
sults on synthetic and self-collected real datasets demon-
strate our superior performance over existing methods.
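For context, the physical constraint mentioned above is usually written in the spirit of the event-based double integral: a blurry frame is the temporal average of the latent sharp frames over the (unknown) exposure window, and the events relate each latent frame to a reference one through the accumulated brightness change. The sketch below shows this relation for a single pixel under an assumed contrast threshold `c`; it illustrates the constraint only, not the paper's network.

```python
import numpy as np

def latent_from_blur(blur: float, event_sums: np.ndarray, c: float = 0.2) -> float:
    """Recover the latent intensity at a reference time from one blurry pixel value.

    blur:       blurry pixel value, modeled as the average of L_ref * exp(c * E_k)
                over K latent frames inside the exposure window.
    event_sums: (K,) accumulated signed event counts from the reference time to each
                latent frame; c is the (assumed) contrast threshold of the camera.
    """
    # blur = L_ref * mean_k exp(c * E_k)  =>  solve for L_ref
    return float(blur / np.mean(np.exp(c * event_sums)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_l = 0.8                                    # latent intensity at the reference time
    events = rng.integers(-3, 4, size=16)           # signed event counts per sub-interval
    blur = true_l * np.mean(np.exp(0.2 * events))   # synthesize the blurry observation
    print(latent_from_blur(blur, events))           # recovers approximately 0.8
```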
The contributions of this paper can be summarized as
follows: 1) we provide a decent solution for BFI under blind
exposure by using event cameras, for the first time; 2) we
propose an exposure estimation strategy guided by event
streams, which makes the blind exposure problem well-
posed; 3) we propose a temporal-exposure control strategy,
which enables BFI at an arbitrary timestamp under blind
exposure; 4) we achieve superior performance over existing
state-of-the-art methods on both synthetic and self-collected
real-world datasets.
|
Wang_Learning_Bottleneck_Concepts_in_Image_Classification_CVPR_2023 | Abstract
Interpreting and explaining the behavior of deep neural
networks is critical for many tasks. Explainable AI provides a way to address this challenge, mostly by providing per-pixel relevance to the decision. Yet, interpreting such explanations may require expert knowledge. Some recent attempts toward interpretability adopt a concept-based framework, giving a higher-level relationship between some concepts and model decisions. This paper proposes Bot-
tleneck Concept Learner (BotCL), which represents an image solely by the presence/absence of concepts learned through training over the target task without explicit supervision over the concepts. It uses self-supervision and tailored regularizers so that learned concepts can be human-
understandable. Using some image classification tasks as
our testbed, we demonstrate BotCL's potential to rebuild neural networks for better interpretability
1.
| 1. Introduction
Understanding the behavior of deep neural networks
(DNNs) is a major challenge in the explainable AI (XAI) community, especially for medical applications [19,38], for identifying biases in DNNs [2, 18, 42], etc. Tremendous re-
search efforts have been devoted to the post-hoc paradigm
for a posteriori explanation [29, 33]. This paradigm pro-
duces a relevance map to spot regions in the input image that interact with the model's decision. Yet the relevance map only tells low-level (or per-pixel) relationships and does not
explicitly convey any semantics behind the decision. Inter-
pretation of relevance maps may require expert knowledge.
The concept-based framework [22, 37, 50] is inspired by
the human capacity to learn a new concept by (subconsciously) finding finer-grained concepts and reuse them in different ways for better recognition [24]. Instead of giving per-pixel relevance, this framework offers higher-level
*Corresponding author.
1Code is available at https://github.com/wbw520/BotCL and a simple
demo is available at https://botcl.liangzhili.com/.
[Figure 1 panel labels: input image; concept bottleneck representation; dataset; discovered concepts Cpt.1–Cpt.6.]
Cpt.2 Cpt.3 Cpt.4 Cpt.5 Cpt.6
Figure 1. Examples of concepts discovered by BotCL in Ima-
geNet [10] and concepts in the input image. BotCL automatically
discovers a set of concepts optimized for the target task and repre-sents an image solely with the presence/absence of concepts.
Instead of giving per-pixel relevance, this framework offers higher-level relationships between the image and decision mediated by a limited number of concepts. That is, the decision is explained by giving a set of concepts found in the image. The interpretation of the decision is thus straightforward once the interpretation of each concept is established.
Some works use concepts for the post-hoc paradigm for
better interpretation of the decision [14, 50], while the link between the decision and concepts in the image is not obvious. The concept bottleneck structure [23] uses the presence/absence of concepts as image representation (referred to as concept activation). The classifier has access only to the concept activation, so the decision is strongly tied to the concepts. This bottleneck structure has become the mainstream of the concept-based framework [5, 20, 28, 31].
A major difficulty in this framework is designing a set of
concepts that suits the target task. A promising approach is handcrafting them [4, 21, 48], which inherently offers better interpretability at the cost of extra annotations on the concepts. Recent attempts automatically discover concepts [1, 13, 14, 46]. Such concepts may not always be consistent with how humans (or models) see the world [25, 47] and may require some effort to interpret them, but concept discovery without supervision is a significant advantage.
Inspired by these works, we propose bottleneck concept learner (BotCL) for simultaneously discovering concepts and learning the classifier. BotCL optimizes concepts for the given target image classification task without supervision for the concepts. An image is represented solely by the existence of concepts and is classified using them. We adopt a slot attention-based mechanism [26, 27] to spot the region in which each concept is found. This gives an extra signal for interpreting the decision since one can easily see what each learned concept represents by collectively showing training images with the detected concepts. Figure 1 shows examples from ImageNet [10]. BotCL discovers a predefined number of concepts in the dataset, which are exemplified by several images with attention maps. An image of Great White Shark is represented by the right part of mouth (Cpt.1) and fins (Cpt.3). BotCL uses a single fully-connected (FC) layer as a classifier, which is simple but enough to encode the co-occurrence of each concept and each class.
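As a rough, hedged sketch of such a concept bottleneck (assuming a CNN backbone, learnable concept prototypes matched to spatial features in a slot-attention-like manner, and a single FC classifier over concept activations; all names and shapes below are illustrative rather than BotCL's exact code):

```python
# Illustrative sketch only; not BotCL's exact architecture.
import torch
import torch.nn as nn

class ConceptBottleneckSketch(nn.Module):
    def __init__(self, backbone, feat_dim=512, num_concepts=20, num_classes=200):
        super().__init__()
        self.backbone = backbone                                            # CNN returning (B, D, H, W)
        self.concepts = nn.Parameter(torch.randn(num_concepts, feat_dim))   # concept prototypes
        self.classifier = nn.Linear(num_concepts, num_classes, bias=False)  # single FC classifier

    def forward(self, x):
        feat = self.backbone(x)                                     # (B, D, H, W)
        tokens = feat.flatten(2).transpose(1, 2)                    # (B, HW, D) spatial tokens
        score = torch.einsum('kd,bnd->bkn', self.concepts, tokens)  # concept-position matching
        attn = score.softmax(dim=-1)                                # where each concept is found
        act = torch.sigmoid(score.max(dim=-1).values)               # presence/absence (B, K)
        logits = self.classifier(act)                               # decision uses concepts only
        return logits, act, attn
```

The key property this sketch tries to capture is that the classifier never sees the backbone features directly, only the concept activations.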
Contribution. For better concept discovery, we propose to use self-supervision over concepts, inspired by the recent success in representation learning [9, 16]. Our ablation study demonstrates that self-supervision by contrastive loss is the key. We also try several constraints on concepts themselves, i.e., individual consistency to make a concept more selective and mutual distinctiveness for better coverage of various visual elements. These additional constraints regularize the training process and help the model learn concepts of higher quality.
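The sketch below illustrates one plausible form of the contrastive self-supervision and the two concept regularizers; the exact formulations used in BotCL may differ.

```python
# Illustrative loss sketches; the exact losses in BotCL may differ.
import torch
import torch.nn.functional as F

def contrastive_self_supervision(act, labels, tau=0.1):
    """Supervised-contrastive-style loss over concept activations: images of the
    same class should share similar activation patterns."""
    z = F.normalize(act, dim=-1)                                   # (B, K)
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = (z @ z.t() / tau).masked_fill(eye, float('-inf'))        # ignore self-pairs
    logprob = (sim - torch.logsumexp(sim, dim=-1, keepdim=True)).masked_fill(eye, 0.0)
    pos = (labels[:, None].eq(labels[None, :]) & ~eye).float()
    return -(logprob * pos).sum() / pos.sum().clamp(min=1)

def individual_consistency(concept_feats):
    """Features attended by the same concept should agree across images."""
    z = F.normalize(concept_feats, dim=-1)              # (B, K, D)
    sim = torch.einsum('bkd,ckd->kbc', z, z)            # within-concept similarity
    return 1 - sim.mean()

def mutual_distinctiveness(concept_feats):
    """Different concepts should cover different visual elements."""
    mean_c = F.normalize(concept_feats.mean(dim=0), dim=-1)         # (K, D)
    sim = mean_c @ mean_c.t()                                       # (K, K)
    return (sim - torch.eye(sim.size(0), device=sim.device)).abs().mean()
```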
|
Wang_Decoupling-and-Aggregating_for_Image_Exposure_Correction_CVPR_2023 | Abstract
The images captured under improper exposure condi-
tions often suffer from contrast degradation and detail dis-
tortion. Contrast degradation will destroy the statisti-
cal properties of low-frequency components, while detail
distortion will disturb the structural properties of high-
frequency components, leading to the low-frequency and
high-frequency components being mixed and inseparable.
This will limit the statistical and structural modeling ca-
pacity for exposure correction. To address this issue, this
paper proposes to decouple the contrast enhancement and
detail restoration within each convolution process. It is
based on the observation that, in the local regions covered
by convolution kernels, the feature response of low-/high-
frequency can be decoupled by addition/difference opera-
tion. To this end, we inject the addition/difference operation
into the convolution process and devise a Contrast Aware
(CA) unit and a Detail Aware (DA) unit to facilitate the
statistical and structural regularities modeling. The pro-
posed CA and DA can be plugged into existing CNN-based
exposure correction networks to substitute the Traditional
Convolution (TConv) to improve the performance. Fur-
thermore, to maintain the computational costs of the net-
work without changing, we aggregate two units into a single
TConv kernel using structural re-parameterization. Evalu-
ations of nine methods and five benchmark datasets demon-
strate that our proposed method can comprehensively im-
prove the performance of existing methods without intro-
ducing extra computational costs compared with the origi-
nal networks. The codes will be publicly available.
| 1. Introduction
Images captured under improper exposure conditions of-
ten suffer from under-exposure or over-exposure problems
[2, 14, 16]. Improper exposure will change the statistical
distribution of image brightness, resulting in contrast degra-
*Co-first author.†Corresponding author.
Figure 1. (a, b) The PSNR and SSIM comparison of TConv and
our DAConv on the ME dataset. (c, d) The PSNR and SSIM com-
parison of TConv and our DAConv on the SICE dataset. Under
the boosting of DAConv, the performance of existing methods has
been comprehensively improved, reaching a new SOTA perfor-
mance without introducing extra computational costs. Complete
results can be found in Table 2.
dation. Besides, improper exposure will also destroy the
image’s structural property and result in detail distortion.
The contrast degradation and detail distortion will cause the
low-frequency and high-frequency components to mix and become inseparable across the image, making the image exposure
correction extremely challenging [2, 4, 9, 31, 33, 39].
In practice, one solution for this problem is to design an
end-to-end architecture for learning contrast enhancement
and detail restoration in shared feature space [14,16]. How-
ever, the contrast-relevant features are primarily distributed
in low-frequency components, while the detail-relevant fea-
tures are primarily distributed in high-frequency compo-
nents. Since low-frequency components are statistically
dominant over high-frequency components, these methods
mainly focus on contrast enhancement and cannot guarantee
that the high-frequency details can be efficiently restored.
To achieve better contrast enhancement and detail
restoration, some researchers propose to decompose and re-
store the input image’s lightness and structure components,
respectively [2, 20, 41]. For example, some researchers de-
compose images into illumination and reflectance compo-
nents by utilizing Retinex theory and then design a spe-
cific network for each component [19, 20, 24]. Other re-
searchers propose to decompose the input image into multi-
scale components and adopt the coarse-to-fine strategy to
progressively recover the lightness and fine-scale structures
[2]. However, the decomposition operation inevitably de-
stroys the relationship between brightness and detail and
cannot balance the contrast and detail enhancement, leading
to over-smooth problems or artifacts for enhanced results.
To address the above issues, this paper proposes to de-
couple the contrast enhancement and detail restoration dur-
ing the convolution process. This method is based on
statistical observations that the feature response in local
regions can be decomposed into low-frequency compo-
nents and high-frequency components by a difference oper-
ation. Based on this, we introduce a novel Contrast Aware
(CA) unit in parallel with a Detail Aware (DA) unit to
guide the contrast and detail modeling, termed Decoupling-
and-Aggregating Convolution (DAConv). Different from
TConv, we inject the addition/difference operation into the
convolution process, which can guide the contrast and detail
modeling in an explicit manner. Furthermore, to balance the
contrast enhancement and detail restoration, we introduce a
dynamic coefficient for each branch to adjust the amplitude
of the feature response. Our proposed DAConv can be used
as a general unit to substitute the TConv kernel in existing
CNN-based exposure correction networks to facilitate con-
trast enhancement and detail restoration.
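A hedged sketch of the idea follows: the CA branch aggregates local responses in an addition (mean-like) fashion, the DA branch responds to local differences, and each branch is scaled by a learnable dynamic coefficient. The parameterization below is an assumption for illustration, not the paper's exact kernels.

```python
# Illustrative sketch; the actual CA/DA parameterization in the paper may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DAConvSketch(nn.Module):
    """Parallel contrast-aware (CA) and detail-aware (DA) branches next to a
    plain convolution, each scaled by a learnable dynamic coefficient."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)  # standard TConv branch
        self.ca = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)    # contrast-aware weights
        self.da = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)    # detail-aware weights
        self.alpha = nn.Parameter(torch.ones(1))                 # dynamic coefficient for CA
        self.beta = nn.Parameter(torch.ones(1))                  # dynamic coefficient for DA

    def ca_weight(self):
        # addition-style kernel: every tap shares the local mean -> low-frequency response
        return self.ca.weight.mean(dim=(2, 3), keepdim=True) * torch.ones_like(self.ca.weight)

    def da_weight(self):
        # difference-style kernel: zero-mean taps -> high-frequency (detail) response
        return self.da.weight - self.da.weight.mean(dim=(2, 3), keepdim=True)

    def forward(self, x):
        ca_out = F.conv2d(x, self.ca_weight(), self.ca.bias, padding=self.ca.padding)
        da_out = F.conv2d(x, self.da_weight(), self.da.bias, padding=self.da.padding)
        return self.conv(x) + self.alpha * ca_out + self.beta * da_out
```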
To reduce the computational costs, the CA, DA, and dy-
namic coefficients are aggregated into a single TConv ker-
nel by structural re-parameterization in the inference phase.
The aggregation is conducted before the activation func-
tion, and the linear superposition can reduce computational
costs without changing the function of DAConv. After
that, the performance of networks can be significantly im-
proved without introducing extra computational costs com-
pared with the original network. Evaluations of nine meth-
ods and five benchmark datasets demonstrate the effective-
ness of our proposed method, as shown in Fig. 1.
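Because the three branches are linear and aggregated before the activation, their kernels can be summed into one convolution at inference time. The sketch below performs this merging for the illustrative DAConvSketch above; it is not the authors' re-parameterization code.

```python
# Fold the CA/DA branches of the DAConvSketch above into one kernel for inference.
import torch
import torch.nn as nn

@torch.no_grad()
def reparameterize(daconv):
    fused_w = (daconv.conv.weight
               + daconv.alpha * daconv.ca_weight()
               + daconv.beta * daconv.da_weight())
    fused_b = (daconv.conv.bias
               + daconv.alpha * daconv.ca.bias
               + daconv.beta * daconv.da.bias)
    merged = nn.Conv2d(daconv.conv.in_channels, daconv.conv.out_channels,
                       daconv.conv.kernel_size, padding=daconv.conv.padding)
    merged.weight.copy_(fused_w)
    merged.bias.copy_(fused_b)
    return merged   # matches daconv.forward up to numerics, at plain TConv cost
```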
The contribution can be summarized as follows:
(1) We propose a novel decoupling-and-aggregating
scheme for image exposure correction, in which two par-
allel convolution processes are decoupled for contrast en-
hancement and detail restoration, respectively, and then ag-
gregated into a single branch without additional computa-
tion compared with the original convolution scheme.
(2) To facilitate the contrast and detail relevant features
extraction, a novel CA and DA unit are devised by injecting
the addition and difference operation into the convolution
process. Compared with traditional convolution kernels,
our proposed CA and DA can explicitly model the contrast
and detail relevant properties.
(3) Evaluations on the five prevailing benchmark datasets and nine SOTA image exposure correction methods demonstrate our proposed DAConv can comprehensively improve the
contrast enhancement and detail restoration performances
without introducing extra computational costs.
|
Wang_Are_We_Ready_for_Vision-Centric_Driving_Streaming_Perception_The_ASAP_CVPR_2023 | Abstract
In recent years, vision-centric perception has flourished
in various autonomous driving tasks, including 3D detec-
tion, semantic map construction, motion forecasting, and
depth estimation. Nevertheless, the latency of vision-centric
approaches is too high for practical deployment ( e.g., most
camera-based 3D detectors have a runtime greater than
300ms). To bridge the gap between ideal researches and
real-world applications, it is necessary to quantify the
trade-off between performance and efficiency. Tradition-
ally, autonomous-driving perception benchmarks perform
theoffline evaluation, neglecting the inference time de-
lay. To mitigate the problem, we propose the Autonomous-
driving StreAming Perception (ASAP) benchmark, which
is the first benchmark to evaluate the online performance
of vision-centric perception in autonomous driving. On
the basis of the 2Hz annotated nuScenes dataset, we first
propose an annotation-extending pipeline to generate high-
frame-rate labels for the 12Hz raw images. Referring to
the practical deployment, the Streaming Perception Under
constRained-computation (SPUR) evaluation protocol is
further constructed, where the 12Hz inputs are utilized
for streaming evaluation under the constraints of differ-
ent computational resources. In the ASAP benchmark,
comprehensive experiment results reveal that the model
rank alters under different constraints, suggesting that the
model latency and computation budget should be consid-
ered as design choices to optimize the practical deployment.
To facilitate further research, we establish baselines for
camera-based streaming 3D detection, which consistently
enhance the streaming performance across various hard-
ware. ASAP project page: https://github.com/
JeffWang987/ASAP .
| 1. Introduction
Vision-centric perception in autonomous driving has
drawn extensive attention recently, as it can obtain richer
1Corresponding author, xingang.wang@ia.ac.cn
(Figure 1 plot: streaming mAP-S vs. computation platform (Offline, inf TFLOPS; RTX3090, 35.6 TFLOPS; RTX2070S, 9.1 TFLOPS; GTX1060, 4.4 TFLOPS) for FCOS3D, PGD, BEVDet, BEVDet4D, BEVFormer, PETR, BEVDepth, and BEVDepth-Sv (Ours).)
Figure 1. Comparison of streaming performances on the ASAP
benchmark, where the model rank changes under different com-
putational resources. Note that our baseline BEVDepth-Sv (built
upon [30]) consistently improves the streaming performance on
different platforms.
semantic information from images with a desirable budget,
compared to LiDAR-based perception. Notably, the past
years have witnessed the blooming of vision-centric per-
ception in various autonomous driving tasks ( e.g., 3D de-
tection [21,22,29–31,33,34,41,56,61], semantic map con-
struction [28, 40, 43, 45, 64, 72], motion forecasting [1, 20],
and depth estimation [16, 17, 55, 57, 58, 60]).
Despite the growing research interest in vision-centric
approaches, the high latency of these methods still prevents
the practical deployment. Specifically, in the fundamen-
tal task of autonomous-driving perception ( e.g., 3D detec-
tion), the inference time of most camera-based 3D detec-
tors [21,29–31,34,61,70] is longer than 300ms (on the pow-
erful NVIDIA RTX3090), which is ∼6× longer (see Tab. 1)
than the LiDAR-based counterparts [25, 62, 66]. To enable
practical vision-centric perception in autonomous driving, a
quantitative metric is urgently needed to balance the ac-
curacy and latency. However, previous autonomous-driving
benchmarks [3, 4, 7, 12, 13, 23, 38, 46, 49, 59, 67] mainly fo-
Table 1. Comparison between autonomous-driving perception
datasets, where L&C represents LiDAR and camera, # sensors de-
notes number of sensors, Ann. frequency is the annotation fre-
quency, and Model speed denotes the typical inference speed of
the model on RTX3090. For 2D detectors [2, 11, 15, 44, 50], they
achieve ∼45mAP@30FPS on COCO [32]. For LiDAR-based 3D
detectors [25, 62, 66], they achieve ∼70mAP@20FPS on Waymo
[49]. For camera-based 3D detectors [21, 30, 31, 70], they achieve
∼40mAP@3FPS on nuScenes [4], which is 6×∼10× slower than
the above two tasks.
Dataset | Stream. | Modality | #sensors | Task | Ann. frequency | Model speed
KITTI [12] | ✗ | L&C | - | Multi-task | - | -
Argoverse [59] | ✗ | L&C | - | Multi-task | - | -
Waymo [49] | ✗ | L&C | - | Multi-task | - | -
nuScenes [4] | ✗ | L&C | - | Multi-task | - | -
Argoverse-HD [27] | ✓ | C | 1 | 2D det. | 30Hz | ∼30FPS
Waymo [18] | ✓ | L | 1 | L-3D det. | 10Hz | ∼20FPS
nuScenes-H | ✓ | C | 6 | C-3D det. | 12Hz | ∼3FPS
cus on the offline performance metrics (e.g., Average Precision (AP), True Positive (TP)), and the model latency
has not been particularly studied. Although [18, 27] lever-
age the streaming perception paradigm [27] to measure the
accuracy-latency trade-off, these benchmarks are designed
for 2D detection or LiDAR-based 3D detection.
To address the aforementioned problem, this paper
proposes the Autonomous-driving StreAming Perception
(ASAP) benchmark. To the best of our knowledge, this
is the first benchmark to evaluate the online performance
of vision-centric perception in autonomous driving. The
ASAP benchmark is instantiated on the camera-based 3D
detection, which is the core task of vision-centric percep-
tion in autonomous driving. To enable the streaming evalua-
tion of 3D detectors, an annotation-extending pipeline is de-
vised to increase the annotation frame rate of the nuScenes
dataset [4] from 2Hz to 12Hz. The extended dataset,
termed nuScenes-H (High-frame-rate annotation), is uti-
lized to evaluate 3D detectors with 12Hz streaming inputs.
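A simple way to picture the annotation-extending idea is to interpolate each tracked object's box between consecutive 2Hz keyframes to the 12Hz image timestamps; the sketch below assumes linear interpolation of center/size and shortest-path yaw interpolation, which may differ from the paper's actual pipeline.

```python
# Illustrative 2Hz -> 12Hz box interpolation; the paper's pipeline may be more involved.
import numpy as np

def interpolate_box(box_t0, box_t1, t0, t1, t_query):
    """box = {'center': (3,), 'size': (3,), 'yaw': float}; timestamps in seconds."""
    w = (t_query - t0) / (t1 - t0)
    center = (1 - w) * np.asarray(box_t0['center']) + w * np.asarray(box_t1['center'])
    size = (1 - w) * np.asarray(box_t0['size']) + w * np.asarray(box_t1['size'])
    dyaw = (box_t1['yaw'] - box_t0['yaw'] + np.pi) % (2 * np.pi) - np.pi  # shortest angular path
    return {'center': center, 'size': size, 'yaw': box_t0['yaw'] + w * dyaw}

# e.g., for each tracked object, fill the 12Hz timestamps between two 2Hz keyframes:
# for t_q in np.arange(t0, t1, 1.0 / 12):
#     boxes_12hz.setdefault(t_q, []).append(interpolate_box(kf0[obj_id], kf1[obj_id], t0, t1, t_q))
```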
Referring to the practical deployment, we delve into the
problem of ASAP under different computational resources.
Specifically, the Streaming Perception Under const Rained-
computation (SPUR) evaluation protocol is constructed: (1)
To compare the model performance on varying platforms,
multiple GPUs with different computation performances
are assigned for the streaming evaluation. (2) To analyze
the performance fluctuation caused by the sharing of com-
putational resources [9,61,65,70], the streaming evaluation
is performed while the GPU is simultaneously processing
other perception tasks. As depicted in Fig. 1, the streaming
performances of different methods drop steadily as the com-
putation power is increasingly constrained. Besides, the
model rank alters under various hardware constraints, sug-
gesting that the offline performance cannot serve as the de-
terministic criterion for different approaches. Therefore, it
is necessary to introduce our streaming paradigm to vision-centric driving perception. Based on the ASAP bench-
mark, we further establish simple baselines for camera-
based streaming 3D detection, and experiment results show
that forecasting the future state of the object can compen-
sate for the delay in inference time. Notably, the proposed
BEVDepth-Sv improves the streaming performance (mAP-
S) by∼2%,∼3%, and ∼16% on three GPUs (RTX3090,
RTX2070S, GTX1060).
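To illustrate the streaming setting and the forecasting baseline, the sketch below simulates an evaluation in which, at each 12Hz query timestamp, only the most recently finished prediction is available and can optionally be pushed forward with a constant-velocity model to compensate for latency; function names and box formats are assumptions, not the benchmark's code.

```python
# Illustrative streaming-evaluation loop and constant-velocity forecasting baseline.
import numpy as np

def streaming_predictions(frames, detect, forecast=None):
    """frames: list of (timestamp, image); detect(image) -> (boxes, velocities, latency_s).
    At each query time only the most recently *finished* prediction is available."""
    results, available, pending = {}, None, None
    for ts, img in frames:
        if pending is not None and ts >= pending[0]:   # running inference has finished
            available, pending = pending[1:], None
        if pending is None:                            # GPU idle: launch on the current frame
            boxes, vel, latency = detect(img)
            pending = (ts + latency, ts, boxes, vel)
        if available is None:
            results[ts] = []                           # nothing emitted yet
            continue
        t_in, boxes, vel = available
        if forecast is not None:                       # compensate the inference delay
            boxes = forecast(boxes, vel, dt=ts - t_in)
        results[ts] = boxes                            # matched against 12Hz GT at time ts
    return results

def constant_velocity_forecast(boxes, velocities, dt):
    # boxes: (N, 7) [x, y, z, w, l, h, yaw]; velocities: (N, 2) [vx, vy] in BEV
    boxes = np.array(boxes, copy=True)
    boxes[:, 0] += velocities[:, 0] * dt
    boxes[:, 1] += velocities[:, 1] * dt
    return boxes
```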
The main contributions are summarized as follows: (1)
We propose the ASAP benchmark to quantitatively eval-
uate the accuracy-latency trade-off of camera-based per-
ception methods, which takes a step towards the practical
vision-centric perception in autonomous driving. (2) An
annotation-extending pipeline is proposed to annotate the
12Hz raw images of the popular nuScenes dataset, which
facilitates the streaming evaluation on camera-based 3D de-
tection. (3) Simple baselines are established in the ASAP
benchmark, which alleviates the influence of inference de-
lay and consistently improves the streaming performances
across different hardware. (4) The SPUR evaluation proto-
col is constructed to facilitate the evaluation of practical de-
ployment, where we investigate the streaming performance
of the proposed baselines and seven modern camera-based
3D detectors under various computational constraints.
|